========================
ftrace - Function Tracer
========================

Copyright 2008 Red Hat Inc.

:Author:   Steven Rostedt <srostedt@redhat.com>
:License:  The GNU Free Documentation License, Version 1.2
           (dual licensed under the GPL v2)
:Original Reviewers:  Elias Oltmanns, Randy Dunlap, Andrew Morton,
                      John Kacur, and David Teigland.

- Written for: 2.6.28-rc2
- Updated for: 3.10
- Updated for: 4.13 - Copyright 2017 VMware Inc. Steven Rostedt
- Converted to rst format - Changbin Du <changbin.du@intel.com>

Introduction
------------
20
21Ftrace is an internal tracer designed to help out developers and
22designers of systems to find what is going on inside the kernel.
23It can be used for debugging or analyzing latencies and
24performance issues that take place outside of user-space.
25
26Although ftrace is typically considered the function tracer, it
27is really a framework of several assorted tracing utilities.
There's latency tracing to examine what occurs between interrupts
being disabled and enabled, as well as for preemption, and from the
time a task is woken to the time the task is actually scheduled in.
31
One of the most common uses of ftrace is event tracing.
Throughout the kernel are hundreds of static event points that
can be enabled via the tracefs file system to see what is
going on in certain parts of the kernel.
36
37See events.rst for more information.
38
39
Implementation Details
----------------------
42
43See Documentation/trace/ftrace-design.rst for details for arch porters and such.
44
45
The File System
---------------
48
49Ftrace uses the tracefs file system to hold the control files as
50well as the files to display output.
51
52When tracefs is configured into the kernel (which selecting any ftrace
53option will do) the directory /sys/kernel/tracing will be created. To mount
54this directory, you can add to your /etc/fstab file::
55
56 tracefs /sys/kernel/tracing tracefs defaults 0 0
57
58Or you can mount it at run time with::
59
60 mount -t tracefs nodev /sys/kernel/tracing
61
62For quicker access to that directory you may want to make a soft link to
63it::
64
65 ln -s /sys/kernel/tracing /tracing
66
67.. attention::
68
69 Before 4.1, all ftrace tracing control files were within the debugfs
70 file system, which is typically located at /sys/kernel/debug/tracing.
71 For backward compatibility, when mounting the debugfs file system,
72 the tracefs file system will be automatically mounted at:
73
74 /sys/kernel/debug/tracing
75
76 All files located in the tracefs file system will be located in that
77 debugfs file system directory as well.
78
79.. attention::
80
81 Any selected ftrace option will also create the tracefs file system.
82 The rest of the document will assume that you are in the ftrace directory
83 (cd /sys/kernel/tracing) and will only concentrate on the files within that
84 directory and not distract from the content with the extended
85 "/sys/kernel/tracing" path name.
86
87That's it! (assuming that you have ftrace configured into your kernel)
88
89After mounting tracefs you will have access to the control and output files
90of ftrace. Here is a list of some of the key files:
91
92
93 Note: all time values are in microseconds.
94
95 current_tracer:
96
97 This is used to set or display the current tracer
98 that is configured. Changing the current tracer clears
99 the ring buffer content as well as the "snapshot" buffer.
100
101 available_tracers:
102
103 This holds the different types of tracers that
104 have been compiled into the kernel. The
105 tracers listed here can be configured by
106 echoing their name into current_tracer.
107
108 tracing_on:
109
110 This sets or displays whether writing to the trace
111 ring buffer is enabled. Echo 0 into this file to disable
112 the tracer or 1 to enable it. Note, this only disables
113 writing to the ring buffer, the tracing overhead may
114 still be occurring.
115
116 The kernel function tracing_off() can be used within the
117 kernel to disable writing to the ring buffer, which will
118 set this file to "0". User space can re-enable tracing by
119 echoing "1" into the file.
120
        Note, the function and event trigger "traceoff" will also
        set this file to zero and stop tracing, which can then be
        re-enabled by user space using this file.
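
        As an example, one way to capture only a short window of activity
        is to toggle this file around the workload of interest (shown here
        with the function tracer, but any tracer behaves the same)::

            # echo 0 > tracing_on
            # echo function > current_tracer
            # echo 1 > tracing_on
            # sleep 1
            # echo 0 > tracing_on
            # cat trace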
124
125 trace:
126
127 This file holds the output of the trace in a human
128 readable format (described below). Opening this file for
129 writing with the O_TRUNC flag clears the ring buffer content.
130 Note, this file is not a consumer. If tracing is off
131 (no tracer running, or tracing_on is zero), it will produce
132 the same output each time it is read. When tracing is on,
133 it may produce inconsistent results as it tries to read
134 the entire buffer without consuming it.
135
136 trace_pipe:
137
138 The output is the same as the "trace" file but this
139 file is meant to be streamed with live tracing.
140 Reads from this file will block until new data is
141 retrieved. Unlike the "trace" file, this file is a
142 consumer. This means reading from this file causes
143 sequential reads to display more current data. Once
144 data is read from this file, it is consumed, and
145 will not be read again with a sequential read. The
146 "trace" file is static, and if the tracer is not
147 adding more data, it will display the same
148 information every time it is read.
149
150 trace_options:
151
152 This file lets the user control the amount of data
153 that is displayed in one of the above output
154 files. Options also exist to modify how a tracer
155 or events work (stack traces, timestamps, etc).
156
157 options:
158
159 This is a directory that has a file for every available
160 trace option (also in trace_options). Options may also be set
161 or cleared by writing a "1" or "0" respectively into the
162 corresponding file with the option name.
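
        For example, assuming the "sym-offset" option exists on your kernel
        (see the trace_options listing below), these two commands are
        equivalent::

            # echo sym-offset > trace_options
            # echo 1 > options/sym-offset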
163
164 tracing_max_latency:
165
166 Some of the tracers record the max latency.
167 For example, the maximum time that interrupts are disabled.
168 The maximum time is saved in this file. The max trace will also be
169 stored, and displayed by "trace". A new max trace will only be
170 recorded if the latency is greater than the value in this file
171 (in microseconds).
172
        By echoing a time into this file, no latency will be recorded
        unless it is greater than that time.
175
176 tracing_thresh:
177
178 Some latency tracers will record a trace whenever the
179 latency is greater than the number in this file.
180 Only active when the file contains a number greater than 0.
181 (in microseconds)
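
        For example, with a latency tracer such as irqsoff (described below),
        the maximum can be reset and a reporting threshold set like this::

            # echo irqsoff > current_tracer
            # echo 0 > tracing_max_latency
            # echo 100 > tracing_thresh

        With a non-zero tracing_thresh, latency tracers that honor it record
        a trace whenever the latency exceeds 100 microseconds, instead of
        only when a new maximum is hit.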
182
183 buffer_percent:
184
185 This is the watermark for how much the ring buffer needs to be filled
186 before a waiter is woken up. That is, if an application calls a
187 blocking read syscall on one of the per_cpu trace_pipe_raw files, it
188 will block until the given amount of data specified by buffer_percent
189 is in the ring buffer before it wakes the reader up. This also
190 controls how the splice system calls are blocked on this file::
191
192 0 - means to wake up as soon as there is any data in the ring buffer.
193 50 - means to wake up when roughly half of the ring buffer sub-buffers
194 are full.
195 100 - means to block until the ring buffer is totally full and is
196 about to start overwriting the older data.
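
        For example, to have a reader of one of the per_cpu trace_pipe_raw
        files block until roughly half of that CPU's buffer has filled
        (a sketch; the output file name is arbitrary)::

            # echo 50 > buffer_percent
            # cat per_cpu/cpu0/trace_pipe_raw > /tmp/cpu0.raw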
197
198 buffer_size_kb:
199
200 This sets or displays the number of kilobytes each CPU
201 buffer holds. By default, the trace buffers are the same size
202 for each CPU. The displayed number is the size of the
203 CPU buffer and not total size of all buffers. The
204 trace buffers are allocated in pages (blocks of memory
205 that the kernel uses for allocation, usually 4 KB in size).
206 A few extra pages may be allocated to accommodate buffer management
207 meta-data. If the last page allocated has room for more bytes
208 than requested, the rest of the page will be used,
209 making the actual allocation bigger than requested or shown.
210 ( Note, the size may not be a multiple of the page size
211 due to buffer management meta-data. )
212
213 Buffer sizes for individual CPUs may vary
214 (see "per_cpu/cpu0/buffer_size_kb" below), and if they do
215 this file will show "X".
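
        For example, to give each CPU a 10 MB buffer, or to resize only
        one CPU's buffer, and then check the combined size::

            # echo 10240 > buffer_size_kb
            # echo 4096 > per_cpu/cpu0/buffer_size_kb
            # cat buffer_total_size_kb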
216
217 buffer_total_size_kb:
218
219 This displays the total combined size of all the trace buffers.
220
221 buffer_subbuf_size_kb:
222
223 This sets or displays the sub buffer size. The ring buffer is broken up
224 into several same size "sub buffers". An event can not be bigger than
225 the size of the sub buffer. Normally, the sub buffer is the size of the
226 architecture's page (4K on x86). The sub buffer also contains meta data
227 at the start which also limits the size of an event. That means when
228 the sub buffer is a page size, no event can be larger than the page
229 size minus the sub buffer meta data.
230
231 Note, the buffer_subbuf_size_kb is a way for the user to specify the
232 minimum size of the subbuffer. The kernel may make it bigger due to the
233 implementation details, or simply fail the operation if the kernel can
234 not handle the request.
235
236 Changing the sub buffer size allows for events to be larger than the
237 page size.
238
239 Note: When changing the sub-buffer size, tracing is stopped and any
240 data in the ring buffer and the snapshot buffer will be discarded.
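
        For example, to request sub buffers of at least 16 KB (the kernel
        may round this up) and then see what was actually used::

            # echo 16 > buffer_subbuf_size_kb
            # cat buffer_subbuf_size_kb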
241
242 free_buffer:
243
        If a process is performing tracing, and the ring buffer should be
        shrunk ("freed") when the process is finished, even if it were to be
        killed by a signal, this file can be used for that purpose. On close
        of this file, the ring buffer will be resized to its minimum size.
        Have the tracing process also open this file; when the process
        exits, its file descriptor for this file will be closed, and in doing
        so, the ring buffer will be "freed".
251
252 It may also stop tracing if disable_on_free option is set.
253
254 tracing_cpumask:
255
256 This is a mask that lets the user only trace on specified CPUs.
257 The format is a hex string representing the CPUs.
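
        For example, to limit tracing to CPUs 0 and 1 (mask 0x3), and later
        restore the first four CPUs::

            # echo 3 > tracing_cpumask
            # echo f > tracing_cpumask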
258
259 set_ftrace_filter:
260
261 When dynamic ftrace is configured in (see the
262 section below "dynamic ftrace"), the code is dynamically
263 modified (code text rewrite) to disable calling of the
264 function profiler (mcount). This lets tracing be configured
265 in with practically no overhead in performance. This also
266 has a side effect of enabling or disabling specific functions
267 to be traced. Echoing names of functions into this file
268 will limit the trace to only those functions.
269 This influences the tracers "function" and "function_graph"
270 and thus also function profiling (see "function_profile_enabled").
271
272 The functions listed in "available_filter_functions" are what
273 can be written into this file.
274
275 This interface also allows for commands to be used. See the
276 "Filter commands" section for more details.
277
        As a speed up, since processing strings can be quite expensive
        and requires a check of all functions registered to tracing, an
        index can be written into this file instead. A number (starting
        with "1") written into this file will select the function at the
        corresponding line position of the "available_filter_functions" file.
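
        For example (the function names here are only illustrations; pick
        names listed in available_filter_functions)::

            # echo schedule > set_ftrace_filter
            # echo kfree >> set_ftrace_filter

        Writing an index instead selects a function by its line position in
        available_filter_functions, and writing an empty string clears the
        filter::

            # echo 1 > set_ftrace_filter
            # echo > set_ftrace_filter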
283
284 set_ftrace_notrace:
285
286 This has an effect opposite to that of
287 set_ftrace_filter. Any function that is added here will not
288 be traced. If a function exists in both set_ftrace_filter
289 and set_ftrace_notrace, the function will _not_ be traced.
290
291 set_ftrace_pid:
292
293 Have the function tracer only trace the threads whose PID are
294 listed in this file.
295
296 If the "function-fork" option is set, then when a task whose
297 PID is listed in this file forks, the child's PID will
298 automatically be added to this file, and the child will be
299 traced by the function tracer as well. This option will also
300 cause PIDs of tasks that exit to be removed from the file.
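
        For example, to trace only the current shell and (with the
        "function-fork" option) anything it spawns::

            # echo $$ > set_ftrace_pid
            # echo 1 > options/function-fork
            # echo function > current_tracer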
301
302 set_ftrace_notrace_pid:
303
304 Have the function tracer ignore threads whose PID are listed in
305 this file.
306
307 If the "function-fork" option is set, then when a task whose
308 PID is listed in this file forks, the child's PID will
309 automatically be added to this file, and the child will not be
310 traced by the function tracer as well. This option will also
311 cause PIDs of tasks that exit to be removed from the file.
312
313 If a PID is in both this file and "set_ftrace_pid", then this
314 file takes precedence, and the thread will not be traced.
315
316 set_event_pid:
317
318 Have the events only trace a task with a PID listed in this file.
        Note, sched_switch and sched_wakeup will also trace events
        listed in this file.
321
322 To have the PIDs of children of tasks with their PID in this file
323 added on fork, enable the "event-fork" option. That option will also
324 cause the PIDs of tasks to be removed from this file when the task
325 exits.
326
327 set_event_notrace_pid:
328
329 Have the events not trace a task with a PID listed in this file.
330 Note, sched_switch and sched_wakeup will trace threads not listed
331 in this file, even if a thread's PID is in the file if the
332 sched_switch or sched_wakeup events also trace a thread that should
333 be traced.
334
335 To have the PIDs of children of tasks with their PID in this file
336 added on fork, enable the "event-fork" option. That option will also
337 cause the PIDs of tasks to be removed from this file when the task
338 exits.
339
340 set_graph_function:
341
342 Functions listed in this file will cause the function graph
343 tracer to only trace these functions and the functions that
344 they call. (See the section "dynamic ftrace" for more details).
345 Note, set_ftrace_filter and set_ftrace_notrace still affects
346 what functions are being traced.
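
        For example, to graph only vfs_read() and everything it calls
        (assuming vfs_read is listed in available_filter_functions)::

            # echo vfs_read > set_graph_function
            # echo function_graph > current_tracer
            # cat trace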
347
348 set_graph_notrace:
349
350 Similar to set_graph_function, but will disable function graph
351 tracing when the function is hit until it exits the function.
352 This makes it possible to ignore tracing functions that are called
353 by a specific function.
354
355 available_filter_functions:
356
357 This lists the functions that ftrace has processed and can trace.
358 These are the function names that you can pass to
359 "set_ftrace_filter", "set_ftrace_notrace",
360 "set_graph_function", or "set_graph_notrace".
361 (See the section "dynamic ftrace" below for more details.)
362
363 available_filter_functions_addrs:
364
365 Similar to available_filter_functions, but with address displayed
366 for each function. The displayed address is the patch-site address
367 and can differ from /proc/kallsyms address.
368
369 dyn_ftrace_total_info:
370
371 This file is for debugging purposes. The number of functions that
372 have been converted to nops and are available to be traced.
373
374 enabled_functions:
375
376 This file is more for debugging ftrace, but can also be useful
377 in seeing if any function has a callback attached to it.
378 Not only does the trace infrastructure use ftrace function
379 trace utility, but other subsystems might too. This file
380 displays all functions that have a callback attached to them
381 as well as the number of callbacks that have been attached.
382 Note, a callback may also call multiple functions which will
383 not be listed in this count.
384
385 If the callback registered to be traced by a function with
386 the "save regs" attribute (thus even more overhead), a 'R'
387 will be displayed on the same line as the function that
388 is returning registers.
389
390 If the callback registered to be traced by a function with
391 the "ip modify" attribute (thus the regs->ip can be changed),
392 an 'I' will be displayed on the same line as the function that
393 can be overridden.
394
395 If a non ftrace trampoline is attached (BPF) a 'D' will be displayed.
396 Note, normal ftrace trampolines can also be attached, but only one
397 "direct" trampoline can be attached to a given function at a time.
398
399 Some architectures can not call direct trampolines, but instead have
400 the ftrace ops function located above the function entry point. In
401 such cases an 'O' will be displayed.
402
        If a function had either the "ip modify" or a "direct" call attached to
        it in the past, an 'M' will be shown. This flag is never cleared. It is
        used to know if a function was ever modified by the ftrace infrastructure,
        and can be used for debugging.
407
408 If the architecture supports it, it will also show what callback
409 is being directly called by the function. If the count is greater
410 than 1 it most likely will be ftrace_ops_list_func().
411
412 If the callback of a function jumps to a trampoline that is
413 specific to the callback and which is not the standard trampoline,
414 its address will be printed as well as the function that the
415 trampoline calls.
416
417 touched_functions:
418
        This file contains all the functions that ever had a function callback
        attached to them via the ftrace infrastructure. It has the same format
        as enabled_functions but shows all functions that have ever been
        traced.

        To see any function that has ever been modified by "ip modify" or a
        direct trampoline, one can perform the following command::

            grep ' M ' /sys/kernel/tracing/touched_functions
428
429 function_profile_enabled:
430
        When set, it will enable all functions with either the function
        tracer, or if configured, the function graph tracer. It will
        keep a histogram of the number of times functions were called
        and, if the function graph tracer is configured, it will also
        keep track of the time spent in those functions. The histogram
        content can be displayed in the files:

        trace_stat/function<cpu> ( function0, function1, etc).
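
        For example, to profile a short workload and look at the per-function
        statistics gathered on CPU 0::

            # echo 1 > function_profile_enabled
            # sleep 1
            # echo 0 > function_profile_enabled
            # head trace_stat/function0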
439
440 trace_stat:
441
442 A directory that holds different tracing stats.
443
444 kprobe_events:
445
446 Enable dynamic trace points. See kprobetrace.rst.
447
448 kprobe_profile:
449
450 Dynamic trace points stats. See kprobetrace.rst.
451
452 max_graph_depth:
453
454 Used with the function graph tracer. This is the max depth
455 it will trace into a function. Setting this to a value of
456 one will show only the first kernel function that is called
457 from user space.
458
459 printk_formats:
460
461 This is for tools that read the raw format files. If an event in
462 the ring buffer references a string, only a pointer to the string
463 is recorded into the buffer and not the string itself. This prevents
464 tools from knowing what that string was. This file displays the string
465 and address for the string allowing tools to map the pointers to what
466 the strings were.
467
468 saved_cmdlines:
469
470 Only the pid of the task is recorded in a trace event unless
471 the event specifically saves the task comm as well. Ftrace
472 makes a cache of pid mappings to comms to try to display
473 comms for events. If a pid for a comm is not listed, then
474 "<...>" is displayed in the output.
475
476 If the option "record-cmd" is set to "0", then comms of tasks
477 will not be saved during recording. By default, it is enabled.
478
479 saved_cmdlines_size:
480
481 By default, 128 comms are saved (see "saved_cmdlines" above). To
482 increase or decrease the amount of comms that are cached, echo
483 the number of comms to cache into this file.
484
485 saved_tgids:
486
487 If the option "record-tgid" is set, on each scheduling context switch
488 the Task Group ID of a task is saved in a table mapping the PID of
489 the thread to its TGID. By default, the "record-tgid" option is
490 disabled.
491
492 snapshot:
493
494 This displays the "snapshot" buffer and also lets the user
495 take a snapshot of the current running trace.
496 See the "Snapshot" section below for more details.
497
498 stack_max_size:
499
500 When the stack tracer is activated, this will display the
501 maximum stack size it has encountered.
502 See the "Stack Trace" section below.
503
504 stack_trace:
505
506 This displays the stack back trace of the largest stack
507 that was encountered when the stack tracer is activated.
508 See the "Stack Trace" section below.
509
510 stack_trace_filter:
511
512 This is similar to "set_ftrace_filter" but it limits what
513 functions the stack tracer will check.
514
515 trace_clock:
516
517 Whenever an event is recorded into the ring buffer, a
518 "timestamp" is added. This stamp comes from a specified
519 clock. By default, ftrace uses the "local" clock. This
520 clock is very fast and strictly per cpu, but on some
521 systems it may not be monotonic with respect to other
522 CPUs. In other words, the local clocks may not be in sync
523 with local clocks on other CPUs.
524
525 Usual clocks for tracing::
526
527 # cat trace_clock
528 [local] global counter x86-tsc
529
530 The clock with the square brackets around it is the one in effect.
531
532 local:
533 Default clock, but may not be in sync across CPUs
534
535 global:
536 This clock is in sync with all CPUs but may
537 be a bit slower than the local clock.
538
539 counter:
540 This is not a clock at all, but literally an atomic
541 counter. It counts up one by one, but is in sync
542 with all CPUs. This is useful when you need to
543 know exactly the order events occurred with respect to
544 each other on different CPUs.
545
546 uptime:
547 This uses the jiffies counter and the time stamp
548 is relative to the time since boot up.
549
550 perf:
551 This makes ftrace use the same clock that perf uses.
552 Eventually perf will be able to read ftrace buffers
553 and this will help out in interleaving the data.
554
555 x86-tsc:
556 Architectures may define their own clocks. For
557 example, x86 uses its own TSC cycle clock here.
558
559 ppc-tb:
560 This uses the powerpc timebase register value.
561 This is in sync across CPUs and can also be used
562 to correlate events across hypervisor/guest if
563 tb_offset is known.
564
565 mono:
566 This uses the fast monotonic clock (CLOCK_MONOTONIC)
567 which is monotonic and is subject to NTP rate adjustments.
568
569 mono_raw:
570 This is the raw monotonic clock (CLOCK_MONOTONIC_RAW)
571 which is monotonic but is not subject to any rate adjustments
572 and ticks at the same rate as the hardware clocksource.
573
574 boot:
575 This is the boot clock (CLOCK_BOOTTIME) and is based on the
576 fast monotonic clock, but also accounts for time spent in
577 suspend. Since the clock access is designed for use in
578 tracing in the suspend path, some side effects are possible
579 if clock is accessed after the suspend time is accounted before
580 the fast mono clock is updated. In this case, the clock update
581 appears to happen slightly sooner than it normally would have.
582 Also on 32-bit systems, it's possible that the 64-bit boot offset
583 sees a partial update. These effects are rare and post
584 processing should be able to handle them. See comments in the
585 ktime_get_boot_fast_ns() function for more information.
586
587 tai:
588 This is the tai clock (CLOCK_TAI) and is derived from the wall-
589 clock time. However, this clock does not experience
590 discontinuities and backwards jumps caused by NTP inserting leap
591 seconds. Since the clock access is designed for use in tracing,
592 side effects are possible. The clock access may yield wrong
593 readouts in case the internal TAI offset is updated e.g., caused
594 by setting the system time or using adjtimex() with an offset.
595 These effects are rare and post processing should be able to
596 handle them. See comments in the ktime_get_tai_fast_ns()
597 function for more information.
598
599 To set a clock, simply echo the clock name into this file::
600
601 # echo global > trace_clock
602
603 Setting a clock clears the ring buffer content as well as the
604 "snapshot" buffer.
605
606 trace_marker:
607
608 This is a very useful file for synchronizing user space
609 with events happening in the kernel. Writing strings into
610 this file will be written into the ftrace buffer.
611
612 It is useful in applications to open this file at the start
613 of the application and just reference the file descriptor
614 for the file::
615
616 void trace_write(const char *fmt, ...)
617 {
618 va_list ap;
619 char buf[256];
620 int n;
621
622 if (trace_fd < 0)
623 return;
624
625 va_start(ap, fmt);
626 n = vsnprintf(buf, 256, fmt, ap);
627 va_end(ap);
628
629 write(trace_fd, buf, n);
630 }
631
632 start::
633
634 trace_fd = open("trace_marker", O_WRONLY);
635
636 Note: Writing into the trace_marker file can also initiate triggers
637 that are written into /sys/kernel/tracing/events/ftrace/print/trigger
638 See "Event triggers" in Documentation/trace/events.rst and an
639 example in Documentation/trace/histogram.rst (Section 3.)
640
641 trace_marker_raw:
642
643 This is similar to trace_marker above, but is meant for binary data
644 to be written to it, where a tool can be used to parse the data
645 from trace_pipe_raw.
646
647 uprobe_events:
648
649 Add dynamic tracepoints in programs.
650 See uprobetracer.rst
651
652 uprobe_profile:
653
        Uprobe statistics. See uprobetracer.rst
655
656 instances:
657
658 This is a way to make multiple trace buffers where different
659 events can be recorded in different buffers.
660 See "Instances" section below.
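
        For example, a new buffer is created by making a directory here and
        removed again with rmdir (covered in the "Instances" section below)::

            # mkdir instances/foo
            # echo 1 > instances/foo/events/sched/sched_switch/enable
            # cat instances/foo/trace
            # rmdir instances/foo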
661
662 events:
663
664 This is the trace event directory. It holds event tracepoints
665 (also known as static tracepoints) that have been compiled
666 into the kernel. It shows what event tracepoints exist
667 and how they are grouped by system. There are "enable"
668 files at various levels that can enable the tracepoints
669 when a "1" is written to them.
670
671 See events.rst for more information.
672
673 set_event:
674
        Echoing an event name into this file will enable that event.
676
677 See events.rst for more information.
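
        For example (sched:sched_switch and sched:sched_wakeup are just two
        of the events listed in available_events; writing an empty string
        clears all enabled events)::

            # echo sched:sched_switch > set_event
            # echo sched:sched_wakeup >> set_event
            # echo > set_event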
678
679 available_events:
680
681 A list of events that can be enabled in tracing.
682
683 See events.rst for more information.
684
685 timestamp_mode:
686
687 Certain tracers may change the timestamp mode used when
688 logging trace events into the event buffer. Events with
689 different modes can coexist within a buffer but the mode in
690 effect when an event is logged determines which timestamp mode
691 is used for that event. The default timestamp mode is
692 'delta'.
693
694 Usual timestamp modes for tracing:
695
696 # cat timestamp_mode
697 [delta] absolute
698
699 The timestamp mode with the square brackets around it is the
700 one in effect.
701
702 delta: Default timestamp mode - timestamp is a delta against
703 a per-buffer timestamp.
704
705 absolute: The timestamp is a full timestamp, not a delta
706 against some other value. As such it takes up more
707 space and is less efficient.
708
709 hwlat_detector:
710
711 Directory for the Hardware Latency Detector.
712 See "Hardware Latency Detector" section below.
713
714 per_cpu:
715
716 This is a directory that contains the trace per_cpu information.
717
718 per_cpu/cpu0/buffer_size_kb:
719
720 The ftrace buffer is defined per_cpu. That is, there's a separate
721 buffer for each CPU to allow writes to be done atomically,
722 and free from cache bouncing. These buffers may have different
723 size buffers. This file is similar to the buffer_size_kb
724 file, but it only displays or sets the buffer size for the
725 specific CPU. (here cpu0).
726
727 per_cpu/cpu0/trace:
728
729 This is similar to the "trace" file, but it will only display
730 the data specific for the CPU. If written to, it only clears
731 the specific CPU buffer.
732
733 per_cpu/cpu0/trace_pipe
734
735 This is similar to the "trace_pipe" file, and is a consuming
736 read, but it will only display (and consume) the data specific
737 for the CPU.
738
739 per_cpu/cpu0/trace_pipe_raw
740
741 For tools that can parse the ftrace ring buffer binary format,
742 the trace_pipe_raw file can be used to extract the data
743 from the ring buffer directly. With the use of the splice()
744 system call, the buffer data can be quickly transferred to
745 a file or to the network where a server is collecting the
746 data.
747
748 Like trace_pipe, this is a consuming reader, where multiple
749 reads will always produce different data.
750
751 per_cpu/cpu0/snapshot:
752
753 This is similar to the main "snapshot" file, but will only
754 snapshot the current CPU (if supported). It only displays
755 the content of the snapshot for a given CPU, and if
756 written to, only clears this CPU buffer.
757
758 per_cpu/cpu0/snapshot_raw:
759
760 Similar to the trace_pipe_raw, but will read the binary format
761 from the snapshot buffer for the given CPU.
762
763 per_cpu/cpu0/stats:
764
765 This displays certain stats about the ring buffer:
766
767 entries:
768 The number of events that are still in the buffer.
769
770 overrun:
771 The number of lost events due to overwriting when
772 the buffer was full.
773
774 commit overrun:
775 Should always be zero.
776 This gets set if so many events happened within a nested
777 event (ring buffer is re-entrant), that it fills the
778 buffer and starts dropping events.
779
780 bytes:
781 Bytes actually read (not overwritten).
782
783 oldest event ts:
784 The oldest timestamp in the buffer
785
786 now ts:
787 The current timestamp
788
789 dropped events:
790 Events lost due to overwrite option being off.
791
792 read events:
793 The number of events read.
794
The Tracers
-----------
797
798Here is the list of current tracers that may be configured.
799
800 "function"
801
802 Function call tracer to trace all kernel functions.
803
804 "function_graph"
805
806 Similar to the function tracer except that the
807 function tracer probes the functions on their entry
808 whereas the function graph tracer traces on both entry
809 and exit of the functions. It then provides the ability
810 to draw a graph of function calls similar to C code
811 source.
812
        Note that the function graph tracer calculates the timings of when
        the function starts and returns internally, and does so for each
        instance. If two instances run the function graph tracer and trace
        the same functions, the reported timings may be slightly off, as
        each reads the timestamp separately and not at the same time.
818
819 "blk"
820
821 The block tracer. The tracer used by the blktrace user
822 application.
823
824 "hwlat"
825
826 The Hardware Latency tracer is used to detect if the hardware
827 produces any latency. See "Hardware Latency Detector" section
828 below.
829
830 "irqsoff"
831
832 Traces the areas that disable interrupts and saves
833 the trace with the longest max latency.
834 See tracing_max_latency. When a new max is recorded,
835 it replaces the old trace. It is best to view this
836 trace with the latency-format option enabled, which
837 happens automatically when the tracer is selected.
838
839 "preemptoff"
840
841 Similar to irqsoff but traces and records the amount of
842 time for which preemption is disabled.
843
844 "preemptirqsoff"
845
846 Similar to irqsoff and preemptoff, but traces and
847 records the largest time for which irqs and/or preemption
848 is disabled.
849
850 "wakeup"
851
852 Traces and records the max latency that it takes for
853 the highest priority task to get scheduled after
854 it has been woken up.
855 Traces all tasks as an average developer would expect.
856
857 "wakeup_rt"
858
859 Traces and records the max latency that it takes for just
860 RT tasks (as the current "wakeup" does). This is useful
861 for those interested in wake up timings of RT tasks.
862
863 "wakeup_dl"
864
865 Traces and records the max latency that it takes for
866 a SCHED_DEADLINE task to be woken (as the "wakeup" and
867 "wakeup_rt" does).
868
869 "mmiotrace"
870
        A special tracer that is used to trace binary modules.
        It will trace all the calls that a module makes to the
        hardware, including everything it writes to and reads
        from the I/O.
875
876 "branch"
877
        This tracer can be configured into the kernel to trace the
        likely/unlikely annotations within the kernel. It will record
        when a likely or unlikely branch is hit and whether its
        prediction was correct.
882
883 "nop"
884
885 This is the "trace nothing" tracer. To remove all
886 tracers from tracing simply echo "nop" into
887 current_tracer.
888
Error conditions
----------------
891
892 For most ftrace commands, failure modes are obvious and communicated
893 using standard return codes.
894
895 For other more involved commands, extended error information may be
896 available via the tracing/error_log file. For the commands that
897 support it, reading the tracing/error_log file after an error will
898 display more detailed information about what went wrong, if
899 information is available. The tracing/error_log file is a circular
900 error log displaying a small number (currently, 8) of ftrace errors
901 for the last (8) failed commands.
902
903 The extended error information and usage takes the form shown in
904 this example::
905
906 # echo xxx > /sys/kernel/tracing/events/sched/sched_wakeup/trigger
907 echo: write error: Invalid argument
908
909 # cat /sys/kernel/tracing/error_log
910 [ 5348.887237] location: error: Couldn't yyy: zzz
911 Command: xxx
912 ^
913 [ 7517.023364] location: error: Bad rrr: sss
914 Command: ppp qqq
915 ^
916
917 To clear the error log, echo the empty string into it::
918
919 # echo > /sys/kernel/tracing/error_log
920
Examples of using the tracer
----------------------------
923
924Here are typical examples of using the tracers when controlling
925them only with the tracefs interface (without using any
926user-land utilities).
927
Output format:
--------------
930
931Here is an example of the output format of the file "trace"::
932
933 # tracer: function
934 #
935 # entries-in-buffer/entries-written: 140080/250280 #P:4
936 #
937 # _-----=> irqs-off
938 # / _----=> need-resched
939 # | / _---=> hardirq/softirq
940 # || / _--=> preempt-depth
941 # ||| / delay
942 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
943 # | | | |||| | |
944 bash-1977 [000] .... 17284.993652: sys_close <-system_call_fastpath
945 bash-1977 [000] .... 17284.993653: __close_fd <-sys_close
946 bash-1977 [000] .... 17284.993653: _raw_spin_lock <-__close_fd
947 sshd-1974 [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
948 bash-1977 [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
949 bash-1977 [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
950 bash-1977 [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
951 bash-1977 [000] .... 17284.993657: filp_close <-__close_fd
952 bash-1977 [000] .... 17284.993657: dnotify_flush <-filp_close
953 sshd-1974 [003] .... 17284.993658: sys_select <-system_call_fastpath
954 ....
955
956A header is printed with the tracer name that is represented by
957the trace. In this case the tracer is "function". Then it shows the
958number of events in the buffer as well as the total number of entries
959that were written. The difference is the number of entries that were
960lost due to the buffer filling up (250280 - 140080 = 110200 events
961lost).
962
963The header explains the content of the events. Task name "bash", the task
964PID "1977", the CPU that it was running on "000", the latency format
965(explained below), the timestamp in <secs>.<usecs> format, the
966function name that was traced "sys_close" and the parent function that
967called this function "system_call_fastpath". The timestamp is the time
968at which the function was entered.
969
Latency trace format
--------------------
972
973When the latency-format option is enabled or when one of the latency
974tracers is set, the trace file gives somewhat more information to see
975why a latency happened. Here is a typical trace::
976
977 # tracer: irqsoff
978 #
979 # irqsoff latency trace v1.1.5 on 3.8.0-test+
980 # --------------------------------------------------------------------
981 # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
982 # -----------------
983 # | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
984 # -----------------
985 # => started at: __lock_task_sighand
986 # => ended at: _raw_spin_unlock_irqrestore
987 #
988 #
989 # _------=> CPU#
990 # / _-----=> irqs-off
991 # | / _----=> need-resched
992 # || / _---=> hardirq/softirq
993 # ||| / _--=> preempt-depth
994 # |||| / delay
995 # cmd pid ||||| time | caller
996 # \ / ||||| \ | /
997 ps-6143 2d... 0us!: trace_hardirqs_off <-__lock_task_sighand
998 ps-6143 2d..1 259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
999 ps-6143 2d..1 263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
1000 ps-6143 2d..1 306us : <stack trace>
1001 => trace_hardirqs_on_caller
1002 => trace_hardirqs_on
1003 => _raw_spin_unlock_irqrestore
1004 => do_task_stat
1005 => proc_tgid_stat
1006 => proc_single_show
1007 => seq_read
1008 => vfs_read
1009 => sys_read
1010 => system_call_fastpath
1011
1012
This shows that the current tracer is "irqsoff", which traces the time
for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel this was executed on
(3.8). Then it displays the max latency in microseconds (259 us), the
number of trace entries displayed, and the total number of entries
(both are four: #4/4). VP, KP, SP, and HP are always zero and are
reserved for later use. #P is the number of online CPUs (#P:4).
1020
1021The task is the process that was running when the latency
1022occurred. (ps pid: 6143).
1023
1024The start and stop (the functions in which the interrupts were
1025disabled and enabled respectively) that caused the latencies:
1026
1027 - __lock_task_sighand is where the interrupts were disabled.
1028 - _raw_spin_unlock_irqrestore is where they were enabled again.
1029
1030The next lines after the header are the trace itself. The header
1031explains which is which.
1032
1033 cmd: The name of the process in the trace.
1034
1035 pid: The PID of that process.
1036
1037 CPU#: The CPU which the process was running on.
1038
1039 irqs-off: 'd' interrupts are disabled. '.' otherwise.
1040
1041 need-resched:
1042 - 'B' all, TIF_NEED_RESCHED, PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY is set,
1043 - 'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED is set,
1044 - 'n' only TIF_NEED_RESCHED is set,
1045 - 'p' only PREEMPT_NEED_RESCHED is set,
1046 - 'L' both PREEMPT_NEED_RESCHED and TIF_RESCHED_LAZY is set,
1047 - 'b' both TIF_NEED_RESCHED and TIF_RESCHED_LAZY is set,
1048 - 'l' only TIF_RESCHED_LAZY is set
1049 - '.' otherwise.
1050
1051 hardirq/softirq:
1052 - 'Z' - NMI occurred inside a hardirq
1053 - 'z' - NMI is running
1054 - 'H' - hard irq occurred inside a softirq.
1055 - 'h' - hard irq is running
1056 - 's' - soft irq is running
1057 - '.' - normal context.
1058
1059 preempt-depth: The level of preempt_disabled
1060
1061The above is mostly meaningful for kernel developers.
1062
1063 time:
1064 When the latency-format option is enabled, the trace file
1065 output includes a timestamp relative to the start of the
1066 trace. This differs from the output when latency-format
1067 is disabled, which includes an absolute timestamp.
1068
1069 delay:
    This is just to help catch your eye a bit better (and needs to
    be fixed to be only relative to the same CPU). The marks are
    determined by the difference between the current trace and the
    next trace.
1074
1075 - '$' - greater than 1 second
1076 - '@' - greater than 100 millisecond
1077 - '*' - greater than 10 millisecond
1078 - '#' - greater than 1000 microsecond
1079 - '!' - greater than 100 microsecond
1080 - '+' - greater than 10 microsecond
1081 - ' ' - less than or equal to 10 microsecond.
1082
1083 The rest is the same as the 'trace' file.
1084
1085 Note, the latency tracers will usually end with a back trace
1086 to easily find where the latency occurred.
1087
trace_options
-------------
1090
1091The trace_options file (or the options directory) is used to control
1092what gets printed in the trace output, or manipulate the tracers.
1093To see what is available, simply cat the file::
1094
1095 cat trace_options
1096 print-parent
1097 nosym-offset
1098 nosym-addr
1099 noverbose
1100 noraw
1101 nohex
1102 nobin
1103 noblock
1104 nofields
1105 trace_printk
1106 annotate
1107 nouserstacktrace
1108 nosym-userobj
1109 noprintk-msg-only
1110 context-info
1111 nolatency-format
1112 record-cmd
1113 norecord-tgid
1114 overwrite
1115 nodisable_on_free
1116 irq-info
1117 markers
1118 noevent-fork
1119 function-trace
1120 nofunction-fork
1121 nodisplay-graph
1122 nostacktrace
1123 nobranch
1124
1125To disable one of the options, echo in the option prepended with
1126"no"::
1127
1128 echo noprint-parent > trace_options
1129
1130To enable an option, leave off the "no"::
1131
1132 echo sym-offset > trace_options
1133
1134Here are the available options:
1135
1136 print-parent
1137 On function traces, display the calling (parent)
1138 function as well as the function being traced.
1139 ::
1140
1141 print-parent:
1142 bash-4000 [01] 1477.606694: simple_strtoul <-kstrtoul
1143
1144 noprint-parent:
1145 bash-4000 [01] 1477.606694: simple_strtoul
1146
1147
1148 sym-offset
1149 Display not only the function name, but also the
1150 offset in the function. For example, instead of
1151 seeing just "ktime_get", you will see
1152 "ktime_get+0xb/0x20".
1153 ::
1154
1155 sym-offset:
1156 bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
1157
1158 sym-addr
1159 This will also display the function address as well
1160 as the function name.
1161 ::
1162
1163 sym-addr:
1164 bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
1165
1166 verbose
1167 This deals with the trace file when the
1168 latency-format option is enabled.
1169 ::
1170
1171 bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
1172 (+0.000ms): simple_strtoul (kstrtoul)
1173
1174 raw
1175 This will display raw numbers. This option is best for
1176 use with user applications that can translate the raw
1177 numbers better than having it done in the kernel.
1178
1179 hex
1180 Similar to raw, but the numbers will be in a hexadecimal format.
1181
1182 bin
1183 This will print out the formats in raw binary.
1184
1185 block
1186 When set, reading trace_pipe will not block when polled.
1187
1188 fields
1189 Print the fields as described by their types. This is a better
1190 option than using hex, bin or raw, as it gives a better parsing
1191 of the content of the event.
1192
1193 trace_printk
1194 Can disable trace_printk() from writing into the buffer.
1195
1196 trace_printk_dest
1197 Set to have trace_printk() and similar internal tracing functions
1198 write into this instance. Note, only one trace instance can have
1199 this set. By setting this flag, it clears the trace_printk_dest flag
1200 of the instance that had it set previously. By default, the top
1201 level trace has this set, and will get it set again if another
1202 instance has it set then clears it.
1203
1204 This flag cannot be cleared by the top level instance, as it is the
1205 default instance. The only way the top level instance has this flag
1206 cleared, is by it being set in another instance.
1207
1208 annotate
        It is sometimes confusing when the CPU buffers are full
        and one CPU buffer had a lot of events recently, thus
        covering a shorter time frame, where another CPU may have
        only had a few events, which lets it keep older events.
        When the trace is reported, it shows the oldest events
        first, and it may look like only one CPU ran (the one with
        the oldest events). When the annotate option is set, it
        will display when a new CPU buffer started::
1217
1218 <idle>-0 [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
1219 <idle>-0 [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
1220 <idle>-0 [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
1221 ##### CPU 2 buffer started ####
1222 <idle>-0 [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
1223 <idle>-0 [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
1224 <idle>-0 [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
1225
1226 userstacktrace
1227 This option changes the trace. It records a
1228 stacktrace of the current user space thread after
1229 each trace event.
1230
1231 sym-userobj
        When user stacktraces are enabled, look up which
        object the address belongs to, and print a
        relative address. This is especially useful when
        ASLR is on, as otherwise you don't get a chance to
        resolve the address to object/file/line after
        the app is no longer running.

        The lookup is performed when you read
        trace or trace_pipe. Example::
1241
1242 a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1243 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1244
1245
1246 printk-msg-only
1247 When set, trace_printk()s will only show the format
1248 and not their parameters (if trace_bprintk() or
1249 trace_bputs() was used to save the trace_printk()).
1250
1251 context-info
1252 Show only the event data. Hides the comm, PID,
1253 timestamp, CPU, and other useful data.
1254
1255 latency-format
1256 This option changes the trace output. When it is enabled,
1257 the trace displays additional information about the
1258 latency, as described in "Latency trace format".
1259
1260 pause-on-trace
1261 When set, opening the trace file for read, will pause
1262 writing to the ring buffer (as if tracing_on was set to zero).
1263 This simulates the original behavior of the trace file.
1264 When the file is closed, tracing will be enabled again.
1265
1266 hash-ptr
        When set, "%p" in the event printk format displays the
        hashed pointer value instead of the real address.
        This is useful if you want to find out which hashed
        value corresponds to a real value in the trace log.
1271
1272 record-cmd
1273 When any event or tracer is enabled, a hook is enabled
1274 in the sched_switch trace point to fill comm cache
1275 with mapped pids and comms. But this may cause some
1276 overhead, and if you only care about pids, and not the
1277 name of the task, disabling this option can lower the
1278 impact of tracing. See "saved_cmdlines".
1279
1280 record-tgid
1281 When any event or tracer is enabled, a hook is enabled
1282 in the sched_switch trace point to fill the cache of
1283 mapped Thread Group IDs (TGID) mapping to pids. See
1284 "saved_tgids".
1285
1286 overwrite
1287 This controls what happens when the trace buffer is
1288 full. If "1" (default), the oldest events are
1289 discarded and overwritten. If "0", then the newest
1290 events are discarded.
1291 (see per_cpu/cpu0/stats for overrun and dropped)
1292
1293 disable_on_free
1294 When the free_buffer is closed, tracing will
1295 stop (tracing_on set to 0).
1296
1297 irq-info
1298 Shows the interrupt, preempt count, need resched data.
1299 When disabled, the trace looks like::
1300
1301 # tracer: function
1302 #
1303 # entries-in-buffer/entries-written: 144405/9452052 #P:4
1304 #
1305 # TASK-PID CPU# TIMESTAMP FUNCTION
1306 # | | | | |
1307 <idle>-0 [002] 23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
1308 <idle>-0 [002] 23636.756054: activate_task <-ttwu_do_activate.constprop.89
1309 <idle>-0 [002] 23636.756055: enqueue_task <-activate_task
1310
1311
1312 markers
1313 When set, the trace_marker is writable (only by root).
1314 When disabled, the trace_marker will error with EINVAL
1315 on write.
1316
1317 event-fork
1318 When set, tasks with PIDs listed in set_event_pid will have
1319 the PIDs of their children added to set_event_pid when those
1320 tasks fork. Also, when tasks with PIDs in set_event_pid exit,
1321 their PIDs will be removed from the file.
1322
1323 This affects PIDs listed in set_event_notrace_pid as well.
1324
1325 function-trace
1326 The latency tracers will enable function tracing
1327 if this option is enabled (default it is). When
1328 it is disabled, the latency tracers do not trace
1329 functions. This keeps the overhead of the tracer down
1330 when performing latency tests.
1331
1332 function-fork
1333 When set, tasks with PIDs listed in set_ftrace_pid will
1334 have the PIDs of their children added to set_ftrace_pid
1335 when those tasks fork. Also, when tasks with PIDs in
1336 set_ftrace_pid exit, their PIDs will be removed from the
1337 file.
1338
1339 This affects PIDs in set_ftrace_notrace_pid as well.
1340
1341 display-graph
1342 When set, the latency tracers (irqsoff, wakeup, etc) will
1343 use function graph tracing instead of function tracing.
1344
1345 stacktrace
1346 When set, a stack trace is recorded after any trace event
1347 is recorded.
1348
1349 branch
1350 Enable branch tracing with the tracer. This enables branch
1351 tracer along with the currently set tracer. Enabling this
1352 with the "nop" tracer is the same as just enabling the
1353 "branch" tracer.
1354
1355.. tip:: Some tracers have their own options. They only appear in this
1356 file when the tracer is active. They always appear in the
1357 options directory.
1358
1359
1360Here are the per tracer options:
1361
1362Options for function tracer:
1363
1364 func_stack_trace
1365 When set, a stack trace is recorded after every
1366 function that is recorded. NOTE! Limit the functions
1367 that are recorded before enabling this, with
1368 "set_ftrace_filter" otherwise the system performance
1369 will be critically degraded. Remember to disable
1370 this option before clearing the function filter.
1371
1372Options for function_graph tracer:
1373
1374 Since the function_graph tracer has a slightly different output
1375 it has its own options to control what is displayed.
1376
1377 funcgraph-overrun
        When set, the "overrun" of the graph stack is
        displayed after each function traced. The
        overrun is when the stack depth of the calls
        is greater than what is reserved for each task.
1382 Each task has a fixed array of functions to
1383 trace in the call graph. If the depth of the
1384 calls exceeds that, the function is not traced.
1385 The overrun is the number of functions missed
1386 due to exceeding this array.
1387
1388 funcgraph-cpu
1389 When set, the CPU number of the CPU where the trace
1390 occurred is displayed.
1391
1392 funcgraph-overhead
        When set, if the function takes longer than
        a certain amount, then a delay marker is
        displayed. See "delay" above, under the
        header description.
1397
1398 funcgraph-proc
        Unlike other tracers, the process' command line
        is not displayed by default, but only when a task
        is traced in and out during a context switch.
        Enabling this option displays the command of each
        process at every line.
1404
1405 funcgraph-duration
1406 At the end of each function (the return)
1407 the duration of the amount of time in the
1408 function is displayed in microseconds.
1409
1410 funcgraph-abstime
1411 When set, the timestamp is displayed at each line.
1412
1413 funcgraph-irqs
1414 When disabled, functions that happen inside an
1415 interrupt will not be traced.
1416
1417 funcgraph-tail
1418 When set, the return event will include the function
1419 that it represents. By default this is off, and
1420 only a closing curly bracket "}" is displayed for
1421 the return of a function.
1422
1423 funcgraph-retval
1424 When set, the return value of each traced function
1425 will be printed after an equal sign "=". By default
1426 this is off.
1427
1428 funcgraph-retval-hex
1429 When set, the return value will always be printed
1430 in hexadecimal format. If the option is not set and
1431 the return value is an error code, it will be printed
1432 in signed decimal format; otherwise it will also be
1433 printed in hexadecimal format. By default, this option
1434 is off.
1435
1436 sleep-time
        When running the function graph tracer, include the
        time a task is scheduled out in its function's duration.
        When enabled, the time the task has been scheduled out
        is accounted as part of the function call.
1441
1442 graph-time
        When running the function profiler with the function graph
        tracer, include the time spent in calls to nested functions.
        When this is not set, the time reported for a function will
        only include the time the function itself executed, not the
        time spent in the functions that it called.
1448
1449Options for blk tracer:
1450
1451 blk_classic
1452 Shows a more minimalistic output.
1453
1454
irqsoff
-------
1457
When interrupts are disabled, the CPU can not react to any other
external event (besides NMIs and SMIs). This prevents the timer
interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is added latency
in reaction time.
1463
1464The irqsoff tracer tracks the time for which interrupts are
1465disabled. When a new maximum latency is hit, the tracer saves
1466the trace leading up to that latency point so that every time a
1467new maximum is reached, the old saved trace is discarded and the
1468new trace is saved.
1469
1470To reset the maximum, echo 0 into tracing_max_latency. Here is
1471an example::
1472
1473 # echo 0 > options/function-trace
1474 # echo irqsoff > current_tracer
1475 # echo 1 > tracing_on
1476 # echo 0 > tracing_max_latency
1477 # ls -ltr
1478 [...]
1479 # echo 0 > tracing_on
1480 # cat trace
1481 # tracer: irqsoff
1482 #
1483 # irqsoff latency trace v1.1.5 on 3.8.0-test+
1484 # --------------------------------------------------------------------
1485 # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1486 # -----------------
1487 # | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
1488 # -----------------
1489 # => started at: run_timer_softirq
1490 # => ended at: run_timer_softirq
1491 #
1492 #
1493 # _------=> CPU#
1494 # / _-----=> irqs-off
1495 # | / _----=> need-resched
1496 # || / _---=> hardirq/softirq
1497 # ||| / _--=> preempt-depth
1498 # |||| / delay
1499 # cmd pid ||||| time | caller
1500 # \ / ||||| \ | /
1501 <idle>-0 0d.s2 0us+: _raw_spin_lock_irq <-run_timer_softirq
1502 <idle>-0 0dNs3 17us : _raw_spin_unlock_irq <-run_timer_softirq
1503 <idle>-0 0dNs3 17us+: trace_hardirqs_on <-run_timer_softirq
1504 <idle>-0 0dNs3 25us : <stack trace>
1505 => _raw_spin_unlock_irq
1506 => run_timer_softirq
1507 => __do_softirq
1508 => call_softirq
1509 => do_softirq
1510 => irq_exit
1511 => smp_apic_timer_interrupt
1512 => apic_timer_interrupt
1513 => rcu_idle_exit
1514 => cpu_idle
1515 => rest_init
1516 => start_kernel
1517 => x86_64_start_reservations
1518 => x86_64_start_kernel
1519
1520Here we see that we had a latency of 16 microseconds (which is
1521very good). The _raw_spin_lock_irq in run_timer_softirq disabled
1522interrupts. The difference between the 16 and the displayed
1523timestamp 25us occurred because the clock was incremented
1524between the time of recording the max latency and the time of
1525recording the function that had that latency.
1526
1527Note the above example had function-trace not set. If we set
1528function-trace, we get a much larger output::
1529
1530 with echo 1 > options/function-trace
1531
1532 # tracer: irqsoff
1533 #
1534 # irqsoff latency trace v1.1.5 on 3.8.0-test+
1535 # --------------------------------------------------------------------
1536 # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1537 # -----------------
1538 # | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
1539 # -----------------
1540 # => started at: ata_scsi_queuecmd
1541 # => ended at: ata_scsi_queuecmd
1542 #
1543 #
1544 # _------=> CPU#
1545 # / _-----=> irqs-off
1546 # | / _----=> need-resched
1547 # || / _---=> hardirq/softirq
1548 # ||| / _--=> preempt-depth
1549 # |||| / delay
1550 # cmd pid ||||| time | caller
1551 # \ / ||||| \ | /
1552 bash-2042 3d... 0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1553 bash-2042 3d... 0us : add_preempt_count <-_raw_spin_lock_irqsave
1554 bash-2042 3d..1 1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1555 bash-2042 3d..1 1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1556 bash-2042 3d..1 2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1557 bash-2042 3d..1 2us : ata_qc_new_init <-__ata_scsi_queuecmd
1558 bash-2042 3d..1 3us : ata_sg_init <-__ata_scsi_queuecmd
1559 bash-2042 3d..1 4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1560 bash-2042 3d..1 4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1561 [...]
1562 bash-2042 3d..1 67us : delay_tsc <-__delay
1563 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1564 bash-2042 3d..2 67us : sub_preempt_count <-delay_tsc
1565 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1566 bash-2042 3d..2 68us : sub_preempt_count <-delay_tsc
1567 bash-2042 3d..1 68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1568 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1569 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1570 bash-2042 3d..1 72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1571 bash-2042 3d..1 120us : <stack trace>
1572 => _raw_spin_unlock_irqrestore
1573 => ata_scsi_queuecmd
1574 => scsi_dispatch_cmd
1575 => scsi_request_fn
1576 => __blk_run_queue_uncond
1577 => __blk_run_queue
1578 => blk_queue_bio
1579 => submit_bio_noacct
1580 => submit_bio
1581 => submit_bh
1582 => __ext3_get_inode_loc
1583 => ext3_iget
1584 => ext3_lookup
1585 => lookup_real
1586 => __lookup_hash
1587 => walk_component
1588 => lookup_last
1589 => path_lookupat
1590 => filename_lookup
1591 => user_path_at_empty
1592 => user_path_at
1593 => vfs_fstatat
1594 => vfs_stat
1595 => sys_newstat
1596 => system_call_fastpath
1597
1598
1599Here we traced a 71 microsecond latency. But we also see all the
1600functions that were called during that time. Note that by
1601enabling function tracing, we incur an added overhead. This
1602overhead may extend the latency times. But nevertheless, this
1603trace has provided some very helpful debugging information.
1604
If we prefer function graph output instead of function output, we
can set the display-graph option::
1607
1608 with echo 1 > options/display-graph
1609
1610 # tracer: irqsoff
1611 #
1612 # irqsoff latency trace v1.1.5 on 4.20.0-rc6+
1613 # --------------------------------------------------------------------
1614 # latency: 3751 us, #274/274, CPU#0 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
1615 # -----------------
1616 # | task: bash-1507 (uid:0 nice:0 policy:0 rt_prio:0)
1617 # -----------------
1618 # => started at: free_debug_processing
1619 # => ended at: return_to_handler
1620 #
1621 #
1622 # _-----=> irqs-off
1623 # / _----=> need-resched
1624 # | / _---=> hardirq/softirq
1625 # || / _--=> preempt-depth
1626 # ||| /
1627 # REL TIME CPU TASK/PID |||| DURATION FUNCTION CALLS
1628 # | | | | |||| | | | | | |
1629 0 us | 0) bash-1507 | d... | 0.000 us | _raw_spin_lock_irqsave();
1630 0 us | 0) bash-1507 | d..1 | 0.378 us | do_raw_spin_trylock();
1631 1 us | 0) bash-1507 | d..2 | | set_track() {
1632 2 us | 0) bash-1507 | d..2 | | save_stack_trace() {
1633 2 us | 0) bash-1507 | d..2 | | __save_stack_trace() {
1634 3 us | 0) bash-1507 | d..2 | | __unwind_start() {
1635 3 us | 0) bash-1507 | d..2 | | get_stack_info() {
1636 3 us | 0) bash-1507 | d..2 | 0.351 us | in_task_stack();
1637 4 us | 0) bash-1507 | d..2 | 1.107 us | }
1638 [...]
1639 3750 us | 0) bash-1507 | d..1 | 0.516 us | do_raw_spin_unlock();
1640 3750 us | 0) bash-1507 | d..1 | 0.000 us | _raw_spin_unlock_irqrestore();
1641 3764 us | 0) bash-1507 | d..1 | 0.000 us | tracer_hardirqs_on();
1642 bash-1507 0d..1 3792us : <stack trace>
1643 => free_debug_processing
1644 => __slab_free
1645 => kmem_cache_free
1646 => vm_area_free
1647 => remove_vma
1648 => exit_mmap
1649 => mmput
1650 => begin_new_exec
1651 => load_elf_binary
1652 => search_binary_handler
1653 => __do_execve_file.isra.32
1654 => __x64_sys_execve
1655 => do_syscall_64
1656 => entry_SYSCALL_64_after_hwframe
1657
1658preemptoff
1659----------
1660
1661When preemption is disabled, we may be able to receive
1662interrupts but the task cannot be preempted and a higher
1663priority task must wait for preemption to be enabled again
1664before it can preempt a lower priority task.
1665
1666The preemptoff tracer traces the places that disable preemption.
1667Like the irqsoff tracer, it records the maximum latency for
1668which preemption was disabled. The control of preemptoff tracer
1669is much like the irqsoff tracer.
1670::
1671
1672 # echo 0 > options/function-trace
1673 # echo preemptoff > current_tracer
1674 # echo 1 > tracing_on
1675 # echo 0 > tracing_max_latency
1676 # ls -ltr
1677 [...]
1678 # echo 0 > tracing_on
1679 # cat trace
1680 # tracer: preemptoff
1681 #
1682 # preemptoff latency trace v1.1.5 on 3.8.0-test+
1683 # --------------------------------------------------------------------
1684 # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1685 # -----------------
1686 # | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1687 # -----------------
1688 # => started at: do_IRQ
1689 # => ended at: do_IRQ
1690 #
1691 #
1692 # _------=> CPU#
1693 # / _-----=> irqs-off
1694 # | / _----=> need-resched
1695 # || / _---=> hardirq/softirq
1696 # ||| / _--=> preempt-depth
1697 # |||| / delay
1698 # cmd pid ||||| time | caller
1699 # \ / ||||| \ | /
1700 sshd-1991 1d.h. 0us+: irq_enter <-do_IRQ
1701 sshd-1991 1d..1 46us : irq_exit <-do_IRQ
1702 sshd-1991 1d..1 47us+: trace_preempt_on <-do_IRQ
1703 sshd-1991 1d..1 52us : <stack trace>
1704 => sub_preempt_count
1705 => irq_exit
1706 => do_IRQ
1707 => ret_from_intr
1708
1709
This has some more changes. Preemption was disabled when an
interrupt came in (notice the 'h'), and was enabled on exit.
But we also see that interrupts have been disabled when entering
the preempt off section and leaving it (the 'd'). We do not know if
interrupts were enabled in the meantime or shortly after this
was over.
1716::
1717
1718 # tracer: preemptoff
1719 #
1720 # preemptoff latency trace v1.1.5 on 3.8.0-test+
1721 # --------------------------------------------------------------------
1722 # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1723 # -----------------
1724 # | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1725 # -----------------
1726 # => started at: wake_up_new_task
1727 # => ended at: task_rq_unlock
1728 #
1729 #
1730 # _------=> CPU#
1731 # / _-----=> irqs-off
1732 # | / _----=> need-resched
1733 # || / _---=> hardirq/softirq
1734 # ||| / _--=> preempt-depth
1735 # |||| / delay
1736 # cmd pid ||||| time | caller
1737 # \ / ||||| \ | /
1738 bash-1994 1d..1 0us : _raw_spin_lock_irqsave <-wake_up_new_task
1739 bash-1994 1d..1 0us : select_task_rq_fair <-select_task_rq
1740 bash-1994 1d..1 1us : __rcu_read_lock <-select_task_rq_fair
1741 bash-1994 1d..1 1us : source_load <-select_task_rq_fair
1742 bash-1994 1d..1 1us : source_load <-select_task_rq_fair
1743 [...]
1744 bash-1994 1d..1 12us : irq_enter <-smp_apic_timer_interrupt
1745 bash-1994 1d..1 12us : rcu_irq_enter <-irq_enter
1746 bash-1994 1d..1 13us : add_preempt_count <-irq_enter
1747 bash-1994 1d.h1 13us : exit_idle <-smp_apic_timer_interrupt
1748 bash-1994 1d.h1 13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1749 bash-1994 1d.h1 13us : _raw_spin_lock <-hrtimer_interrupt
1750 bash-1994 1d.h1 14us : add_preempt_count <-_raw_spin_lock
1751 bash-1994 1d.h2 14us : ktime_get_update_offsets <-hrtimer_interrupt
1752 [...]
1753 bash-1994 1d.h1 35us : lapic_next_event <-clockevents_program_event
1754 bash-1994 1d.h1 35us : irq_exit <-smp_apic_timer_interrupt
1755 bash-1994 1d.h1 36us : sub_preempt_count <-irq_exit
1756 bash-1994 1d..2 36us : do_softirq <-irq_exit
1757 bash-1994 1d..2 36us : __do_softirq <-call_softirq
1758 bash-1994 1d..2 36us : __local_bh_disable <-__do_softirq
1759 bash-1994 1d.s2 37us : add_preempt_count <-_raw_spin_lock_irq
1760 bash-1994 1d.s3 38us : _raw_spin_unlock <-run_timer_softirq
1761 bash-1994 1d.s3 39us : sub_preempt_count <-_raw_spin_unlock
1762 bash-1994 1d.s2 39us : call_timer_fn <-run_timer_softirq
1763 [...]
1764 bash-1994 1dNs2 81us : cpu_needs_another_gp <-rcu_process_callbacks
1765 bash-1994 1dNs2 82us : __local_bh_enable <-__do_softirq
1766 bash-1994 1dNs2 82us : sub_preempt_count <-__local_bh_enable
1767 bash-1994 1dN.2 82us : idle_cpu <-irq_exit
1768 bash-1994 1dN.2 83us : rcu_irq_exit <-irq_exit
1769 bash-1994 1dN.2 83us : sub_preempt_count <-irq_exit
1770 bash-1994 1.N.1 84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1771 bash-1994 1.N.1 84us+: trace_preempt_on <-task_rq_unlock
1772 bash-1994 1.N.1 104us : <stack trace>
1773 => sub_preempt_count
1774 => _raw_spin_unlock_irqrestore
1775 => task_rq_unlock
1776 => wake_up_new_task
1777 => do_fork
1778 => sys_clone
1779 => stub_clone
1780
1781
The above is an example of the preemptoff trace with
function-trace set. Here we see that interrupts were not disabled
the entire time. The irq_enter code lets us know that we entered
an interrupt 'h'. Before that, the annotations on the functions
being traced still show that we are not in an interrupt, but we
can see from the functions themselves that this is not the case.
1788
1789preemptirqsoff
1790--------------
1791
1792Knowing the locations that have interrupts disabled or
1793preemption disabled for the longest times is helpful. But
1794sometimes we would like to know when either preemption and/or
1795interrupts are disabled.
1796
1797Consider the following code::
1798
1799 local_irq_disable();
1800 call_function_with_irqs_off();
1801 preempt_disable();
1802 call_function_with_irqs_and_preemption_off();
1803 local_irq_enable();
1804 call_function_with_preemption_off();
1805 preempt_enable();
1806
1807The irqsoff tracer will record the total length of
1808call_function_with_irqs_off() and
1809call_function_with_irqs_and_preemption_off().
1810
1811The preemptoff tracer will record the total length of
1812call_function_with_irqs_and_preemption_off() and
1813call_function_with_preemption_off().
1814
1815But neither will trace the time that interrupts and/or
1816preemption is disabled. This total time is the time that we can
1817not schedule. To record this time, use the preemptirqsoff
1818tracer.
1819
1820Again, using this trace is much like the irqsoff and preemptoff
1821tracers.
1822::
1823
1824 # echo 0 > options/function-trace
1825 # echo preemptirqsoff > current_tracer
1826 # echo 1 > tracing_on
1827 # echo 0 > tracing_max_latency
1828 # ls -ltr
1829 [...]
1830 # echo 0 > tracing_on
1831 # cat trace
1832 # tracer: preemptirqsoff
1833 #
1834 # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1835 # --------------------------------------------------------------------
1836 # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1837 # -----------------
1838 # | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1839 # -----------------
1840 # => started at: ata_scsi_queuecmd
1841 # => ended at: ata_scsi_queuecmd
1842 #
1843 #
1844 # _------=> CPU#
1845 # / _-----=> irqs-off
1846 # | / _----=> need-resched
1847 # || / _---=> hardirq/softirq
1848 # ||| / _--=> preempt-depth
1849 # |||| / delay
1850 # cmd pid ||||| time | caller
1851 # \ / ||||| \ | /
1852 ls-2230 3d... 0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1853 ls-2230 3...1 100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1854 ls-2230 3...1 101us+: trace_preempt_on <-ata_scsi_queuecmd
1855 ls-2230 3...1 111us : <stack trace>
1856 => sub_preempt_count
1857 => _raw_spin_unlock_irqrestore
1858 => ata_scsi_queuecmd
1859 => scsi_dispatch_cmd
1860 => scsi_request_fn
1861 => __blk_run_queue_uncond
1862 => __blk_run_queue
1863 => blk_queue_bio
1864 => submit_bio_noacct
1865 => submit_bio
1866 => submit_bh
1867 => ext3_bread
1868 => ext3_dir_bread
1869 => htree_dirblock_to_tree
1870 => ext3_htree_fill_tree
1871 => ext3_readdir
1872 => vfs_readdir
1873 => sys_getdents
1874 => system_call_fastpath
1875
1876
1877The trace_hardirqs_off_thunk is called from assembly on x86 when
1878interrupts are disabled in the assembly code. Without the
1879function tracing, we do not know if interrupts were enabled
1880within the preemption points. We do see that it started with
1881preemption enabled.
1882
1883Here is a trace with function-trace set::
1884
1885 # tracer: preemptirqsoff
1886 #
1887 # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1888 # --------------------------------------------------------------------
1889 # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1890 # -----------------
1891 # | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1892 # -----------------
1893 # => started at: schedule
1894 # => ended at: mutex_unlock
1895 #
1896 #
1897 # _------=> CPU#
1898 # / _-----=> irqs-off
1899 # | / _----=> need-resched
1900 # || / _---=> hardirq/softirq
1901 # ||| / _--=> preempt-depth
1902 # |||| / delay
1903 # cmd pid ||||| time | caller
1904 # \ / ||||| \ | /
1905 kworker/-59 3...1 0us : __schedule <-schedule
1906 kworker/-59 3d..1 0us : rcu_preempt_qs <-rcu_note_context_switch
1907 kworker/-59 3d..1 1us : add_preempt_count <-_raw_spin_lock_irq
1908 kworker/-59 3d..2 1us : deactivate_task <-__schedule
1909 kworker/-59 3d..2 1us : dequeue_task <-deactivate_task
1910 kworker/-59 3d..2 2us : update_rq_clock <-dequeue_task
1911 kworker/-59 3d..2 2us : dequeue_task_fair <-dequeue_task
1912 kworker/-59 3d..2 2us : update_curr <-dequeue_task_fair
1913 kworker/-59 3d..2 2us : update_min_vruntime <-update_curr
1914 kworker/-59 3d..2 3us : cpuacct_charge <-update_curr
1915 kworker/-59 3d..2 3us : __rcu_read_lock <-cpuacct_charge
1916 kworker/-59 3d..2 3us : __rcu_read_unlock <-cpuacct_charge
1917 kworker/-59 3d..2 3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1918 kworker/-59 3d..2 4us : clear_buddies <-dequeue_task_fair
1919 kworker/-59 3d..2 4us : account_entity_dequeue <-dequeue_task_fair
1920 kworker/-59 3d..2 4us : update_min_vruntime <-dequeue_task_fair
1921 kworker/-59 3d..2 4us : update_cfs_shares <-dequeue_task_fair
1922 kworker/-59 3d..2 5us : hrtick_update <-dequeue_task_fair
1923 kworker/-59 3d..2 5us : wq_worker_sleeping <-__schedule
1924 kworker/-59 3d..2 5us : kthread_data <-wq_worker_sleeping
1925 kworker/-59 3d..2 5us : put_prev_task_fair <-__schedule
1926 kworker/-59 3d..2 6us : pick_next_task_fair <-pick_next_task
1927 kworker/-59 3d..2 6us : clear_buddies <-pick_next_task_fair
1928 kworker/-59 3d..2 6us : set_next_entity <-pick_next_task_fair
1929 kworker/-59 3d..2 6us : update_stats_wait_end <-set_next_entity
1930 ls-2269 3d..2 7us : finish_task_switch <-__schedule
1931 ls-2269 3d..2 7us : _raw_spin_unlock_irq <-finish_task_switch
1932 ls-2269 3d..2 8us : do_IRQ <-ret_from_intr
1933 ls-2269 3d..2 8us : irq_enter <-do_IRQ
1934 ls-2269 3d..2 8us : rcu_irq_enter <-irq_enter
1935 ls-2269 3d..2 9us : add_preempt_count <-irq_enter
1936 ls-2269 3d.h2 9us : exit_idle <-do_IRQ
1937 [...]
1938 ls-2269 3d.h3 20us : sub_preempt_count <-_raw_spin_unlock
1939 ls-2269 3d.h2 20us : irq_exit <-do_IRQ
1940 ls-2269 3d.h2 21us : sub_preempt_count <-irq_exit
1941 ls-2269 3d..3 21us : do_softirq <-irq_exit
1942 ls-2269 3d..3 21us : __do_softirq <-call_softirq
1943 ls-2269 3d..3 21us+: __local_bh_disable <-__do_softirq
1944 ls-2269 3d.s4 29us : sub_preempt_count <-_local_bh_enable_ip
1945 ls-2269 3d.s5 29us : sub_preempt_count <-_local_bh_enable_ip
1946 ls-2269 3d.s5 31us : do_IRQ <-ret_from_intr
1947 ls-2269 3d.s5 31us : irq_enter <-do_IRQ
1948 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1949 [...]
1950 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1951 ls-2269 3d.s5 32us : add_preempt_count <-irq_enter
1952 ls-2269 3d.H5 32us : exit_idle <-do_IRQ
1953 ls-2269 3d.H5 32us : handle_irq <-do_IRQ
1954 ls-2269 3d.H5 32us : irq_to_desc <-handle_irq
1955 ls-2269 3d.H5 33us : handle_fasteoi_irq <-handle_irq
1956 [...]
1957 ls-2269 3d.s5 158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1958 ls-2269 3d.s3 158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1959 ls-2269 3d.s3 159us : __local_bh_enable <-__do_softirq
1960 ls-2269 3d.s3 159us : sub_preempt_count <-__local_bh_enable
1961 ls-2269 3d..3 159us : idle_cpu <-irq_exit
1962 ls-2269 3d..3 159us : rcu_irq_exit <-irq_exit
1963 ls-2269 3d..3 160us : sub_preempt_count <-irq_exit
1964 ls-2269 3d... 161us : __mutex_unlock_slowpath <-mutex_unlock
1965 ls-2269 3d... 162us+: trace_hardirqs_on <-mutex_unlock
1966 ls-2269 3d... 186us : <stack trace>
1967 => __mutex_unlock_slowpath
1968 => mutex_unlock
1969 => process_output
1970 => n_tty_write
1971 => tty_write
1972 => vfs_write
1973 => sys_write
1974 => system_call_fastpath
1975
1976This is an interesting trace. It started with kworker running and
1977scheduling out and ls taking over. But as soon as ls released the
1978rq lock and enabled interrupts (but not preemption) an interrupt
1979triggered. When the interrupt finished, it started running softirqs.
1980But while the softirq was running, another interrupt triggered.
1981When an interrupt is running inside a softirq, the annotation is 'H'.
1982
1983
1984wakeup
1985------
1986
1987One common case that people are interested in tracing is the
1988time it takes for a task that is woken to actually wake up.
1989Now for non Real-Time tasks, this can be arbitrary. But tracing
1990it nonetheless can be interesting.
1991
1992Without function tracing::
1993
1994 # echo 0 > options/function-trace
1995 # echo wakeup > current_tracer
1996 # echo 1 > tracing_on
1997 # echo 0 > tracing_max_latency
1998 # chrt -f 5 sleep 1
1999 # echo 0 > tracing_on
2000 # cat trace
2001 # tracer: wakeup
2002 #
2003 # wakeup latency trace v1.1.5 on 3.8.0-test+
2004 # --------------------------------------------------------------------
2005 # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2006 # -----------------
2007 # | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
2008 # -----------------
2009 #
2010 # _------=> CPU#
2011 # / _-----=> irqs-off
2012 # | / _----=> need-resched
2013 # || / _---=> hardirq/softirq
2014 # ||| / _--=> preempt-depth
2015 # |||| / delay
2016 # cmd pid ||||| time | caller
2017 # \ / ||||| \ | /
2018 <idle>-0 3dNs7 0us : 0:120:R + [003] 312:100:R kworker/3:1H
2019 <idle>-0 3dNs7 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2020 <idle>-0 3d..3 15us : __schedule <-schedule
2021 <idle>-0 3d..3 15us : 0:120:R ==> [003] 312:100:R kworker/3:1H
2022
2023The tracer only traces the highest priority task in the system
2024to avoid tracing the normal circumstances. Here we see that
2025the kworker with a nice priority of -20 (not very nice), took
2026just 15 microseconds from the time it woke up, to the time it
2027ran.
2028
2029Non Real-Time tasks are not that interesting. A more interesting
2030trace is to concentrate only on Real-Time tasks.
2031
2032wakeup_rt
2033---------
2034
In a Real-Time environment it is very important to know the
time it takes from when the highest priority task is woken up
to when it actually executes. This is also known as "schedule
latency". I stress the point that this is about RT tasks. It is
also important to know the scheduling latency of non-RT tasks,
but for non-RT tasks the average schedule latency is usually the
more useful figure. Tools like LatencyTop are more appropriate
for such measurements.
2043
2044Real-Time environments are interested in the worst case latency.
2045That is the longest latency it takes for something to happen,
2046and not the average. We can have a very fast scheduler that may
2047only have a large latency once in a while, but that would not
2048work well with Real-Time tasks. The wakeup_rt tracer was designed
2049to record the worst case wakeups of RT tasks. Non-RT tasks are
2050not recorded because the tracer only records one worst case and
2051tracing non-RT tasks that are unpredictable will overwrite the
2052worst case latency of RT tasks (just run the normal wakeup
2053tracer for a while to see that effect).
2054
2055Since this tracer only deals with RT tasks, we will run this
2056slightly differently than we did with the previous tracers.
2057Instead of performing an 'ls', we will run 'sleep 1' under
2058'chrt' which changes the priority of the task.
2059::
2060
2061 # echo 0 > options/function-trace
2062 # echo wakeup_rt > current_tracer
2063 # echo 1 > tracing_on
2064 # echo 0 > tracing_max_latency
2065 # chrt -f 5 sleep 1
2066 # echo 0 > tracing_on
2067 # cat trace
2070 # tracer: wakeup_rt
2071 #
2072 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2073 # --------------------------------------------------------------------
2074 # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2075 # -----------------
2076 # | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
2077 # -----------------
2078 #
2079 # _------=> CPU#
2080 # / _-----=> irqs-off
2081 # | / _----=> need-resched
2082 # || / _---=> hardirq/softirq
2083 # ||| / _--=> preempt-depth
2084 # |||| / delay
2085 # cmd pid ||||| time | caller
2086 # \ / ||||| \ | /
2087 <idle>-0 3d.h4 0us : 0:120:R + [003] 2389: 94:R sleep
2088 <idle>-0 3d.h4 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2089 <idle>-0 3d..3 5us : __schedule <-schedule
2090 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
2091
2092
2093Running this on an idle system, we see that it only took 5 microseconds
2094to perform the task switch. Note, since the trace point in the schedule
2095is before the actual "switch", we stop the tracing when the recorded task
2096is about to schedule in. This may change if we add a new marker at the
2097end of the scheduler.
2098
2099Notice that the recorded task is 'sleep' with the PID of 2389
2100and it has an rt_prio of 5. This priority is user-space priority
2101and not the internal kernel priority. The policy is 1 for
2102SCHED_FIFO and 2 for SCHED_RR.
2103
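For example, chrt selects SCHED_FIFO with -f (policy 1) and SCHED_RR
with -r (policy 2), here both with an rt_prio of 5::

 # chrt -f 5 sleep 1
 # chrt -r 5 sleep 1
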
Note that the trace data shows the internal priority (99 - rtprio).
2105::
2106
2107 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
2108
The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
and in the running state 'R'. The sleep task was scheduled in with
2389: 94:R. That is, its priority is the kernel rtprio (99 - 5 = 94)
and it too is in the running state.
2113
2114Doing the same with chrt -r 5 and function-trace set.
2115::
2116
2117 echo 1 > options/function-trace
2118
2119 # tracer: wakeup_rt
2120 #
2121 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2122 # --------------------------------------------------------------------
2123 # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2124 # -----------------
2125 # | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
2126 # -----------------
2127 #
2128 # _------=> CPU#
2129 # / _-----=> irqs-off
2130 # | / _----=> need-resched
2131 # || / _---=> hardirq/softirq
2132 # ||| / _--=> preempt-depth
2133 # |||| / delay
2134 # cmd pid ||||| time | caller
2135 # \ / ||||| \ | /
2136 <idle>-0 3d.h4 1us+: 0:120:R + [003] 2448: 94:R sleep
2137 <idle>-0 3d.h4 2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2138 <idle>-0 3d.h3 3us : check_preempt_curr <-ttwu_do_wakeup
2139 <idle>-0 3d.h3 3us : resched_curr <-check_preempt_curr
2140 <idle>-0 3dNh3 4us : task_woken_rt <-ttwu_do_wakeup
2141 <idle>-0 3dNh3 4us : _raw_spin_unlock <-try_to_wake_up
2142 <idle>-0 3dNh3 4us : sub_preempt_count <-_raw_spin_unlock
2143 <idle>-0 3dNh2 5us : ttwu_stat <-try_to_wake_up
2144 <idle>-0 3dNh2 5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
2145 <idle>-0 3dNh2 6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2146 <idle>-0 3dNh1 6us : _raw_spin_lock <-__run_hrtimer
2147 <idle>-0 3dNh1 6us : add_preempt_count <-_raw_spin_lock
2148 <idle>-0 3dNh2 7us : _raw_spin_unlock <-hrtimer_interrupt
2149 <idle>-0 3dNh2 7us : sub_preempt_count <-_raw_spin_unlock
2150 <idle>-0 3dNh1 7us : tick_program_event <-hrtimer_interrupt
2151 <idle>-0 3dNh1 7us : clockevents_program_event <-tick_program_event
2152 <idle>-0 3dNh1 8us : ktime_get <-clockevents_program_event
2153 <idle>-0 3dNh1 8us : lapic_next_event <-clockevents_program_event
2154 <idle>-0 3dNh1 8us : irq_exit <-smp_apic_timer_interrupt
2155 <idle>-0 3dNh1 9us : sub_preempt_count <-irq_exit
2156 <idle>-0 3dN.2 9us : idle_cpu <-irq_exit
2157 <idle>-0 3dN.2 9us : rcu_irq_exit <-irq_exit
2158 <idle>-0 3dN.2 10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
2159 <idle>-0 3dN.2 10us : sub_preempt_count <-irq_exit
2160 <idle>-0 3.N.1 11us : rcu_idle_exit <-cpu_idle
2161 <idle>-0 3dN.1 11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
2162 <idle>-0 3.N.1 11us : tick_nohz_idle_exit <-cpu_idle
2163 <idle>-0 3dN.1 12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
2164 <idle>-0 3dN.1 12us : ktime_get <-tick_nohz_idle_exit
2165 <idle>-0 3dN.1 12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
2166 <idle>-0 3dN.1 13us : cpu_load_update_nohz <-tick_nohz_idle_exit
2167 <idle>-0 3dN.1 13us : _raw_spin_lock <-cpu_load_update_nohz
2168 <idle>-0 3dN.1 13us : add_preempt_count <-_raw_spin_lock
2169 <idle>-0 3dN.2 13us : __cpu_load_update <-cpu_load_update_nohz
2170 <idle>-0 3dN.2 14us : sched_avg_update <-__cpu_load_update
2171 <idle>-0 3dN.2 14us : _raw_spin_unlock <-cpu_load_update_nohz
2172 <idle>-0 3dN.2 14us : sub_preempt_count <-_raw_spin_unlock
2173 <idle>-0 3dN.1 15us : calc_load_nohz_stop <-tick_nohz_idle_exit
2174 <idle>-0 3dN.1 15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
2175 <idle>-0 3dN.1 15us : hrtimer_cancel <-tick_nohz_idle_exit
2176 <idle>-0 3dN.1 15us : hrtimer_try_to_cancel <-hrtimer_cancel
2177 <idle>-0 3dN.1 16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
2178 <idle>-0 3dN.1 16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2179 <idle>-0 3dN.1 16us : add_preempt_count <-_raw_spin_lock_irqsave
2180 <idle>-0 3dN.2 17us : __remove_hrtimer <-remove_hrtimer.part.16
2181 <idle>-0 3dN.2 17us : hrtimer_force_reprogram <-__remove_hrtimer
2182 <idle>-0 3dN.2 17us : tick_program_event <-hrtimer_force_reprogram
2183 <idle>-0 3dN.2 18us : clockevents_program_event <-tick_program_event
2184 <idle>-0 3dN.2 18us : ktime_get <-clockevents_program_event
2185 <idle>-0 3dN.2 18us : lapic_next_event <-clockevents_program_event
2186 <idle>-0 3dN.2 19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
2187 <idle>-0 3dN.2 19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2188 <idle>-0 3dN.1 19us : hrtimer_forward <-tick_nohz_idle_exit
2189 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
2190 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
2191 <idle>-0 3dN.1 20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2192 <idle>-0 3dN.1 20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
2193 <idle>-0 3dN.1 21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
2194 <idle>-0 3dN.1 21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2195 <idle>-0 3dN.1 21us : add_preempt_count <-_raw_spin_lock_irqsave
2196 <idle>-0 3dN.2 22us : ktime_add_safe <-__hrtimer_start_range_ns
2197 <idle>-0 3dN.2 22us : enqueue_hrtimer <-__hrtimer_start_range_ns
2198 <idle>-0 3dN.2 22us : tick_program_event <-__hrtimer_start_range_ns
2199 <idle>-0 3dN.2 23us : clockevents_program_event <-tick_program_event
2200 <idle>-0 3dN.2 23us : ktime_get <-clockevents_program_event
2201 <idle>-0 3dN.2 23us : lapic_next_event <-clockevents_program_event
2202 <idle>-0 3dN.2 24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2203 <idle>-0 3dN.2 24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2204 <idle>-0 3dN.1 24us : account_idle_ticks <-tick_nohz_idle_exit
2205 <idle>-0 3dN.1 24us : account_idle_time <-account_idle_ticks
2206 <idle>-0 3.N.1 25us : sub_preempt_count <-cpu_idle
2207 <idle>-0 3.N.. 25us : schedule <-cpu_idle
2208 <idle>-0 3.N.. 25us : __schedule <-preempt_schedule
2209 <idle>-0 3.N.. 26us : add_preempt_count <-__schedule
2210 <idle>-0 3.N.1 26us : rcu_note_context_switch <-__schedule
2211 <idle>-0 3.N.1 26us : rcu_sched_qs <-rcu_note_context_switch
2212 <idle>-0 3dN.1 27us : rcu_preempt_qs <-rcu_note_context_switch
2213 <idle>-0 3.N.1 27us : _raw_spin_lock_irq <-__schedule
2214 <idle>-0 3dN.1 27us : add_preempt_count <-_raw_spin_lock_irq
2215 <idle>-0 3dN.2 28us : put_prev_task_idle <-__schedule
2216 <idle>-0 3dN.2 28us : pick_next_task_stop <-pick_next_task
2217 <idle>-0 3dN.2 28us : pick_next_task_rt <-pick_next_task
2218 <idle>-0 3dN.2 29us : dequeue_pushable_task <-pick_next_task_rt
2219 <idle>-0 3d..3 29us : __schedule <-preempt_schedule
2220 <idle>-0 3d..3 30us : 0:120:R ==> [003] 2448: 94:R sleep
2221
This isn't that big of a trace, even with function tracing enabled,
so I included the entire trace.

The interrupt went off while the system was idle. Somewhere
before task_woken_rt() was called, the NEED_RESCHED flag was set;
this is indicated by the first occurrence of the 'N' flag.
2228
2229Latency tracing and events
2230--------------------------

Function tracing can induce a much larger latency, but without
seeing what happens within the latency it is hard to know what
caused it. There is a middle ground, and that is to enable
events.
2235::
2236
2237 # echo 0 > options/function-trace
2238 # echo wakeup_rt > current_tracer
2239 # echo 1 > events/enable
2240 # echo 1 > tracing_on
2241 # echo 0 > tracing_max_latency
2242 # chrt -f 5 sleep 1
2243 # echo 0 > tracing_on
2244 # cat trace
2245 # tracer: wakeup_rt
2246 #
2247 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2248 # --------------------------------------------------------------------
2249 # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2250 # -----------------
2251 # | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
2252 # -----------------
2253 #
2254 # _------=> CPU#
2255 # / _-----=> irqs-off
2256 # | / _----=> need-resched
2257 # || / _---=> hardirq/softirq
2258 # ||| / _--=> preempt-depth
2259 # |||| / delay
2260 # cmd pid ||||| time | caller
2261 # \ / ||||| \ | /
2262 <idle>-0 2d.h4 0us : 0:120:R + [002] 5882: 94:R sleep
2263 <idle>-0 2d.h4 0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2264 <idle>-0 2d.h4 1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
2265 <idle>-0 2dNh2 1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
2266 <idle>-0 2.N.2 2us : power_end: cpu_id=2
2267 <idle>-0 2.N.2 3us : cpu_idle: state=4294967295 cpu_id=2
2268 <idle>-0 2dN.3 4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2269 <idle>-0 2dN.3 4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
2270 <idle>-0 2.N.2 5us : rcu_utilization: Start context switch
2271 <idle>-0 2.N.2 5us : rcu_utilization: End context switch
2272 <idle>-0 2d..3 6us : __schedule <-schedule
2273 <idle>-0 2d..3 6us : 0:120:R ==> [002] 5882: 94:R sleep
2274
2275
2276Hardware Latency Detector
2277-------------------------
2278
2279The hardware latency detector is executed by enabling the "hwlat" tracer.
2280
2281NOTE, this tracer will affect the performance of the system as it will
2282periodically make a CPU constantly busy with interrupts disabled.
2283::
2284
2285 # echo hwlat > current_tracer
2286 # sleep 100
2287 # cat trace
2288 # tracer: hwlat
2289 #
2290 # entries-in-buffer/entries-written: 13/13 #P:8
2291 #
2292 # _-----=> irqs-off
2293 # / _----=> need-resched
2294 # | / _---=> hardirq/softirq
2295 # || / _--=> preempt-depth
2296 # ||| / delay
2297 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2298 # | | | |||| | |
2299 <...>-1729 [001] d... 678.473449: #1 inner/outer(us): 11/12 ts:1581527483.343962693 count:6
2300 <...>-1729 [004] d... 689.556542: #2 inner/outer(us): 16/9 ts:1581527494.889008092 count:1
2301 <...>-1729 [005] d... 714.756290: #3 inner/outer(us): 16/16 ts:1581527519.678961629 count:5
2302 <...>-1729 [001] d... 718.788247: #4 inner/outer(us): 9/17 ts:1581527523.889012713 count:1
2303 <...>-1729 [002] d... 719.796341: #5 inner/outer(us): 13/9 ts:1581527524.912872606 count:1
2304 <...>-1729 [006] d... 844.787091: #6 inner/outer(us): 9/12 ts:1581527649.889048502 count:2
2305 <...>-1729 [003] d... 849.827033: #7 inner/outer(us): 18/9 ts:1581527654.889013793 count:1
2306 <...>-1729 [007] d... 853.859002: #8 inner/outer(us): 9/12 ts:1581527658.889065736 count:1
2307 <...>-1729 [001] d... 855.874978: #9 inner/outer(us): 9/11 ts:1581527660.861991877 count:1
2308 <...>-1729 [001] d... 863.938932: #10 inner/outer(us): 9/11 ts:1581527668.970010500 count:1 nmi-total:7 nmi-count:1
2309 <...>-1729 [007] d... 878.050780: #11 inner/outer(us): 9/12 ts:1581527683.385002600 count:1 nmi-total:5 nmi-count:1
2310 <...>-1729 [007] d... 886.114702: #12 inner/outer(us): 9/12 ts:1581527691.385001600 count:1
2311
2312
The header of the above output is somewhat similar to the other tracers.
All events will have interrupts disabled 'd'. Under the FUNCTION title
there is:
2315
2316 #1
2317 This is the count of events recorded that were greater than the
2318 tracing_threshold (See below).
2319
 inner/outer(us): 11/12

 This shows two numbers as "inner latency" and "outer latency". The test
 runs in a loop checking a timestamp twice. The latency detected within
 the two timestamps is the "inner latency" and the latency detected
 between the previous timestamp and the next timestamp in the loop is
 the "outer latency".
2327
2328 ts:1581527483.343962693
2329
2330 The absolute timestamp that the first latency was recorded in the window.
2331
2332 count:6
2333
2334 The number of times a latency was detected during the window.
2335
2336 nmi-total:7 nmi-count:1
2337
2338 On architectures that support it, if an NMI comes in during the
2339 test, the time spent in NMI is reported in "nmi-total" (in
2340 microseconds).
2341
2342 All architectures that have NMIs will show the "nmi-count" if an
2343 NMI comes in during the test.
2344
2345hwlat files:
2346
2347 tracing_threshold
2348 This gets automatically set to "10" to represent 10
2349 microseconds. This is the threshold of latency that
2350 needs to be detected before the trace will be recorded.
2351
2352 Note, when hwlat tracer is finished (another tracer is
2353 written into "current_tracer"), the original value for
2354 tracing_threshold is placed back into this file.
2355
2356 hwlat_detector/width
2357 The length of time the test runs with interrupts disabled.
2358
2359 hwlat_detector/window
 The length of time of the window in which the test
 runs. That is, the test will run for "width"
 microseconds per "window" microseconds.
2363
2364 tracing_cpumask
 When the test is started, a kernel thread is created that
 runs the test. This thread will alternate between CPUs
 listed in the tracing_cpumask between each period
 (one "window"). To limit the test to specific CPUs,
 set the mask in this file to only the CPUs that the test
 should run on.
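
For example, a session that raises the threshold to 20 microseconds,
runs the test for 500000 out of every 1000000 microseconds, and
restricts it to CPUs 0 and 1 might look like this (the values are only
illustrative)::

 # echo hwlat > current_tracer
 # echo 20 > tracing_threshold
 # echo 500000 > hwlat_detector/width
 # echo 1000000 > hwlat_detector/window
 # echo 3 > tracing_cpumask
 # sleep 100
 # cat trace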
2371
2372function
2373--------
2374
2375This tracer is the function tracer. Enabling the function tracer
2376can be done from the debug file system. Make sure the
2377ftrace_enabled is set; otherwise this tracer is a nop.
2378See the "ftrace_enabled" section below.
2379::
2380
2381 # sysctl kernel.ftrace_enabled=1
2382 # echo function > current_tracer
2383 # echo 1 > tracing_on
2384 # usleep 1
2385 # echo 0 > tracing_on
2386 # cat trace
2387 # tracer: function
2388 #
2389 # entries-in-buffer/entries-written: 24799/24799 #P:4
2390 #
2391 # _-----=> irqs-off
2392 # / _----=> need-resched
2393 # | / _---=> hardirq/softirq
2394 # || / _--=> preempt-depth
2395 # ||| / delay
2396 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
2397 # | | | |||| | |
2398 bash-1994 [002] .... 3082.063030: mutex_unlock <-rb_simple_write
2399 bash-1994 [002] .... 3082.063031: __mutex_unlock_slowpath <-mutex_unlock
2400 bash-1994 [002] .... 3082.063031: __fsnotify_parent <-fsnotify_modify
2401 bash-1994 [002] .... 3082.063032: fsnotify <-fsnotify_modify
2402 bash-1994 [002] .... 3082.063032: __srcu_read_lock <-fsnotify
2403 bash-1994 [002] .... 3082.063032: add_preempt_count <-__srcu_read_lock
2404 bash-1994 [002] ...1 3082.063032: sub_preempt_count <-__srcu_read_lock
2405 bash-1994 [002] .... 3082.063033: __srcu_read_unlock <-fsnotify
2406 [...]
2407
2408
2409Note: function tracer uses ring buffers to store the above
2410entries. The newest data may overwrite the oldest data.
2411Sometimes using echo to stop the trace is not sufficient because
2412the tracing could have overwritten the data that you wanted to
2413record. For this reason, it is sometimes better to disable
2414tracing directly from a program. This allows you to stop the
2415tracing at the point that you hit the part that you are
2416interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used::
2418
2419 int trace_fd;
2420 [...]
2421 int main(int argc, char *argv[]) {
2422 [...]
2423 trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
2424 [...]
2425 if (condition_hit()) {
2426 write(trace_fd, "0", 1);
2427 }
2428 [...]
2429 }
2430
2431
2432Single thread tracing
2433---------------------
2434
2435By writing into set_ftrace_pid you can trace a
2436single thread. For example::
2437
2438 # cat set_ftrace_pid
2439 no pid
2440 # echo 3111 > set_ftrace_pid
2441 # cat set_ftrace_pid
2442 3111
2443 # echo function > current_tracer
2444 # cat trace | head
2445 # tracer: function
2446 #
2447 # TASK-PID CPU# TIMESTAMP FUNCTION
2448 # | | | | |
2449 yum-updatesd-3111 [003] 1637.254676: finish_task_switch <-thread_return
2450 yum-updatesd-3111 [003] 1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
2451 yum-updatesd-3111 [003] 1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
2452 yum-updatesd-3111 [003] 1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
2453 yum-updatesd-3111 [003] 1637.254685: fget_light <-do_sys_poll
2454 yum-updatesd-3111 [003] 1637.254686: pipe_poll <-do_sys_poll
2455 # echo > set_ftrace_pid
2456 # cat trace |head
2457 # tracer: function
2458 #
2459 # TASK-PID CPU# TIMESTAMP FUNCTION
2460 # | | | | |
2461 ##### CPU 3 buffer started ####
2462 yum-updatesd-3111 [003] 1701.957688: free_poll_entry <-poll_freewait
2463 yum-updatesd-3111 [003] 1701.957689: remove_wait_queue <-free_poll_entry
2464 yum-updatesd-3111 [003] 1701.957691: fput <-free_poll_entry
2465 yum-updatesd-3111 [003] 1701.957692: audit_syscall_exit <-sysret_audit
2466 yum-updatesd-3111 [003] 1701.957693: path_put <-audit_syscall_exit
2467
If you want to trace only a particular command as it executes, you
could use something like this simple program.
2470::
2471
2472 #include <stdio.h>
2473 #include <stdlib.h>
2474 #include <sys/types.h>
2475 #include <sys/stat.h>
2476 #include <fcntl.h>
2477 #include <unistd.h>
2478 #include <string.h>
2479
2480 #define _STR(x) #x
2481 #define STR(x) _STR(x)
2482 #define MAX_PATH 256
2483
2484 const char *find_tracefs(void)
2485 {
2486 static char tracefs[MAX_PATH+1];
2487 static int tracefs_found;
2488 char type[100];
2489 FILE *fp;
2490
2491 if (tracefs_found)
2492 return tracefs;
2493
2494 if ((fp = fopen("/proc/mounts","r")) == NULL) {
2495 perror("/proc/mounts");
2496 return NULL;
2497 }
2498
2499 while (fscanf(fp, "%*s %"
2500 STR(MAX_PATH)
2501 "s %99s %*s %*d %*d\n",
2502 tracefs, type) == 2) {
2503 if (strcmp(type, "tracefs") == 0)
2504 break;
2505 }
2506 fclose(fp);
2507
2508 if (strcmp(type, "tracefs") != 0) {
2509 fprintf(stderr, "tracefs not mounted");
2510 return NULL;
2511 }
2512
 strcat(tracefs, "/");
2514 tracefs_found = 1;
2515
2516 return tracefs;
2517 }
2518
2519 const char *tracing_file(const char *file_name)
2520 {
2521 static char trace_file[MAX_PATH+1];
2522 snprintf(trace_file, MAX_PATH, "%s/%s", find_tracefs(), file_name);
2523 return trace_file;
2524 }
2525
2526 int main (int argc, char **argv)
2527 {
 if (argc < 2)
 exit(-1);
2530
2531 if (fork() > 0) {
2532 int fd, ffd;
2533 char line[64];
2534 int s;
2535
2536 ffd = open(tracing_file("current_tracer"), O_WRONLY);
2537 if (ffd < 0)
2538 exit(-1);
2539 write(ffd, "nop", 3);
2540
2541 fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
2542 s = sprintf(line, "%d\n", getpid());
2543 write(fd, line, s);
2544
2545 write(ffd, "function", 8);
2546
2547 close(fd);
2548 close(ffd);
2549
2550 execvp(argv[1], argv+1);
2551 }
2552
2553 return 0;
2554 }
2555
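Assuming the program above is saved as runtrace.c (the file name is
arbitrary) and that tracing_on is already set to 1, it could be built
and used like this to trace only the given command::

 # gcc -o runtrace runtrace.c
 # ./runtrace ls -l
 # cat trace
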
2556Or this simple script!
2557::
2558
2559 #!/bin/bash
2560
2561 tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts`
2562 echo 0 > $tracefs/tracing_on
2563 echo $$ > $tracefs/set_ftrace_pid
2564 echo function > $tracefs/current_tracer
2565 echo 1 > $tracefs/tracing_on
2566 exec "$@"
2567
2568
2569function graph tracer
2570---------------------------
2571
This tracer is similar to the function tracer except that it
probes a function on its entry and its exit. This is done by
using a dynamically allocated stack of return addresses in each
task_struct. On function entry the tracer overwrites the return
address of each function traced to set a custom probe. Thus the
original return address is stored on the stack of return addresses
in the task_struct.
2579
Probing on both ends of a function leads to special features
such as:

- measuring a function's execution time
- having a reliable call stack to draw a graph of function calls
2585
2586This tracer is useful in several situations:
2587
- you want to find the reason for strange kernel behavior and
  need to see in detail what happens in any area (or in specific
  ones).

- you are experiencing weird latencies but it's difficult to
  find their origin.

- you want to quickly find which path is taken by a specific
  function
2597
2598- you just want to peek inside a working kernel and want to see
2599 what happens there.
2600
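Before looking at an example, here is a minimal session that enables
this tracer, following the same pattern used for the other tracers in
this document (usleep is just a convenient short-lived workload)::

 # echo function_graph > current_tracer
 # echo 1 > tracing_on
 # usleep 1
 # echo 0 > tracing_on
 # cat trace

An excerpt of the resulting output looks like this:
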
2601::
2602
2603 # tracer: function_graph
2604 #
2605 # CPU DURATION FUNCTION CALLS
2606 # | | | | | | |
2607
2608 0) | sys_open() {
2609 0) | do_sys_open() {
2610 0) | getname() {
2611 0) | kmem_cache_alloc() {
2612 0) 1.382 us | __might_sleep();
2613 0) 2.478 us | }
2614 0) | strncpy_from_user() {
2615 0) | might_fault() {
2616 0) 1.389 us | __might_sleep();
2617 0) 2.553 us | }
2618 0) 3.807 us | }
2619 0) 7.876 us | }
2620 0) | alloc_fd() {
2621 0) 0.668 us | _spin_lock();
2622 0) 0.570 us | expand_files();
2623 0) 0.586 us | _spin_unlock();
2624
2625
2626There are several columns that can be dynamically
2627enabled/disabled. You can use every combination of options you
2628want, depending on your needs.
2629
- The cpu number on which the function executed is default
  enabled. It is sometimes better to only trace one cpu (see
  tracing_cpumask file) or you might sometimes see unordered
  function calls when the trace switches CPUs.
2634
2635 - hide: echo nofuncgraph-cpu > trace_options
2636 - show: echo funcgraph-cpu > trace_options
2637
- The duration (function's time of execution) is displayed on
  the closing bracket line of a function or on the same line
  as the current function in the case of a leaf one. It is default
  enabled.
2642
2643 - hide: echo nofuncgraph-duration > trace_options
2644 - show: echo funcgraph-duration > trace_options
2645
- The overhead field precedes the duration field when the
  duration exceeds one of the thresholds listed under "Flags" below.
2648
2649 - hide: echo nofuncgraph-overhead > trace_options
2650 - show: echo funcgraph-overhead > trace_options
2651 - depends on: funcgraph-duration
2652
2653 ie::
2654
2655 3) # 1837.709 us | } /* __switch_to */
2656 3) | finish_task_switch() {
2657 3) 0.313 us | _raw_spin_unlock_irq();
2658 3) 3.177 us | }
2659 3) # 1889.063 us | } /* __schedule */
2660 3) ! 140.417 us | } /* __schedule */
2661 3) # 2034.948 us | } /* schedule */
2662 3) * 33998.59 us | } /* schedule_preempt_disabled */
2663
2664 [...]
2665
2666 1) 0.260 us | msecs_to_jiffies();
2667 1) 0.313 us | __rcu_read_unlock();
2668 1) + 61.770 us | }
2669 1) + 64.479 us | }
2670 1) 0.313 us | rcu_bh_qs();
2671 1) 0.313 us | __local_bh_enable();
2672 1) ! 217.240 us | }
2673 1) 0.365 us | idle_cpu();
2674 1) | rcu_irq_exit() {
2675 1) 0.417 us | rcu_eqs_enter_common.isra.47();
2676 1) 3.125 us | }
2677 1) ! 227.812 us | }
2678 1) ! 457.395 us | }
2679 1) @ 119760.2 us | }
2680
2681 [...]
2682
2683 2) | handle_IPI() {
2684 1) 6.979 us | }
2685 2) 0.417 us | scheduler_ipi();
2686 1) 9.791 us | }
2687 1) + 12.917 us | }
2688 2) 3.490 us | }
2689 1) + 15.729 us | }
2690 1) + 18.542 us | }
2691 2) $ 3594274 us | }
2692
2693Flags::
2694
2695 + means that the function exceeded 10 usecs.
2696 ! means that the function exceeded 100 usecs.
2697 # means that the function exceeded 1000 usecs.
2698 * means that the function exceeded 10 msecs.
2699 @ means that the function exceeded 100 msecs.
2700 $ means that the function exceeded 1 sec.
2701
2702
2703- The task/pid field displays the thread cmdline and pid which
2704 executed the function. It is default disabled.
2705
2706 - hide: echo nofuncgraph-proc > trace_options
2707 - show: echo funcgraph-proc > trace_options
2708
2709 ie::
2710
2711 # tracer: function_graph
2712 #
2713 # CPU TASK/PID DURATION FUNCTION CALLS
2714 # | | | | | | | | |
2715 0) sh-4802 | | d_free() {
2716 0) sh-4802 | | call_rcu() {
2717 0) sh-4802 | | __call_rcu() {
2718 0) sh-4802 | 0.616 us | rcu_process_gp_end();
2719 0) sh-4802 | 0.586 us | check_for_new_grace_period();
2720 0) sh-4802 | 2.899 us | }
2721 0) sh-4802 | 4.040 us | }
2722 0) sh-4802 | 5.151 us | }
2723 0) sh-4802 | + 49.370 us | }
2724
2725
- The absolute time field is an absolute timestamp given by the
  system clock since it started. A snapshot of this time is
  given on each entry/exit of functions.
2729
2730 - hide: echo nofuncgraph-abstime > trace_options
2731 - show: echo funcgraph-abstime > trace_options
2732
2733 ie::
2734
2735 #
2736 # TIME CPU DURATION FUNCTION CALLS
2737 # | | | | | | | |
2738 360.774522 | 1) 0.541 us | }
2739 360.774522 | 1) 4.663 us | }
2740 360.774523 | 1) 0.541 us | __wake_up_bit();
2741 360.774524 | 1) 6.796 us | }
2742 360.774524 | 1) 7.952 us | }
2743 360.774525 | 1) 9.063 us | }
2744 360.774525 | 1) 0.615 us | journal_mark_dirty();
2745 360.774527 | 1) 0.578 us | __brelse();
2746 360.774528 | 1) | reiserfs_prepare_for_journal() {
2747 360.774528 | 1) | unlock_buffer() {
2748 360.774529 | 1) | wake_up_bit() {
2749 360.774529 | 1) | bit_waitqueue() {
2750 360.774530 | 1) 0.594 us | __phys_addr();
2751
2752
2753The function name is always displayed after the closing bracket
2754for a function if the start of that function is not in the
2755trace buffer.
2756
2757Display of the function name after the closing bracket may be
2758enabled for functions whose start is in the trace buffer,
2759allowing easier searching with grep for function durations.
2760It is default disabled.
2761
2762 - hide: echo nofuncgraph-tail > trace_options
2763 - show: echo funcgraph-tail > trace_options
2764
2765 Example with nofuncgraph-tail (default)::
2766
2767 0) | putname() {
2768 0) | kmem_cache_free() {
2769 0) 0.518 us | __phys_addr();
2770 0) 1.757 us | }
2771 0) 2.861 us | }
2772
2773 Example with funcgraph-tail::
2774
2775 0) | putname() {
2776 0) | kmem_cache_free() {
2777 0) 0.518 us | __phys_addr();
2778 0) 1.757 us | } /* kmem_cache_free() */
2779 0) 2.861 us | } /* putname() */
2780
2781The return value of each traced function can be displayed after
2782an equal sign "=". When encountering system call failures, it
2783can be very helpful to quickly locate the function that first
2784returns an error code.
2785
2786 - hide: echo nofuncgraph-retval > trace_options
2787 - show: echo funcgraph-retval > trace_options
2788
2789 Example with funcgraph-retval::
2790
2791 1) | cgroup_migrate() {
2792 1) 0.651 us | cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2793 1) | cgroup_migrate_execute() {
2794 1) | cpu_cgroup_can_attach() {
2795 1) | cgroup_taskset_first() {
2796 1) 0.732 us | cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2797 1) 1.232 us | } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2798 1) 0.380 us | sched_rt_can_attach(); /* = 0x0 */
2799 1) 2.335 us | } /* cpu_cgroup_can_attach = -22 */
2800 1) 4.369 us | } /* cgroup_migrate_execute = -22 */
2801 1) 7.143 us | } /* cgroup_migrate = -22 */
2802
The above example shows that the function cpu_cgroup_can_attach
was the first to return the error code -22; we can then read the
code of that function to find the root cause.
2806
When the option funcgraph-retval-hex is not set, the return value is
displayed in a smart way. Specifically, if it is an error code, it
will be printed in signed decimal format; otherwise it will be
printed in hexadecimal format.
2811
2812 - smart: echo nofuncgraph-retval-hex > trace_options
2813 - hexadecimal: echo funcgraph-retval-hex > trace_options
2814
2815 Example with funcgraph-retval-hex::
2816
2817 1) | cgroup_migrate() {
2818 1) 0.651 us | cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
2819 1) | cgroup_migrate_execute() {
2820 1) | cpu_cgroup_can_attach() {
2821 1) | cgroup_taskset_first() {
2822 1) 0.732 us | cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
2823 1) 1.232 us | } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
2824 1) 0.380 us | sched_rt_can_attach(); /* = 0x0 */
2825 1) 2.335 us | } /* cpu_cgroup_can_attach = 0xffffffea */
2826 1) 4.369 us | } /* cgroup_migrate_execute = 0xffffffea */
2827 1) 7.143 us | } /* cgroup_migrate = 0xffffffea */
2828
2829At present, there are some limitations when using the funcgraph-retval
2830option, and these limitations will be eliminated in the future:
2831
2832- Even if the function return type is void, a return value will still
2833 be printed, and you can just ignore it.
2834
2835- Even if return values are stored in multiple registers, only the
2836 value contained in the first register will be recorded and printed.
2837 To illustrate, in the x86 architecture, eax and edx are used to store
2838 a 64-bit return value, with the lower 32 bits saved in eax and the
2839 upper 32 bits saved in edx. However, only the value stored in eax
2840 will be recorded and printed.
2841
2842- In certain procedure call standards, such as arm64's AAPCS64, when a
2843 type is smaller than a GPR, it is the responsibility of the consumer
2844 to perform the narrowing, and the upper bits may contain UNKNOWN values.
2845 Therefore, it is advisable to check the code for such cases. For instance,
2846 when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
2847 especially when larger types are truncated, whether explicitly or implicitly.
2848 Here are some specific cases to illustrate this point:
2849
2850 **Case One**:
2851
2852 The function narrow_to_u8 is defined as follows::
2853
2854 u8 narrow_to_u8(u64 val)
2855 {
2856 // implicitly truncated
2857 return val;
2858 }
2859
2860 It may be compiled to::
2861
2862 narrow_to_u8:
2863 < ... ftrace instrumentation ... >
2864 RET
2865
2866 If you pass 0x123456789abcdef to this function and want to narrow it,
2867 it may be recorded as 0x123456789abcdef instead of 0xef.
2868
2869 **Case Two**:
2870
2871 The function error_if_not_4g_aligned is defined as follows::
2872
2873 int error_if_not_4g_aligned(u64 val)
2874 {
2875 if (val & GENMASK(31, 0))
2876 return -EINVAL;
2877
2878 return 0;
2879 }
2880
2881 It could be compiled to::
2882
2883 error_if_not_4g_aligned:
2884 CBNZ w0, .Lnot_aligned
2885 RET // bits [31:0] are zero, bits
2886 // [63:32] are UNKNOWN
2887 .Lnot_aligned:
2888 MOV x0, #-EINVAL
2889 RET
2890
2891 When passing 0x2_0000_0000 to it, the return value may be recorded as
2892 0x2_0000_0000 instead of 0.
2893
You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep()::
2898
2899 trace_printk("I'm a comment!\n")
2900
2901will produce::
2902
2903 1) | __might_sleep() {
2904 1) | /* I'm a comment! */
2905 1) 1.449 us | }
2906
2907
2908You might find other useful features for this tracer in the
2909following "dynamic ftrace" section such as tracing only specific
2910functions or tasks.
2911
2912dynamic ftrace
2913--------------
2914
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)
2921
At compile time every C file object is run through the
recordmcount program (located in the scripts directory). This
program will parse the ELF headers in the C object to find all
the locations in the .text section that call mcount. Starting
with gcc version 4.6, the -mfentry option has been added for x86,
which calls "__fentry__" instead of "mcount" and is called before
the creation of the stack frame.
2929
Note, not all functions are traced. They may be prevented by a
notrace annotation, reside in a section that is not traced, or be
blocked in some other way; in addition, inline functions are not
traced. Check the "available_filter_functions" file to see which
functions can be traced.
2934
2935A section called "__mcount_loc" is created that holds
2936references to all the mcount/fentry call sites in the .text section.
2937The recordmcount program re-links this section back into the
2938original object. The final linking stage of the kernel will add all these
2939references into a single table.
2940
2941On boot up, before SMP is initialized, the dynamic ftrace code
2942scans this table and updates all the locations into nops. It
2943also records the locations, which are added to the
2944available_filter_functions list. Modules are processed as they
2945are loaded and before they are executed. When a module is
2946unloaded, it also removes its functions from the ftrace function
2947list. This is automatic in the module unload code, and the
2948module author does not need to worry about it.
2949
2950When tracing is enabled, the process of modifying the function
2951tracepoints is dependent on architecture. The old method is to use
2952kstop_machine to prevent races with the CPUs executing code being
2953modified (which can cause the CPU to do undesirable things, especially
2954if the modified code crosses cache (or page) boundaries), and the nops are
2955patched back to calls. But this time, they do not call mcount
2956(which is just a function stub). They now call into the ftrace
2957infrastructure.
2958
The newer method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, modify
the rest of the instruction not covered by the breakpoint, sync
all CPUs again, and then replace the breakpoint with the finished
version of the ftrace call site.
2964
2965Some archs do not even need to monkey around with the synchronization,
2966and can just slap the new code on top of the old without any
2967problems with other CPUs executing it at the same time.
2968
2969One special side-effect to the recording of the functions being
2970traced is that we can now selectively choose which functions we
2971wish to trace and which ones we want the mcount calls to remain
2972as nops.
2973
2974Two files are used, one for enabling and one for disabling the
2975tracing of specified functions. They are:
2976
2977 set_ftrace_filter
2978
2979and
2980
2981 set_ftrace_notrace
2982
2983A list of available functions that you can add to these files is
2984listed in:
2985
2986 available_filter_functions
2987
2988::
2989
2990 # cat available_filter_functions
2991 put_prev_task_idle
2992 kmem_cache_create
2993 pick_next_task_rt
2994 cpus_read_lock
2995 pick_next_task_fair
2996 mutex_lock
2997 [...]
2998
2999If I am only interested in sys_nanosleep and hrtimer_interrupt::
3000
3001 # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
3002 # echo function > current_tracer
3003 # echo 1 > tracing_on
3004 # usleep 1
3005 # echo 0 > tracing_on
3006 # cat trace
3007 # tracer: function
3008 #
3009 # entries-in-buffer/entries-written: 5/5 #P:4
3010 #
3011 # _-----=> irqs-off
3012 # / _----=> need-resched
3013 # | / _---=> hardirq/softirq
3014 # || / _--=> preempt-depth
3015 # ||| / delay
3016 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
3017 # | | | |||| | |
3018 usleep-2665 [001] .... 4186.475355: sys_nanosleep <-system_call_fastpath
3019 <idle>-0 [001] d.h1 4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
3020 usleep-2665 [001] d.h1 4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3021 <idle>-0 [003] d.h1 4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
3022 <idle>-0 [002] d.h1 4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
3023
3024To see which functions are being traced, you can cat the file:
3025::
3026
3027 # cat set_ftrace_filter
3028 hrtimer_interrupt
3029 sys_nanosleep
3030
3031
Perhaps this is not enough. The filters also allow glob(7) matching.

  ``<match>*``
      will match functions that begin with <match>
  ``*<match>``
      will match functions that end with <match>
  ``*<match>*``
      will match functions that have <match> in it
  ``<match1>*<match2>``
      will match functions that begin with <match1> and end with <match2>

.. note::
   It is better to use quotes to enclose the wild cards,
   otherwise the shell may expand the parameters into names
   of files in the local directory.

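The other glob forms work the same way. As a purely illustrative
sketch (the function names below are just examples taken from the
available_filter_functions listing above)::

  # echo '*_lock' > set_ftrace_filter          # ends with _lock, e.g. mutex_lock
  # echo '*task*' >> set_ftrace_filter         # contains task, e.g. pick_next_task_rt
  # echo 'pick_*_fair' >> set_ftrace_filter    # begins with pick_ and ends with _fair
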
::

  # echo 'hrtimer_*' > set_ftrace_filter

Produces::

  # tracer: function
  #
  # entries-in-buffer/entries-written: 897/897 #P:4
  #
  # _-----=> irqs-off
  # / _----=> need-resched
  # | / _---=> hardirq/softirq
  # || / _--=> preempt-depth
  # ||| / delay
  # TASK-PID CPU# |||| TIMESTAMP FUNCTION
  # | | | |||| | |
  <idle>-0 [003] dN.1 4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
  <idle>-0 [003] dN.1 4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
  <idle>-0 [003] dN.2 4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
  <idle>-0 [003] dN.1 4228.547805: hrtimer_forward <-tick_nohz_idle_exit
  <idle>-0 [003] dN.1 4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
  <idle>-0 [003] d..1 4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
  <idle>-0 [003] d..1 4228.547859: hrtimer_start <-__tick_nohz_idle_enter
  <idle>-0 [003] d..2 4228.547860: hrtimer_force_reprogram <-__rem

Notice that we lost sys_nanosleep.
::

  # cat set_ftrace_filter
  hrtimer_run_queues
  hrtimer_run_pending
  hrtimer_init
  hrtimer_cancel
  hrtimer_try_to_cancel
  hrtimer_forward
  hrtimer_start
  hrtimer_reprogram
  hrtimer_force_reprogram
  hrtimer_get_next_event
  hrtimer_interrupt
  hrtimer_nanosleep
  hrtimer_wakeup
  hrtimer_get_remaining
  hrtimer_get_res
  hrtimer_init_sleeper


This is because the '>' and '>>' act just like they do in bash.
To rewrite the filters, use '>'.
To append to the filters, use '>>'.

To clear out a filter so that all functions will be recorded
again::

  # echo > set_ftrace_filter
  # cat set_ftrace_filter
  #

Again, now we want to append.

::

  # echo sys_nanosleep > set_ftrace_filter
  # cat set_ftrace_filter
  sys_nanosleep
  # echo 'hrtimer_*' >> set_ftrace_filter
  # cat set_ftrace_filter
  hrtimer_run_queues
  hrtimer_run_pending
  hrtimer_init
  hrtimer_cancel
  hrtimer_try_to_cancel
  hrtimer_forward
  hrtimer_start
  hrtimer_reprogram
  hrtimer_force_reprogram
  hrtimer_get_next_event
  hrtimer_interrupt
  sys_nanosleep
  hrtimer_nanosleep
  hrtimer_wakeup
  hrtimer_get_remaining
  hrtimer_get_res
  hrtimer_init_sleeper


The set_ftrace_notrace file prevents the listed functions from being
traced.
::

  # echo '*preempt*' '*lock*' > set_ftrace_notrace

Produces::

  # tracer: function
  #
  # entries-in-buffer/entries-written: 39608/39608 #P:4
  #
  # _-----=> irqs-off
  # / _----=> need-resched
  # | / _---=> hardirq/softirq
  # || / _--=> preempt-depth
  # ||| / delay
  # TASK-PID CPU# |||| TIMESTAMP FUNCTION
  # | | | |||| | |
  bash-1994 [000] .... 4342.324896: file_ra_state_init <-do_dentry_open
  bash-1994 [000] .... 4342.324897: open_check_o_direct <-do_last
  bash-1994 [000] .... 4342.324897: ima_file_check <-do_last
  bash-1994 [000] .... 4342.324898: process_measurement <-ima_file_check
  bash-1994 [000] .... 4342.324898: ima_get_action <-process_measurement
  bash-1994 [000] .... 4342.324898: ima_match_policy <-ima_get_action
  bash-1994 [000] .... 4342.324899: do_truncate <-do_last
  bash-1994 [000] .... 4342.324899: setattr_should_drop_suidgid <-do_truncate
  bash-1994 [000] .... 4342.324899: notify_change <-do_truncate
  bash-1994 [000] .... 4342.324900: current_fs_time <-notify_change
  bash-1994 [000] .... 4342.324900: current_kernel_time <-current_fs_time
  bash-1994 [000] .... 4342.324900: timespec_trunc <-current_fs_time

We can see that there's no more lock or preempt tracing.

Selecting function filters via index
------------------------------------

Because processing of strings is expensive (the address of the function
needs to be looked up before comparing to the string being passed in),
an index can be used as well to enable functions. This is useful in the
case of setting thousands of specific functions at a time. By passing
in a list of numbers, no string processing will occur. Instead, the function
at the specific location in the internal array (which corresponds to the
functions in the "available_filter_functions" file) is selected.

::

  # echo 1 > set_ftrace_filter

Will select the first function listed in "available_filter_functions"

::

  # head -1 available_filter_functions
  trace_initcall_finish_cb

  # cat set_ftrace_filter
  trace_initcall_finish_cb

  # head -50 available_filter_functions | tail -1
  x86_pmu_commit_txn

  # echo 1 50 > set_ftrace_filter
  # cat set_ftrace_filter
  trace_initcall_finish_cb
  x86_pmu_commit_txn

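Because the numbers are simply indexes into available_filter_functions,
large batches can be generated with ordinary shell tools. A minimal
sketch (assuming the coreutils seq utility is available; the range
1 to 100 is only an example)::

  # echo "$(seq 1 100)" > set_ftrace_filter

This writes the indexes 1 through 100, selecting the first one hundred
functions listed in available_filter_functions without any string
matching.
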
Dynamic ftrace with the function graph tracer
---------------------------------------------

Although what has been explained above concerns both the
function tracer and the function-graph-tracer, there are some
special features only available in the function-graph tracer.

If you want to trace only one function and all of its children,
you just have to echo its name into set_graph_function::

  echo __do_fault > set_graph_function

will produce the following "expanded" trace of the __do_fault()
function::

  0)               |  __do_fault() {
  0)               |    filemap_fault() {
  0)               |      find_lock_page() {
  0)   0.804 us    |        find_get_page();
  0)               |        __might_sleep() {
  0)   1.329 us    |        }
  0)   3.904 us    |      }
  0)   4.979 us    |    }
  0)   0.653 us    |    _spin_lock();
  0)   0.578 us    |    page_add_file_rmap();
  0)   0.525 us    |    native_set_pte_at();
  0)   0.585 us    |    _spin_unlock();
  0)               |    unlock_page() {
  0)   0.541 us    |      page_waitqueue();
  0)   0.639 us    |      __wake_up_bit();
  0)   2.786 us    |    }
  0) + 14.237 us   |  }
  0)               |  __do_fault() {
  0)               |    filemap_fault() {
  0)               |      find_lock_page() {
  0)   0.698 us    |        find_get_page();
  0)               |        __might_sleep() {
  0)   1.412 us    |        }
  0)   3.950 us    |      }
  0)   5.098 us    |    }
  0)   0.631 us    |    _spin_lock();
  0)   0.571 us    |    page_add_file_rmap();
  0)   0.526 us    |    native_set_pte_at();
  0)   0.586 us    |    _spin_unlock();
  0)               |    unlock_page() {
  0)   0.533 us    |      page_waitqueue();
  0)   0.638 us    |      __wake_up_bit();
  0)   2.793 us    |    }
  0) + 14.012 us   |  }

You can also expand several functions at once::

  echo sys_open > set_graph_function
  echo sys_close >> set_graph_function

Now if you want to go back to trace all functions you can clear
this special filter via::

  echo > set_graph_function


ftrace_enabled
--------------

Note, the proc sysctl ftrace_enabled is a big on/off switch for the
function tracer. By default it is enabled (when function tracing is
enabled in the kernel). If it is disabled, all function tracing is
disabled. This includes not only the function tracers for ftrace, but
also any other users of function tracing (perf, kprobes, stack tracing,
profiling, etc). It cannot be disabled while a callback with
FTRACE_OPS_FL_PERMANENT set is registered.

Please disable this with care.

This can be disabled (and enabled) with::

  sysctl kernel.ftrace_enabled=0
  sysctl kernel.ftrace_enabled=1

  or

  echo 0 > /proc/sys/kernel/ftrace_enabled
  echo 1 > /proc/sys/kernel/ftrace_enabled


Filter commands
---------------

A few commands are supported by the set_ftrace_filter interface.
Trace commands have the following format::

  <function>:<command>:<parameter>

The following commands are supported:

- mod:
  This command enables function filtering per module. The
  parameter defines the module. For example, if only the write*
  functions in the ext3 module are desired, run::

    echo 'write*:mod:ext3' > set_ftrace_filter

  This command interacts with the filter in the same way as
  filtering based on function names. Thus, adding more functions
  in a different module is accomplished by appending (>>) to the
  filter file. Remove specific module functions by prepending
  '!'::

    echo '!writeback*:mod:ext3' >> set_ftrace_filter

  The mod command supports module globbing. Disable tracing for all
  functions except a specific module::

    echo '!*:mod:!ext3' >> set_ftrace_filter

  Disable tracing for all modules, but still trace kernel::

    echo '!*:mod:*' >> set_ftrace_filter

  Enable filter only for kernel::

    echo '*write*:mod:!*' >> set_ftrace_filter

  Enable filter for module globbing::

    echo '*write*:mod:*snd*' >> set_ftrace_filter

- traceon/traceoff:
  These commands turn tracing on and off when the specified
  functions are hit. The parameter determines how many times the
  tracing system is turned on and off. If unspecified, there is
  no limit. For example, to disable tracing when a schedule bug
  is hit the first 5 times, run::

    echo '__schedule_bug:traceoff:5' > set_ftrace_filter

  To always disable tracing when __schedule_bug is hit::

    echo '__schedule_bug:traceoff' > set_ftrace_filter

  These commands are cumulative whether or not they are appended
  to set_ftrace_filter. To remove a command, prepend it with '!'
  and drop the parameter::

    echo '!__schedule_bug:traceoff:0' > set_ftrace_filter

  The above removes the traceoff command for __schedule_bug
  that has a counter. To remove commands without counters::

    echo '!__schedule_bug:traceoff' > set_ftrace_filter

- snapshot:
  Will cause a snapshot to be triggered when the function is hit.
  ::

    echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter

  To only snapshot once:
  ::

    echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter

  To remove the above commands::

    echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
    echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter

- enable_event/disable_event:
  These commands can enable or disable a trace event. Note, because
  function tracing callbacks are very sensitive, when these commands
  are registered, the trace point is activated, but disabled in
  a "soft" mode. That is, the tracepoint will be called, but
  just will not be traced. The event tracepoint stays in this mode
  as long as there's a command that triggers it.
  ::

    echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
      set_ftrace_filter

  The format is::

    <function>:enable_event:<system>:<event>[:count]
    <function>:disable_event:<system>:<event>[:count]

  To remove the events commands::

    echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
      set_ftrace_filter
    echo '!schedule:disable_event:sched:sched_switch' > \
      set_ftrace_filter

- dump:
  When the function is hit, it will dump the contents of the ftrace
  ring buffer to the console. This is useful if you need to debug
  something, and want to dump the trace when a certain function
  is hit. Perhaps it's a function that is called before a triple
  fault happens and does not allow you to get a regular dump.

- cpudump:
  When the function is hit, it will dump the contents of the ftrace
  ring buffer for the current CPU to the console. Unlike the "dump"
  command, it only prints out the contents of the ring buffer for the
  CPU that executed the function that triggered the dump.

- stacktrace:
  When the function is hit, a stack trace is recorded. See the
  example following this list for how these last three commands
  can be set and removed.

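These three commands use the same ``<function>:<command>`` syntax as the
others. As a purely illustrative sketch (the function names are only
examples taken from the available_filter_functions listing earlier; any
traced function can be used)::

  # echo 'kmem_cache_create:dump' > set_ftrace_filter
  # echo 'mutex_lock:cpudump' >> set_ftrace_filter
  # echo 'pick_next_task_rt:stacktrace' >> set_ftrace_filter

As with the other commands, prepending '!' removes them again::

  # echo '!kmem_cache_create:dump' >> set_ftrace_filter
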
trace_pipe
----------

The trace_pipe outputs the same content as the trace file, but
the effect on the tracing is different. Every read from
trace_pipe is consumed. This means that subsequent reads will be
different. The trace is live.
::

  # echo function > current_tracer
  # cat trace_pipe > /tmp/trace.out &
  [1] 4153
  # echo 1 > tracing_on
  # usleep 1
  # echo 0 > tracing_on
  # cat trace
  # tracer: function
  #
  # entries-in-buffer/entries-written: 0/0 #P:4
  #
  # _-----=> irqs-off
  # / _----=> need-resched
  # | / _---=> hardirq/softirq
  # || / _--=> preempt-depth
  # ||| / delay
  # TASK-PID CPU# |||| TIMESTAMP FUNCTION
  # | | | |||| | |

  #
  # cat /tmp/trace.out
  bash-1994 [000] .... 5281.568961: mutex_unlock <-rb_simple_write
  bash-1994 [000] .... 5281.568963: __mutex_unlock_slowpath <-mutex_unlock
  bash-1994 [000] .... 5281.568963: __fsnotify_parent <-fsnotify_modify
  bash-1994 [000] .... 5281.568964: fsnotify <-fsnotify_modify
  bash-1994 [000] .... 5281.568964: __srcu_read_lock <-fsnotify
  bash-1994 [000] .... 5281.568964: add_preempt_count <-__srcu_read_lock
  bash-1994 [000] ...1 5281.568965: sub_preempt_count <-__srcu_read_lock
  bash-1994 [000] .... 5281.568965: __srcu_read_unlock <-fsnotify
  bash-1994 [000] .... 5281.568967: sys_dup2 <-system_call_fastpath


Note, reading the trace_pipe file will block until more input is
added. This is in contrast to the trace file. If any process opened
the trace file for reading, it will actually disable tracing and
prevent new entries from being added. The trace_pipe file does
not have this limitation.

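For example, a read of trace_pipe simply waits when there is nothing to
consume. A minimal illustration (assuming the coreutils timeout utility
is available; the 2 second limit is arbitrary)::

  # echo 0 > tracing_on
  # echo > trace
  # timeout 2 cat trace_pipe
  #

With tracing off and the buffer cleared, the read blocks until the
timeout ends it.
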
trace entries
-------------

Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of kilobytes allocated for each
CPU's buffer. To know the full size, multiply the number of
possible CPUs by this per-CPU size.
::

  # cat buffer_size_kb
  1408 (units kilobytes)

Or simply read buffer_total_size_kb
::

  # cat buffer_total_size_kb
  5632

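In this example the machine has 4 CPUs (note the "#P:4" in the trace
headers above), so 4 x 1408 KB gives the 5632 KB that
buffer_total_size_kb reports.
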
To modify the buffer size, simply echo in a number (in kilobytes,
i.e. 1024 byte units).
::

  # echo 10000 > buffer_size_kb
  # cat buffer_size_kb
  10000 (units kilobytes)

It will try to allocate as much as possible. If you allocate too
much, it can trigger the Out-Of-Memory killer.
::

  # echo 1000000000000 > buffer_size_kb
  -bash: echo: write error: Cannot allocate memory
  # cat buffer_size_kb
  85

The per_cpu buffers can be changed individually as well:
::

  # echo 10000 > per_cpu/cpu0/buffer_size_kb
  # echo 100 > per_cpu/cpu1/buffer_size_kb

When the per_cpu buffers are not the same, the buffer_size_kb
at the top level will just show an X.
::

  # cat buffer_size_kb
  X

This is where the buffer_total_size_kb is useful:
::

  # cat buffer_total_size_kb
  12916

Writing to the top level buffer_size_kb will reset all the buffers
to be the same again.

Snapshot
--------
CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
available to all non-latency tracers. (Latency tracers which
record max latency, such as "irqsoff" or "wakeup", can't use
this feature, since those are already using the snapshot
mechanism internally.)

Snapshot preserves a current trace buffer at a particular point
in time without stopping tracing. Ftrace swaps the current
buffer with a spare buffer, and tracing continues in the new
current (=previous spare) buffer.

The following tracefs files in "tracing" are related to this
feature:

  snapshot:

    This is used to take a snapshot and to read the output
    of the snapshot. Echo 1 into this file to allocate a
    spare buffer and to take a snapshot (swap), then read
    the snapshot from this file in the same format as
    "trace" (described above in the section "The File
    System"). Reading the snapshot and tracing can proceed
    in parallel. Once the spare buffer is allocated, echoing
    0 frees it, and echoing any other (positive) value clears
    the snapshot contents.
    More details are shown in the table below.

    +--------------+------------+------------+------------+
    |status\\input |     0      |     1      |    else    |
    +==============+============+============+============+
    |not allocated |(do nothing)| alloc+swap |(do nothing)|
    +--------------+------------+------------+------------+
    |allocated     |    free    |    swap    |   clear    |
    +--------------+------------+------------+------------+

Here is an example of using the snapshot feature.
::

  # echo 1 > events/sched/enable
  # echo 1 > snapshot
  # cat snapshot
  # tracer: nop
  #
  # entries-in-buffer/entries-written: 71/71 #P:8
  #
  # _-----=> irqs-off
  # / _----=> need-resched
  # | / _---=> hardirq/softirq
  # || / _--=> preempt-depth
  # ||| / delay
  # TASK-PID CPU# |||| TIMESTAMP FUNCTION
  # | | | |||| | |
  <idle>-0 [005] d... 2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
  sleep-2242 [005] d... 2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120 prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
  [...]
  <idle>-0 [002] d... 2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120

  # cat trace
  # tracer: nop
  #
  # entries-in-buffer/entries-written: 77/77 #P:8
  #
  # _-----=> irqs-off
  # / _----=> need-resched
  # | / _---=> hardirq/softirq
  # || / _--=> preempt-depth
  # ||| / delay
  # TASK-PID CPU# |||| TIMESTAMP FUNCTION
  # | | | |||| | |
  <idle>-0 [007] d... 2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
  snapshot-test-2-2229 [002] d... 2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
  [...]


If you try to use this snapshot feature when the current tracer is
one of the latency tracers, you will get the following results.
::

  # echo wakeup > current_tracer
  # echo 1 > snapshot
  bash: echo: write error: Device or resource busy
  # cat snapshot
  cat: snapshot: Device or resource busy


Instances
---------
In the tracefs tracing directory, there is a directory called "instances".
New directories can be created inside it with mkdir and removed with
rmdir. A directory created here will already contain files and other
directories after it is created.
::

  # mkdir instances/foo
  # ls instances/foo
  buffer_size_kb buffer_total_size_kb events free_buffer per_cpu
  set_event snapshot trace trace_clock trace_marker trace_options
  trace_pipe tracing_on

As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are independent of the main directory, and of any other
instances that are created.

The files in the new directory work just like the files with the
same name in the tracing directory except the buffer that is used
is a separate and new buffer. The files affect that buffer but do not
affect the main buffer with the exception of trace_options. Currently,
the trace_options affect all instances and the top level buffer
the same, but this may change in future releases. That is, options
may become specific to the instance they reside in.

Notice that none of the function tracer files are there, nor are
current_tracer and available_tracers. This is because the buffers
can currently only have events enabled for them.
::

  # mkdir instances/foo
  # mkdir instances/bar
  # mkdir instances/zoot
  # echo 100000 > buffer_size_kb
  # echo 1000 > instances/foo/buffer_size_kb
  # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
  # echo function > current_tracer
  # echo 1 > instances/foo/events/sched/sched_wakeup/enable
  # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
  # echo 1 > instances/foo/events/sched/sched_switch/enable
  # echo 1 > instances/bar/events/irq/enable
  # echo 1 > instances/zoot/events/syscalls/enable
  # cat trace_pipe
  CPU:2 [LOST 11745 EVENTS]
  bash-2044 [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
  bash-2044 [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
  bash-2044 [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
  bash-2044 [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
  bash-2044 [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
  bash-2044 [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
  bash-2044 [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
  bash-2044 [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
  bash-2044 [002] d... 10594.481034: __inc_zone_state <-zone_statistics
  bash-2044 [002] d... 10594.481034: __inc_zone_state <-zone_statistics
  bash-2044 [002] .... 10594.481035: arch_dup_task_struct <-copy_process
  [...]

  # cat instances/foo/trace_pipe
  bash-1998 [000] d..4 136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
  bash-1998 [000] dN.4 136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
  <idle>-0 [003] d.h3 136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
  <idle>-0 [003] d..3 136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
  rcu_preempt-9 [003] d..3 136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
  bash-1998 [000] d..4 136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
  bash-1998 [000] dN.4 136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
  bash-1998 [000] d..3 136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
  kworker/0:1-59 [000] d..4 136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
  kworker/0:1-59 [000] d..3 136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
  [...]

  # cat instances/bar/trace_pipe
  migration/1-14 [001] d.h3 138.732674: softirq_raise: vec=3 [action=NET_RX]
  <idle>-0 [001] dNh3 138.732725: softirq_raise: vec=3 [action=NET_RX]
  bash-1998 [000] d.h1 138.733101: softirq_raise: vec=1 [action=TIMER]
  bash-1998 [000] d.h1 138.733102: softirq_raise: vec=9 [action=RCU]
  bash-1998 [000] ..s2 138.733105: softirq_entry: vec=1 [action=TIMER]
  bash-1998 [000] ..s2 138.733106: softirq_exit: vec=1 [action=TIMER]
  bash-1998 [000] ..s2 138.733106: softirq_entry: vec=9 [action=RCU]
  bash-1998 [000] ..s2 138.733109: softirq_exit: vec=9 [action=RCU]
  sshd-1995 [001] d.h1 138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
  sshd-1995 [001] d.h1 138.733280: irq_handler_exit: irq=21 ret=unhandled
  sshd-1995 [001] d.h1 138.733281: irq_handler_entry: irq=21 name=eth0
  sshd-1995 [001] d.h1 138.733283: irq_handler_exit: irq=21 ret=handled
  [...]

  # cat instances/zoot/trace
  # tracer: nop
  #
  # entries-in-buffer/entries-written: 18996/18996 #P:4
  #
  # _-----=> irqs-off
  # / _----=> need-resched
  # | / _---=> hardirq/softirq
  # || / _--=> preempt-depth
  # ||| / delay
  # TASK-PID CPU# |||| TIMESTAMP FUNCTION
  # | | | |||| | |
  bash-1998 [000] d... 140.733501: sys_write -> 0x2
  bash-1998 [000] d... 140.733504: sys_dup2(oldfd: a, newfd: 1)
  bash-1998 [000] d... 140.733506: sys_dup2 -> 0x1
  bash-1998 [000] d... 140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
  bash-1998 [000] d... 140.733509: sys_fcntl -> 0x1
  bash-1998 [000] d... 140.733510: sys_close(fd: a)
  bash-1998 [000] d... 140.733510: sys_close -> 0x0
  bash-1998 [000] d... 140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
  bash-1998 [000] d... 140.733515: sys_rt_sigprocmask -> 0x0
  bash-1998 [000] d... 140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
  bash-1998 [000] d... 140.733516: sys_rt_sigaction -> 0x0

You can see that the trace of the topmost trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches.

To remove the instances, simply delete their directories:
::

  # rmdir instances/foo
  # rmdir instances/bar
  # rmdir instances/zoot

Note, if a process has a trace file open in one of the instance
directories, the rmdir will fail with EBUSY.


Stack trace
-----------
Since the kernel has a fixed-size stack, it is important not to
waste it in functions. A kernel developer must be conscious of
what they allocate on the stack. If they add too much, the system
can be in danger of a stack overflow, and corruption will occur,
usually leading to a system panic.

There are some tools that check this, usually with interrupts
periodically checking usage. But a check performed at every function
call is far more thorough. Because ftrace provides a function tracer,
it is convenient to check the stack size at every function call.
This is enabled via the stack tracer.

CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
::

  # echo 1 > /proc/sys/kernel/stack_tracer_enabled

You can also enable it from the kernel command line to trace
the stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line.

After running it for a few minutes, the output looks like:
::

  # cat stack_max_size
  2928

  # cat stack_trace
          Depth    Size   Location    (18 entries)
          -----    ----   --------
    0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
    1)     2704     160   find_busiest_group+0x31/0x1f1
    2)     2544     256   load_balance+0xd9/0x662
    3)     2288      80   idle_balance+0xbb/0x130
    4)     2208     128   __schedule+0x26e/0x5b9
    5)     2080      16   schedule+0x64/0x66
    6)     2064     128   schedule_timeout+0x34/0xe0
    7)     1936     112   wait_for_common+0x97/0xf1
    8)     1824      16   wait_for_completion+0x1d/0x1f
    9)     1808     128   flush_work+0xfe/0x119
   10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
   11)     1664      48   input_available_p+0x1d/0x5c
   12)     1616      48   n_tty_poll+0x6d/0x134
   13)     1568      64   tty_poll+0x64/0x7f
   14)     1504     880   do_select+0x31e/0x511
   15)      624     400   core_sys_select+0x177/0x216
   16)      224      96   sys_select+0x91/0xb9
   17)      128     128   system_call_fastpath+0x16/0x1b

Note, if -mfentry is being used by gcc, functions get traced before
they set up the stack frame. This means that leaf level functions
are not tested by the stack tracer when -mfentry is used.

Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.

More
----
More details can be found in the source code, in the `kernel/trace/*.c` files.