perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.


OPTIONS
-------
<command>...::
        Any command you can specify in a shell.

record::
        See STAT RECORD.

report::
        See STAT REPORT.

-e::
--event=::
        Select the PMU event. Selection can be:

        - a symbolic event name (use 'perf list' to list all events)

        - a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
          hexadecimal event descriptor.

        - a symbolically formed event like 'pmu/param1=0x3,param2/' where
          param1 and param2 are defined as formats for the PMU in
          /sys/bus/event_source/devices/<pmu>/format/*

          'percore' is an event qualifier that sums up the event counts for both
          hardware threads in a core. For example:
          perf stat -A -a -e cpu/event,percore=1/,otherevent ...

        - a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
          where M, N, K are numbers (in decimal, hex, octal format).
          Acceptable values for each of 'config', 'config1' and 'config2'
          parameters are defined by corresponding entries in
          /sys/bus/event_source/devices/<pmu>/format/*

        Note that the last two syntaxes support prefix and glob matching in
        the PMU name to simplify creation of events across multiple instances
        of the same type of PMU in large systems (e.g. memory controller PMUs).
        Multiple PMU instances are typical for uncore PMUs, so the prefix
        'uncore_' is also ignored when performing this match.

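For example, a raw event and a parameterized uncore event might be
counted like this (the event code 0x1a8 and the PMU instance name
'uncore_imc_0' are illustrative; valid values depend on the CPU and on
the entries under /sys/bus/event_source/devices/):

        perf stat -e r1a8 -a sleep 1
        perf stat -e 'uncore_imc_0/config=0x1,config1=0x2/' -a sleep 1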

-i::
--no-inherit::
        child tasks do not inherit counters
-p::
--pid=<pid>::
        stat events on existing process id (comma separated list)

-t::
--tid=<tid>::
        stat events on existing thread id (comma separated list)

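For example, to count events in already-running processes for ten
seconds (the PIDs below are placeholders):

        perf stat -p 1234,5678 -- sleep 10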

-a::
--all-cpus::
        system-wide collection from all CPUs (default if no target is specified)

--no-scale::
        Don't scale/normalize counter values

-d::
--detailed::
        print more detailed statistics; can be specified up to 3 times

                -d:          detailed events, L1 and LLC data cache
             -d -d:          more detailed events, dTLB and iTLB events
          -d -d -d:          very detailed events, adding prefetch events

-r::
--repeat=<n>::
        repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
        print large numbers with thousands' separators according to locale

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

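For example, to count cycles only on the first three CPUs:

        perf stat -C 0-2 -a -e cycles -- sleep 1
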
-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
        null run - don't start any counters

-v::
--verbose::
        be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.

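For example, using a semicolon as the separator (see the CSV FORMAT
section below for the field order):

        perf stat -x \; -e cycles -a -- sleep 1
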
--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

             # Table of individual measurements:
             5.189 (-0.293) #
             5.189 (-0.294) #
             5.186 (-0.296) #
             5.663 (+0.181) ##
             6.186 (+0.703) ####

             # Final result:
             5.483 +- 0.198 seconds time elapsed  ( +- 3.62% )

-G name::
--cgroup name::
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

To monitor, say, 'cycles' for a cgroup and also system-wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

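For example, to count cycles and instructions only while tasks of a
(hypothetical) cgroup 'foo' run on the monitored CPUs:

        perf stat -e cycles,instructions -G foo,foo -a -- sleep 10
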
-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::

Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:
     3>results  perf stat --log-fd 3          -- $cmd
     3>>results perf stat --log-fd 3 --append -- $cmd

--pre::
--post::
        Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every msecs milliseconds (minimum: 1 ms).
The overhead percentage could be high in some cases, for instance with small, sub-100ms intervals. Use with caution.
        example: 'perf stat -I 1000 -e cycles -a sleep 5'

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
        example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before printing the next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after msecs milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
        example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

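For example:

        perf stat --metric-only -a -- sleep 1
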
--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-die::
Aggregate counts per processor die for system-wide mode measurements. This
is a useful mode to detect imbalance between dies. To enable this mode,
use --per-die in addition to -a (system-wide). The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.

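For example, to see per-core cycle counts across the whole system:

        perf stat --per-core -a -e cycles -- sleep 1
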
--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring. This is useful to
filter out the startup phase of the program, which is often very different.

-T::
--transaction::

Print statistics of transactional execution if supported.

STAT RECORD
-----------
Stores stat data into a perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from a perf data file.

-i file::
--input file::
Input file name.

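For example, to save counter data into a file and inspect it later:

        perf stat record -o stat.data -- sleep 1
        perf stat report -i stat.data
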
--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-die::
Aggregate counts per processor die for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group, all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

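For example (metric and metricgroup names vary by CPU; 'perf list' shows
what is available):

        perf stat -M IPC -a -- sleep 1
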
-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top down level 1 metrics if supported by the CPU. This allows
determining bottlenecks in the CPU pipeline for CPU-bound workloads,
by breaking the cycles consumed down into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit and needs the NMI
watchdog disabled (as root):

        echo 0 > /proc/sys/kernel/nmi_watchdog

for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it is usually necessary to know which
CPUs the workload runs on. If needed, the CPUs can be pinned using
taskset.

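For example, to sample top down metrics once per second over a
ten-second window:

        perf stat --topdown -a -I 1000 -- sleep 10
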
--no-merge::
Do not merge results from the same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:
1. Prefix or glob matching is used for the PMU name.
2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.

--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured by (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for
performance-oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

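For example:

        perf stat --smi-cost -a -- sleep 10
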
EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As displayed in the example above, we can display three types of timings.
We always display the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions we also display the time the workloads spent in
user/system lands:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat emits output in a not-quite-CSV format: commas in the
output are not quoted with "". To make the output easy to parse, it is
recommended to use a different separator character, like -x \;

The fields are in this order:

        - optional usec time stamp in fractions of second (with -I xxx)
        - optional CPU, core, or socket identifier
        - optional number of logical CPUs aggregated
        - counter value
        - unit of the counter value or empty
        - event name
        - run time of counter
        - percentage of measurement time the counter was running
        - optional variance if multiple values are collected with -r
        - optional metric value
        - optional unit of metric

Additional metrics may be printed with all earlier fields being empty.

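An illustrative line of such output (the values are made up; real output
depends on the system and the options used):

        1001.79,msec,task-clock:u,1001790000,100.00,1.001,CPUs utilized
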
SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]