perf-c2c(1)
===========

NAME
----
perf-c2c - Shared Data C2C/HITM Analyzer.

SYNOPSIS
--------
[verse]
'perf c2c record' [<options>] <command>
'perf c2c record' [<options>] -- [<record command options>] <command>
'perf c2c report' [<options>]

DESCRIPTION
-----------
C2C stands for Cache To Cache.

The perf c2c tool provides means for Shared Data C2C/HITM analysis. It allows
you to track down cacheline contention.

On x86, the tool is based on the load latency and precise store facility events
provided by Intel CPUs. On PowerPC, the tool uses random instruction sampling
with the thresholding feature.

These events provide:
  - memory address of the access
  - type of the access (load and store details)
  - latency (in cycles) of the load access

The c2c tool provides means to record this data and report back access details
for the cachelines with the highest contention, i.e. the highest number of
HITM accesses.

The basic workflow with this tool follows the standard record/report phases:
use the record command to record event data and the report command to
display it.
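
For example, a minimal session could look like this ('./my_workload' stands in
for whatever command you want to analyze):

  # record shared data events while running the workload
  $ perf c2c record ./my_workload

  # display the most contended cachelines
  $ perf c2c report
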
RECORD OPTIONS
--------------
-e::
--event=::
	Select the PMU event. Use 'perf c2c record -e list'
	to list available events.

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-l::
--ldlat::
	Configure mem-loads latency. (x86 only)

-k::
--all-kernel::
	Configure all used events to run in kernel space.

-u::
--all-user::
	Configure all used events to run in user space.

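For example, to raise the load latency threshold and restrict sampling to user
space (the threshold value here is only illustrative):

  $ perf c2c record -l 50 -u ./my_workload
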
REPORT OPTIONS
--------------
-k::
--vmlinux=<file>::
	vmlinux pathname

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-i::
--input::
	Specify the input file to process.

-N::
--node-info::
	Show extra node info in report (see NODE INFO section)

-c::
--coalesce::
	Specify sorting fields for single cacheline display.
	The following fields are available: tid,pid,iaddr,dso
	(see COALESCE)

-g::
--call-graph::
	Set up callchain parameters.
	Please refer to the perf-report man page for details.

--stdio::
	Force the stdio output (see STDIO OUTPUT)

--stats::
	Display only statistic tables and force stdio mode.

--full-symbols::
	Display full length of symbols.

--no-source::
	Do not display the Source:Line column.

--show-all::
	Show all captured HITM lines, regardless of the HITM % 0.0005 limit.

-f::
--force::
	Don't do ownership validation.

-d::
--display::
	Select the HITM type (rmt, lcl) to display and sort on. Total HITMs
	are displayed by default.

--stitch-lbr::
	Show callgraph with stitched LBRs, which may produce a more complete
	callgraph. The perf.data file must have been obtained using
	perf c2c record --call-graph lbr.
	Disabled by default. In common cases with call stack overflows,
	it can recreate better call stacks than the default lbr call stack
	output. But this approach is not foolproof. There can be cases
	where it creates incorrect call stacks from incorrect matches.
	The known limitations include exception handling; for example,
	setjmp/longjmp will produce calls/returns that do not match.

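For example, to use stitched LBRs end to end (LBR call stacks require hardware
support):

  $ perf c2c record --call-graph lbr ./my_workload
  $ perf c2c report --stitch-lbr
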
C2C RECORD
----------
The perf c2c record command sets up options related to HITM cacheline analysis
and calls the standard perf record command.

The following perf record options are configured by default:
(check the perf record man page for details)

  -W,-d,--phys-data,--sample-cpu

Unless specified otherwise with the '-e' option, the following events are
monitored by default on x86:

  cpu/mem-loads,ldlat=30/P
  cpu/mem-stores/P

and the following on PowerPC:

  cpu/mem-loads/
  cpu/mem-stores/

Users can pass any 'perf record' option after the '--' mark, for example to
enable callchains and system wide monitoring:

  $ perf c2c record -- -g -a

Please check the RECORD OPTIONS section for c2c record specific options.

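For example, to override the default load event with a higher latency
threshold (the event string below is illustrative; use 'perf c2c record
-e list' to see what is available on your system):

  $ perf c2c record -e cpu/mem-loads,ldlat=50/P ./my_workload
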
C2C REPORT
----------
The perf c2c report command displays the shared data analysis. It comes in two
display modes: stdio and tui (default).

The report command workflow is as follows:
  - sort all the data based on the cacheline address
  - store access details for each cacheline
  - sort all cachelines based on user settings
  - display data

In general the perf c2c report output consists of 2 basic views:
  1) most expensive cachelines list
  2) offsets details for each cacheline

For each cacheline in the 1) list we display the following data:
(Both stdio and TUI modes follow the same fields output)

   Index
   - zero based index to identify the cacheline

   Cacheline
   - cacheline address (hex number)

   Rmt/Lcl Hitm
   - cacheline percentage of all Remote/Local HITM accesses

   LLC Load Hitm - Total, LclHitm, RmtHitm
   - count of Total/Local/Remote load HITMs

   Total records
   - sum of all cacheline accesses

   Total loads
   - sum of all load accesses

   Total stores
   - sum of all store accesses

   Store Reference - L1Hit, L1Miss
     L1Hit - store accesses that hit L1
     L1Miss - store accesses that missed L1

   Core Load Hit - FB, L1, L2
   - count of load hits in FB (Fill Buffer), L1 and L2 cache

   LLC Load Hit - LlcHit, LclHitm
   - count of LLC load accesses, includes LLC hits and LLC HITMs

   RMT Load Hit - RmtHit, RmtHitm
   - count of remote load accesses, includes remote hits and remote HITMs

   Load Dram - Lcl, Rmt
   - count of local and remote DRAM accesses

For each offset in the 2) list we display the following data:

   HITM - Rmt, Lcl
   - % of Remote/Local HITM accesses for given offset within cacheline

   Store Refs - L1 Hit, L1 Miss
   - % of store accesses that hit/missed L1 for given offset within cacheline

   Data address - Offset
   - offset address

   Pid
   - pid of the process responsible for the accesses

   Tid
   - tid of the process responsible for the accesses

   Code address
   - code address responsible for the accesses

   cycles - rmt hitm, lcl hitm, load
   - sum of cycles for given accesses - Remote/Local HITM and generic load

   cpu cnt
   - number of cpus that participated in the access

   Symbol
   - code symbol related to the 'Code address' value

   Shared Object
   - shared object name related to the 'Code address' value

   Source:Line
   - source information related to the 'Code address' value

   Node
   - nodes participating in the access (see NODE INFO section)

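For example, to sort and display on local HITMs only, in stdio mode:

  $ perf c2c report -d lcl --stdio
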
NODE INFO
---------
The 'Node' field displays the nodes that access the given cacheline
offset. Its output comes in 3 flavors:
  - node IDs separated by ','
  - node IDs with stats for each ID, in the following format:
      Node{cpus %hitms %stores}
  - node IDs with the list of affected CPUs in the following format:
      Node{cpu list}

Users can switch between the above flavors with the -N option, or
use the 'n' key to switch interactively in TUI mode.

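For example, to include the extra node info in a stdio report:

  $ perf c2c report -N --stdio
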
COALESCE
--------
Users can specify how to sort offsets for a cacheline.

The following fields are available and govern the final
set of output fields for the cacheline offsets output:

  tid   - coalesced by process TIDs
  pid   - coalesced by process PIDs
  iaddr - coalesced by code address; the following fields are displayed:
             Code address, Code symbol, Shared Object, Source line
  dso   - coalesced by shared object

By default the coalescing is set up with 'pid,iaddr'.

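For example, to coalesce the offsets by thread and code address:

  $ perf c2c report -c tid,iaddr
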
STDIO OUTPUT
------------
The stdio output displays data on standard output.

The following tables are displayed:

  Trace Event Information
  - overall statistics of memory accesses

  Global Shared Cache Line Event Information
  - overall statistics on shared cachelines

  Shared Data Cache Line Table
  - list of most expensive cachelines

  Shared Cache Line Distribution Pareto
  - list of all accessed offsets for each cacheline

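For example, to print only the statistic tables and skip the per-cacheline
details:

  $ perf c2c report --stats
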
TUI OUTPUT
----------
The TUI output provides an interactive interface to navigate
through the cacheline list and to display offset details.

For details please refer to the help window by pressing the '?' key.

CREDITS
-------
Although Don Zickus, Dick Fowles and Joe Mario worked together
to get this implemented, we got lots of early help from Arnaldo
Carvalho de Melo, Stephane Eranian, Jiri Olsa and Andi Kleen.

C2C BLOG
--------
Check Joe's blog on the c2c tool for a detailed use case explanation:
  https://joemario.github.io/blog/2016/09/01/c2c-blog/

SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-mem[1]