Using TopDown metrics in user space
-----------------------------------

Intel CPUs (since Sandy Bridge and Silvermont) support a TopDown
methodology to break down CPU pipeline execution into 4 bottlenecks:
frontend bound, backend bound, bad speculation, retiring.

For more details on TopDown see [1][5].

Traditionally this was implemented with events in generic counters
and specific formulas to compute the bottlenecks.
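
For example, the classic level 1 formulas (see [1][5]) look roughly like
this; the exact event names vary by CPU generation:

        Slots           = 4 * CPU_CLK_UNHALTED.THREAD
        Frontend_Bound  = IDQ_UOPS_NOT_DELIVERED.CORE / Slots
        Bad_Speculation = (UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS +
                           4 * INT_MISC.RECOVERY_CYCLES) / Slots
        Retiring        = UOPS_RETIRED.RETIRE_SLOTS / Slots
        Backend_Bound   = 1 - (Frontend_Bound + Bad_Speculation + Retiring)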

perf stat --topdown implements this.

Full TopDown includes more levels that can break down the
bottlenecks further. This is not directly implemented in perf,
but is available in other tools that can run on top of perf,
such as toplev [2] or vtune [3].

New Topdown features in Ice Lake
================================

With Ice Lake CPUs the TopDown metrics are directly available as
fixed counters and do not require generic counters. This allows
TopDown to be collected at all times, in addition to other events.

% perf stat -a --topdown -I1000
#           time  retiring  bad speculation  frontend bound  backend bound
     1.001281330     23.0%            15.3%           29.6%          32.1%
     2.003009005      5.0%             6.8%           46.6%          41.6%
     3.004646182      6.7%             6.7%           46.0%          40.6%
     4.006326375      5.0%             6.4%           47.6%          41.0%
     5.007991804      5.1%             6.3%           46.3%          42.3%
     6.009626773      6.2%             7.1%           47.3%          39.3%
     7.011296356      4.7%             6.7%           46.2%          42.4%
     8.012951831      4.7%             6.7%           47.5%          41.1%
...

This also enables measuring TopDown per thread/process instead
of only per core.
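
For example, on Ice Lake or newer a single workload can be measured
directly (./my_workload stands in for your own program):

% perf stat --topdown -- ./my_workload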

Using TopDown through RDPMC in applications on Ice Lake
=======================================================

For more fine grained measurements it can be useful to
access the new counters directly from user space. This is more
complicated, but drastically lowers overhead.

On Ice Lake, there is a new fixed counter 3: SLOTS, which reports
"pipeline SLOTS" (cycles multiplied by core issue width) and a
metric register that reports slots ratios for the different bottleneck
categories.

The metrics counter is CPU model specific and is not available on older
CPUs.

Example code
============

Library functions implementing the functionality described below
are also available in libjevents [4].

The application opens a group with fixed counter 3 (SLOTS) and any
metric event, which allows user programs to read the performance counters.

Fixed counter 3 is mapped to a pseudo event event=0x00, umask=0x04,
so the perf_event_attr structure should be initialized with
{ .config = 0x0400, .type = PERF_TYPE_RAW }.
The metric events are mapped to the pseudo event event=0x00, umask=0x8X.
For example, the perf_event_attr structure can be initialized with
{ .config = 0x8000, .type = PERF_TYPE_RAW } for the Retiring metric event.
Fixed counter 3 must be the leader of the group.

#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Provide own perf_event_open stub because glibc doesn't */
__attribute__((weak))
int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                    int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Open slots counter file descriptor for current task. */
struct perf_event_attr slots = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x400,
        .exclude_kernel = 1,
};

int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
if (slots_fd < 0)
        ... error ...

/* Memory mapping the fd permits _rdpmc calls from userspace */
void *slots_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, slots_fd, 0);
if (slots_p == MAP_FAILED)
        ... error ...

/*
 * Open metrics event file descriptor for current task.
 * Set slots event as the leader of the group.
 */
struct perf_event_attr metrics = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x8000,
        .exclude_kernel = 1,
};

int metrics_fd = perf_event_open(&metrics, 0, -1, slots_fd, 0);
if (metrics_fd < 0)
        ... error ...

/* Memory mapping the fd permits _rdpmc calls from userspace */
void *metrics_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, metrics_fd, 0);
if (metrics_p == MAP_FAILED)
        ... error ...

Note: the file descriptors returned by the perf_event_open calls must be
memory mapped to permit calls to the RDPMC instruction. Permission may also
be granted by writing to the /sys/devices/cpu/rdpmc sysfs node.
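
For example, writing 2 to that node enables RDPMC for all processes,
while the default value of 1 only allows it for processes that have an
active mmap'ed perf event:

        echo 2 > /sys/devices/cpu/rdpmc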

The RDPMC instruction (or _rdpmc compiler intrinsic) can now be used
to read slots and the topdown metrics at different points of the program:

#include <stdint.h>
#include <x86intrin.h>

#define RDPMC_FIXED     (1 << 30)       /* return fixed counters */
#define RDPMC_METRIC    (1 << 29)       /* return metric counters */

#define FIXED_COUNTER_SLOTS             3
#define METRIC_COUNTER_TOPDOWN_L1_L2    0

static inline uint64_t read_slots(void)
{
        return _rdpmc(RDPMC_FIXED | FIXED_COUNTER_SLOTS);
}

static inline uint64_t read_metrics(void)
{
        return _rdpmc(RDPMC_METRIC | METRIC_COUNTER_TOPDOWN_L1_L2);
}

Then the program can be instrumented to read these metrics at different
points.

It's not a good idea to do this with too short code regions,
as the parallelism and overlap in the CPU's execution of the program
will cause too much measurement inaccuracy. For example, instrumenting
individual basic blocks is definitely too fine grained.

_rdpmc calls should not be mixed with reading the metrics and slots counters
through system calls, as the kernel will reset these counters after each system
call.

Decoding metrics values
=======================

The value reported by read_metrics() contains four 8-bit fields
that represent scaled ratios for the Level 1 bottlenecks.
All four fields add up to 0xff (= 100%).

The binary ratios in the metric value can be converted to float ratios:

#define GET_METRIC(m, i) (((m) >> (i*8)) & 0xff)

/* L1 Topdown metric events */
#define TOPDOWN_RETIRING(val)   ((float)GET_METRIC(val, 0) / 0xff)
#define TOPDOWN_BAD_SPEC(val)   ((float)GET_METRIC(val, 1) / 0xff)
#define TOPDOWN_FE_BOUND(val)   ((float)GET_METRIC(val, 2) / 0xff)
#define TOPDOWN_BE_BOUND(val)   ((float)GET_METRIC(val, 3) / 0xff)

/*
 * L2 Topdown metric events.
 * Available on Sapphire Rapids and later platforms.
 */
#define TOPDOWN_HEAVY_OPS(val)          ((float)GET_METRIC(val, 4) / 0xff)
#define TOPDOWN_BR_MISPREDICT(val)      ((float)GET_METRIC(val, 5) / 0xff)
#define TOPDOWN_FETCH_LAT(val)          ((float)GET_METRIC(val, 6) / 0xff)
#define TOPDOWN_MEM_BOUND(val)          ((float)GET_METRIC(val, 7) / 0xff)

and then converted to percent for printing.
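
For example, a point-in-time reading could be printed like this (a minimal
sketch; the values are cumulative since the counters were last reset):

uint64_t val = read_metrics();

printf("Retiring %.2f%% Bad Speculation %.2f%% FE Bound %.2f%% BE Bound %.2f%%\n",
       TOPDOWN_RETIRING(val) * 100.,
       TOPDOWN_BAD_SPEC(val) * 100.,
       TOPDOWN_FE_BOUND(val) * 100.,
       TOPDOWN_BE_BOUND(val) * 100.);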

The ratios in the metric accumulate for the time when the counter
is enabled. For measuring programs it is often useful to measure
specific sections. For this it is necessary to take deltas of the metrics.

This can be done by scaling the metrics with the slots counter
read at the same time.

Then it's possible to take deltas of these slots counts
measured at different points, and determine the metrics
for that time period.

        slots_a = read_slots();
        metric_a = read_metrics();

        ... larger code region ...

        slots_b = read_slots();
        metric_b = read_metrics();

        # compute scaled metrics for measurement a
        retiring_slots_a = GET_METRIC(metric_a, 0) * slots_a
        bad_spec_slots_a = GET_METRIC(metric_a, 1) * slots_a
        fe_bound_slots_a = GET_METRIC(metric_a, 2) * slots_a
        be_bound_slots_a = GET_METRIC(metric_a, 3) * slots_a

        # compute delta scaled metrics between b and a
        retiring_slots = GET_METRIC(metric_b, 0) * slots_b - retiring_slots_a
        bad_spec_slots = GET_METRIC(metric_b, 1) * slots_b - bad_spec_slots_a
        fe_bound_slots = GET_METRIC(metric_b, 2) * slots_b - fe_bound_slots_a
        be_bound_slots = GET_METRIC(metric_b, 3) * slots_b - be_bound_slots_a

Later the individual ratios of L1 metric events for the measurement period can
be recreated from these counts.

        slots_delta = slots_b - slots_a
        retiring_ratio = (float)retiring_slots / slots_delta
        bad_spec_ratio = (float)bad_spec_slots / slots_delta
        fe_bound_ratio = (float)fe_bound_slots / slots_delta
        be_bound_ratio = (float)be_bound_slots / slots_delta

        printf("Retiring %.2f%% Bad Speculation %.2f%% FE Bound %.2f%% BE Bound %.2f%%\n",
               retiring_ratio * 100.,
               bad_spec_ratio * 100.,
               fe_bound_ratio * 100.,
               be_bound_ratio * 100.);

The individual ratios of L2 metric events for the measurement period can be
recreated from the L1 and L2 metric counters. (Available on Sapphire Rapids
and later platforms.)

        # compute scaled metrics for measurement a
        heavy_ops_slots_a = GET_METRIC(metric_a, 4) * slots_a
        br_mispredict_slots_a = GET_METRIC(metric_a, 5) * slots_a
        fetch_lat_slots_a = GET_METRIC(metric_a, 6) * slots_a
        mem_bound_slots_a = GET_METRIC(metric_a, 7) * slots_a

        # compute delta scaled metrics between b and a
        heavy_ops_slots = GET_METRIC(metric_b, 4) * slots_b - heavy_ops_slots_a
        br_mispredict_slots = GET_METRIC(metric_b, 5) * slots_b - br_mispredict_slots_a
        fetch_lat_slots = GET_METRIC(metric_b, 6) * slots_b - fetch_lat_slots_a
        mem_bound_slots = GET_METRIC(metric_b, 7) * slots_b - mem_bound_slots_a

        slots_delta = slots_b - slots_a
        heavy_ops_ratio = (float)heavy_ops_slots / slots_delta
        light_ops_ratio = retiring_ratio - heavy_ops_ratio

        br_mispredict_ratio = (float)br_mispredict_slots / slots_delta
        machine_clears_ratio = bad_spec_ratio - br_mispredict_ratio

        fetch_lat_ratio = (float)fetch_lat_slots / slots_delta
        fetch_bw_ratio = fe_bound_ratio - fetch_lat_ratio

        mem_bound_ratio = (float)mem_bound_slots / slots_delta
        core_bound_ratio = be_bound_ratio - mem_bound_ratio

        printf("Heavy Operations %.2f%% Light Operations %.2f%% "
               "Branch Mispredict %.2f%% Machine Clears %.2f%% "
               "Fetch Latency %.2f%% Fetch Bandwidth %.2f%% "
               "Mem Bound %.2f%% Core Bound %.2f%%\n",
               heavy_ops_ratio * 100.,
               light_ops_ratio * 100.,
               br_mispredict_ratio * 100.,
               machine_clears_ratio * 100.,
               fetch_lat_ratio * 100.,
               fetch_bw_ratio * 100.,
               mem_bound_ratio * 100.,
               core_bound_ratio * 100.);

Resetting metrics counters
==========================

Since the individual metrics are only 8 bits they lose precision for
short regions over time because the number of cycles covered by each
fraction bit shrinks. So the counters need to be reset regularly.

When using the kernel perf API the kernel resets on every read.
So as long as the reading is at reasonable intervals (every few
seconds) the precision is good.

When using perf stat it is recommended to always use the -I option,
with an interval no longer than a few seconds:

        perf stat -I 1000 --topdown ...

For user programs using RDPMC directly the counter can
be reset explicitly using ioctl:

        ioctl(perf_fd, PERF_EVENT_IOC_RESET, 0);

This "opens" a new measurement period.

A program using RDPMC for TopDown should schedule such a reset
regularly, for example every few seconds.
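
One way to do this, reusing the file descriptors opened above (the helper
name is just for illustration), is to reset the whole group through its
leader:

#include <sys/ioctl.h>

/* Start a new measurement period for the slots/metrics group. */
static void topdown_reset(int slots_fd)
{
        ioctl(slots_fd, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
}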

Limits on Ice Lake
==================

Four pseudo TopDown metric events are exposed to end users:
topdown-retiring, topdown-bad-spec, topdown-fe-bound and topdown-be-bound.
They can be used to collect the TopDown value under the following
rules:
- All the TopDown metric events must be in a group with the SLOTS event.
- The SLOTS event must be the leader of the group.
- The PERF_FORMAT_GROUP flag must be applied to each TopDown metric
  event (see the sketch below).
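
A minimal counting sketch that follows these rules, using the same
pseudo-event encodings and perf_event_open() wrapper as above (names and
error handling are illustrative only):

struct perf_event_attr slots = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x400,                /* SLOTS */
        .read_format = PERF_FORMAT_GROUP,
};
struct perf_event_attr retiring = {
        .type = PERF_TYPE_RAW,
        .size = sizeof(struct perf_event_attr),
        .config = 0x8000,               /* topdown-retiring */
        .read_format = PERF_FORMAT_GROUP,
};

int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
int retiring_fd = perf_event_open(&retiring, 0, -1, slots_fd, 0);

/* PERF_FORMAT_GROUP read layout: number of events, then one value each */
struct {
        uint64_t nr;
        uint64_t values[2];
} group;

if (read(slots_fd, &group, sizeof(group)) > 0)
        /* group.values[0] is SLOTS, group.values[1] is topdown-retiring */
        ... use the values ...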

The SLOTS event and the TopDown metric events can be counting members of
a sampling read group. Since the SLOTS event must be the leader of a TopDown
group, the second event of the group is the sampling event.
For example:

        perf record -e '{slots, $sampling_event, topdown-retiring}:S'

Extension on Sapphire Rapids Server
===================================

The metrics counter is extended to support TMA method level 2 metrics.
The lower half of the register holds the TMA level 1 metrics (legacy).
The upper half is also divided into four 8-bit fields for the new level 2
metrics. Four more TopDown metric events are exposed to end users:
topdown-heavy-ops, topdown-br-mispredict, topdown-fetch-lat and
topdown-mem-bound.

Each of the new level 2 metrics in the upper half is a subset of the
corresponding level 1 metric in the lower half. Software can deduce the
other four level 2 metrics by subtracting the corresponding metrics as below.

        Light_Operations = Retiring - Heavy_Operations
        Machine_Clears   = Bad_Speculation - Branch_Mispredicts
        Fetch_Bandwidth  = Frontend_Bound - Fetch_Latency
        Core_Bound       = Backend_Bound - Memory_Bound
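
Expressed with the macros from the decoding section above, these derived
ratios could be computed as (illustrative only):

#define TOPDOWN_LIGHT_OPS(val)          (TOPDOWN_RETIRING(val) - TOPDOWN_HEAVY_OPS(val))
#define TOPDOWN_MACHINE_CLEARS(val)     (TOPDOWN_BAD_SPEC(val) - TOPDOWN_BR_MISPREDICT(val))
#define TOPDOWN_FETCH_BW(val)           (TOPDOWN_FE_BOUND(val) - TOPDOWN_FETCH_LAT(val))
#define TOPDOWN_CORE_BOUND(val)         (TOPDOWN_BE_BOUND(val) - TOPDOWN_MEM_BOUND(val))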

[1] https://software.intel.com/en-us/top-down-microarchitecture-analysis-method-win
[2] https://github.com/andikleen/pmu-tools/wiki/toplev-manual
[3] https://software.intel.com/en-us/intel-vtune-amplifier-xe
[4] https://github.com/andikleen/pmu-tools/tree/master/jevents
[5] https://sites.google.com/site/analysismethods/yasin-pubs