.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

===========================================
User Interface for Resource Control feature
===========================================

:Copyright: |copy| 2016 Intel Corporation
:Authors: - Fenghua Yu <fenghua.yu@intel.com>
          - Tony Luck <tony.luck@intel.com>
          - Vikas Shivappa <vikas.shivappa@intel.com>

Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).
  16
This feature is enabled by the CONFIG_X86_CPU_RESCTRL kernel configuration
option and is indicated by the following x86 /proc/cpuinfo flag bits:
  19
  20=============================================	================================
  21RDT (Resource Director Technology) Allocation	"rdt_a"
  22CAT (Cache Allocation Technology)		"cat_l3", "cat_l2"
  23CDP (Code and Data Prioritization)		"cdp_l3", "cdp_l2"
  24CQM (Cache QoS Monitoring)			"cqm_llc", "cqm_occup_llc"
  25MBM (Memory Bandwidth Monitoring)		"cqm_mbm_total", "cqm_mbm_local"
  26MBA (Memory Bandwidth Allocation)		"mba"
  27=============================================	================================
  28
  29To use the feature mount the file system::
  30
  31 # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl
  32
  33mount options are:
  34
  35"cdp":
  36	Enable code/data prioritization in L3 cache allocations.
  37"cdpl2":
  38	Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
	Enable the MBA Software Controller (mba_sc) to specify MBA
	bandwidth in MBps.
  42
  43L2 and L3 CDP are controlled separately.
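
For example, mounting with L3 code/data prioritization enabled and listing
the "info" directory might look like this (a sketch; the resources shown
assume a system with L3 allocation, L3 monitoring and MBA support)::

  # mount -t resctrl -o cdp resctrl /sys/fs/resctrl
  # ls /sys/fs/resctrl/info
  L3CODE  L3DATA  L3_MON  MB  last_cmd_status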
  44
  45RDT features are orthogonal. A particular system may support only
  46monitoring, only control, or both monitoring and control.  Cache
  47pseudo-locking is a unique way of using cache control to "pin" or
  48"lock" data in the cache. Details can be found in
  49"Cache Pseudo-Locking".
  50
  51
The mount succeeds if either allocation or monitoring is present, but
  53only those files and directories supported by the system will be created.
  54For more details on the behavior of the interface during monitoring
  55and allocation, see the "Resource alloc and monitor groups" section.
  56
  57Info directory
  58==============
  59
  60The 'info' directory contains information about the enabled
  61resources. Each resource has its own subdirectory. The subdirectory
  62names reflect the resource names.
  63
  64Each subdirectory contains the following files with respect to
  65allocation:
  66
Cache resource (L3/L2) subdirectory contains the following files
related to allocation:
  69
  70"num_closids":
  71		The number of CLOSIDs which are valid for this
  72		resource. The kernel uses the smallest number of
  73		CLOSIDs of all enabled resources as limit.
  74"cbm_mask":
  75		The bitmask which is valid for this resource.
  76		This mask is equivalent to 100%.
  77"min_cbm_bits":
  78		The minimum number of consecutive bits which
  79		must be set when writing a mask.
  80
  81"shareable_bits":
  82		Bitmask of shareable resource with other executing
  83		entities (e.g. I/O). User can use this when
  84		setting up exclusive cache partitions. Note that
  85		some platforms support devices that have their
  86		own settings for cache use which can over-ride
  87		these bits.
  88"bit_usage":
  89		Annotated capacity bitmasks showing how all
  90		instances of the resource are used. The legend is:
  91
  92			"0":
  93			      Corresponding region is unused. When the system's
  94			      resources have been allocated and a "0" is found
  95			      in "bit_usage" it is a sign that resources are
  96			      wasted.
  97
			"H":
			      Corresponding region is used by hardware only
			      but available for software use. If a resource
			      has bits set in "shareable_bits" but not all
			      of these bits appear in the resource groups'
			      schemata then the bits appearing in
			      "shareable_bits" but in no resource group will
			      be marked as "H".
 106			"X":
 107			      Corresponding region is available for sharing and
 108			      used by hardware and software. These are the
 109			      bits that appear in "shareable_bits" as
 110			      well as a resource group's allocation.
 111			"S":
 112			      Corresponding region is used by software
 113			      and available for sharing.
 114			"E":
 115			      Corresponding region is used exclusively by
 116			      one resource group. No sharing allowed.
 117			"P":
 118			      Corresponding region is pseudo-locked. No
 119			      sharing allowed.
 120
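As an illustration, these files can simply be read from the "info"
directory (the values below are invented for a hypothetical system with a
20-bit L3 capacity bitmask)::

  # cat /sys/fs/resctrl/info/L3/num_closids
  16
  # cat /sys/fs/resctrl/info/L3/cbm_mask
  fffff
  # cat /sys/fs/resctrl/info/L3/min_cbm_bits
  1
  # cat /sys/fs/resctrl/info/L3/shareable_bits
  c0000
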
Memory bandwidth (MB) subdirectory contains the following files
 122with respect to allocation:
 123
 124"min_bandwidth":
 125		The minimum memory bandwidth percentage which
 126		user can request.
 127
 128"bandwidth_gran":
 129		The granularity in which the memory bandwidth
 130		percentage is allocated. The allocated
 131		b/w percentage is rounded off to the next
 132		control step available on the hardware. The
 133		available bandwidth control steps are:
 134		min_bandwidth + N * bandwidth_gran.
 135
 136"delay_linear":
 137		Indicates if the delay scale is linear or
		non-linear. This field is purely informational.
 140
 141If RDT monitoring is available there will be an "L3_MON" directory
 142with the following files:
 143
 144"num_rmids":
 145		The number of RMIDs available. This is the
 146		upper bound for how many "CTRL_MON" + "MON"
 147		groups can be created.
 148
 149"mon_features":
 150		Lists the monitoring events if
 151		monitoring is enabled for the resource.
 152
 153"max_threshold_occupancy":
 154		Read/write file provides the largest value (in
 155		bytes) at which a previously used LLC_occupancy
 156		counter can be considered for re-use.
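
A quick way to see what monitoring the system offers is to read these
files (the output is illustrative; the events listed depend on the CPU)::

  # cat /sys/fs/resctrl/info/L3_MON/num_rmids
  176
  # cat /sys/fs/resctrl/info/L3_MON/mon_features
  llc_occupancy
  mbm_total_bytes
  mbm_local_bytes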
 157
 158Finally, in the top level of the "info" directory there is a file
 159named "last_cmd_status". This is reset with every "command" issued
 160via the file system (making new directories or writing to any of the
 161control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.
 164::
 165
 166	# echo L3:0=f7 > schemata
 167	bash: echo: write error: Invalid argument
 168	# cat info/last_cmd_status
 169	mask f7 has non-consecutive 1-bits
 170
 171Resource alloc and monitor groups
 172=================================
 173
 174Resource groups are represented as directories in the resctrl file
 175system.  The default group is the root directory which, immediately
 176after mounting, owns all the tasks and cpus in the system and can make
 177full use of all resources.
 178
 179On a system with RDT control features additional directories can be
 180created in the root directory that specify different amounts of each
 181resource (see "schemata" below). The root and these additional top level
 182directories are referred to as "CTRL_MON" groups below.
 183
 184On a system with RDT monitoring the root directory and other top level
 185directories contain a directory named "mon_groups" in which additional
 186directories can be created to monitor subsets of tasks in the CTRL_MON
 187group that is their ancestor. These are called "MON" groups in the rest
 188of this document.
 189
 190Removing a directory will move all tasks and cpus owned by the group it
 191represents to the parent. Removing one of the created CTRL_MON groups
 192will automatically remove all MON groups below it.
 193
 194All groups contain the following files:
 195
 196"tasks":
 197	Reading this file shows the list of all tasks that belong to
 198	this group. Writing a task id to the file will add a task to the
 199	group. If the group is a CTRL_MON group the task is removed from
 200	whichever previous CTRL_MON group owned the task and also from
 201	any MON group that owned the task. If the group is a MON group,
 202	then the task must already belong to the CTRL_MON parent of this
 203	group. The task is removed from any previous MON group.
 204
 205
 206"cpus":
 207	Reading this file shows a bitmask of the logical CPUs owned by
 208	this group. Writing a mask to this file will add and remove
 209	CPUs to/from this group. As with the tasks file a hierarchy is
 210	maintained where MON groups may only include CPUs owned by the
 211	parent CTRL_MON group.
 212	When the resource group is in pseudo-locked mode this file will
 213	only be readable, reflecting the CPUs associated with the
 214	pseudo-locked region.
 215
 216
 217"cpus_list":
 218	Just like "cpus", only using ranges of CPUs instead of bitmasks.
 219
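For example, CPUs can be assigned to a group using the list format (the
group name "p0" and the CPU numbers below are illustrative)::

  # echo 5-7 > p0/cpus_list
  # cat p0/cpus_list
  5-7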
 220
 221When control is enabled all CTRL_MON groups will also contain:
 222
 223"schemata":
 224	A list of all the resources available to this group.
 225	Each resource has its own line and format - see below for details.
 226
 227"size":
 228	Mirrors the display of the "schemata" file to display the size in
 229	bytes of each allocation instead of the bits representing the
 230	allocation.
 231
 232"mode":
 233	The "mode" of the resource group dictates the sharing of its
 234	allocations. A "shareable" resource group allows sharing of its
 235	allocations while an "exclusive" resource group does not. A
 236	cache pseudo-locked region is created by first writing
 237	"pseudo-locksetup" to the "mode" file before writing the cache
 238	pseudo-locked region's schemata to the resource group's "schemata"
 239	file. On successful pseudo-locked region creation the mode will
 240	automatically change to "pseudo-locked".
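
For example, the control files of a group might read as follows (a sketch
with invented values, assuming a 20-bit L3 capacity bitmask and a 20MB L3
cache)::

  # cat p0/mode
  shareable
  # cat p0/schemata
  L3:0=3f;1=3f
  # cat p0/size
  L3:0=6291456;1=6291456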
 241
When monitoring is enabled all CTRL_MON and MON groups will also contain:
 243
 244"mon_data":
	This contains a set of files organized by L3 domain and by
	RDT event. E.g. on a system with two L3 domains there will
	be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
	directories has one file per event (e.g. "llc_occupancy",
	"mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
	files provide a read out of the current value of the event for
	all tasks in the group. In CTRL_MON groups these files provide
	the sum for all tasks in the CTRL_MON group and all tasks in
	MON groups. Please see the example section for more details on usage.
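
For example, on a two socket system the layout of "mon_data" in a group
might look like this (which events appear depends on the hardware)::

  # ls mon_data/
  mon_L3_00  mon_L3_01
  # ls mon_data/mon_L3_00/
  llc_occupancy  mbm_local_bytes  mbm_total_bytes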
 254
 255Resource allocation rules
 256-------------------------
 257
 258When a task is running the following rules define which resources are
 259available to it:
 260
 2611) If the task is a member of a non-default group, then the schemata
 262   for that group is used.
 263
 2642) Else if the task belongs to the default group, but is running on a
 265   CPU that is assigned to some specific group, then the schemata for the
 266   CPU's group is used.
 267
 2683) Otherwise the schemata for the default group is used.
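
As a concrete illustration of these rules (the group names, task id and
CPU numbers are hypothetical)::

  # mkdir p0 p1
  # echo 1234 > p0/tasks
  # echo 8-11 > p1/cpus_list

Task 1234 is governed by "p0"'s schemata wherever it runs (rule 1). A task
that remains in the default group is governed by "p1"'s schemata while it
runs on CPUs 8-11 (rule 2) and by the default group's schemata on any other
CPU (rule 3).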
 269
 270Resource monitoring rules
 271-------------------------
1) If a task is a member of a MON group, or of a non-default CTRL_MON group,
   then RDT events for the task will be reported in that group.
 274
 2752) If a task is a member of the default CTRL_MON group, but is running
 276   on a CPU that is assigned to some specific group, then the RDT events
 277   for the task will be reported in that group.
 278
 2793) Otherwise RDT events for the task will be reported in the root level
 280   "mon_data" group.
 281
 282
 283Notes on cache occupancy monitoring and control
 284===============================================
 285When moving a task from one group to another you should remember that
 286this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move it
 288to a new group and immediately check the occupancy of the old and new
 289groups you will likely see that the old group is still showing 3 MB and
 290the new group zero. When the task accesses locations still in cache from
 291before the move, the h/w does not update any counters. On a busy system
 292you will likely see the occupancy in the old group go down as cache lines
 293are evicted and re-used while the occupancy in the new group rises as
 294the task accesses memory and loads into the cache are counted based on
 295membership in the new group.
 296
 297The same applies to cache allocation control. Moving a task to a group
 298with a smaller cache partition will not evict any cache lines. The
 299process may continue to use them from the old partition.
 300
Hardware uses a CLOSID (Class Of Service ID) and an RMID (Resource Monitoring ID)
to identify a control group and a monitoring group respectively. Each of
the resource groups is mapped to these IDs based on the kind of group. The
number of CLOSIDs and RMIDs is limited by the hardware and hence the creation
of a "CTRL_MON" directory may fail if we run out of either CLOSIDs or RMIDs,
and creation of a "MON" group may fail if we run out of RMIDs.
 307
 308max_threshold_occupancy - generic concepts
 309------------------------------------------
 310
Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of its previous user.
Hence such RMIDs are placed on a limbo list and checked periodically to see
if their cache occupancy has gone down. If the system has a lot of limbo
RMIDs that are not yet ready to be used, the user may see an -EBUSY error
during mkdir.
 317
 318max_threshold_occupancy is a user configurable value to determine the
 319occupancy at which an RMID can be freed.
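
For example (the default value shown is illustrative; it depends on the
cache size and the number of RMIDs)::

  # cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
  540672
  # echo 65536 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy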
 320
 321Schemata files - general concepts
 322---------------------------------
 323Each line in the file describes one resource. The line starts with
 324the name of the resource, followed by specific values to be applied
 325in each of the instances of that resource on the system.
 326
 327Cache IDs
 328---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. There could be multiple separate L3
caches on a socket, or multiple cores could share an L2 cache. So instead
 333of using "socket" or "core" to define the set of logical cpus sharing
 334a resource we use a "Cache ID". At a given cache level this will be a
 335unique number across the whole system (but it isn't guaranteed to be a
 336contiguous sequence, there may be gaps).  To find the ID for each logical
 337CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
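
For example, to find the L3 cache ID used by CPU 0 (index3 is typically
the L3 cache, but the "level" file should be checked to be sure)::

  # cat /sys/devices/system/cpu/cpu0/cache/index3/level
  3
  # cat /sys/devices/system/cpu/cpu0/cache/index3/id
  0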
 338
 339Cache Bit Masks (CBM)
 340---------------------
 341For cache resources we describe the portion of the cache that is available
 342for allocation using a bitmask. The maximum value of the mask is defined
 343by each cpu model (and may be different for different cache levels). It
 344is found using CPUID, but is also provided in the "info" directory of
 345the resctrl file system in "info/{resource}/cbm_mask". Intel hardware
 346requires that these masks have all the '1' bits in a contiguous block. So
 3470x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
 348and 0xA are not.  On a system with a 20-bit mask each bit represents 5%
 349of the capacity of the cache. You could partition the cache into four
 350equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
 351
 352Memory bandwidth Allocation and monitoring
 353==========================================
 354
 355For Memory bandwidth resource, by default the user controls the resource
 356by indicating the percentage of total memory bandwidth.
 357
 358The minimum bandwidth percentage value for each cpu model is predefined
 359and can be looked up through "info/MB/min_bandwidth". The bandwidth
 360granularity that is allocated is also dependent on the cpu model and can
 361be looked up at "info/MB/bandwidth_gran". The available bandwidth
 362control steps are: min_bw + N * bw_gran. Intermediate values are rounded
 363to the next control step available on the hardware.
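
For example, on a hypothetical system where min_bandwidth is 10 and
bandwidth_gran is 10, a value that is not a multiple of the granularity is
rounded to the next control step (the commands are run from the root of the
mounted file system, the group name "p0" is illustrative and only the MB
line of the schemata is shown)::

  # cat info/MB/min_bandwidth
  10
  # cat info/MB/bandwidth_gran
  10
  # mkdir p0
  # echo "MB:0=35" > p0/schemata
  # cat p0/schemata
  MB:0=40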
 364
Memory bandwidth throttling is a core-specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
sharing a core will result in both threads being throttled to use the
low bandwidth. The fact that Memory Bandwidth Allocation (MBA) is a
core-specific mechanism whereas Memory Bandwidth Monitoring (MBM) is done
at the package level may lead to confusion when users try to apply control
via MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:
 373
 3741. User may *not* see increase in actual bandwidth when percentage
 375   values are increased:
 376
This can occur when aggregate L2 external bandwidth is more than L3
external bandwidth. Consider an SKL SKU with 24 cores on a package where
L2 external bandwidth is 10GBps (hence aggregate L2 external bandwidth is
240GBps) and L3 external bandwidth is 100GBps. Now a workload with '20
threads, having 50% bandwidth, each consuming 5GBps' consumes the max L3
bandwidth of 100GBps although the percentage value specified is only 50%
<< 100%. Hence increasing the bandwidth percentage will not yield any
more bandwidth. This is because although the L2 external bandwidth still
has capacity, the L3 external bandwidth is fully used. Also note that
this would be dependent on the number of cores the benchmark is run on.
 387
 3882. Same bandwidth percentage may mean different actual bandwidth
 389   depending on # of threads:
 390
For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4
threads, each with 10% bandwidth' can consume up to 10GBps and 40GBps
although they have the same percentage bandwidth of 10%. This is simply
because as threads start using more cores in an rdtgroup, the actual
bandwidth may increase or vary although the user specified bandwidth
percentage is the same.
 396
In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MBps as well. The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::

	"actual bandwidth < user specified bandwidth".
 404
By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
the mount option 'mba_MBps'. The schemata format is specified in the
sections below.
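
For example, the software controller is selected at mount time::

  # mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl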
 409
 410L3 schemata file details (code and data prioritization disabled)
 411----------------------------------------------------------------
 412With CDP disabled the L3 schemata format is::
 413
 414	L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
 415
 416L3 schemata file details (CDP enabled via mount option to resctrl)
 417------------------------------------------------------------------
 418When CDP is enabled L3 control is split into two separate resources
 419so you can specify independent masks for code and data like this::
 420
 421	L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
 422	L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
 423
 424L2 schemata file details
 425------------------------
 426CDP is supported at L2 using the 'cdpl2' mount option. The schemata
 427format is either::
 428
 429	L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
 430
 431or
 432
 433	L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
 434	L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
 435
 436
 437Memory bandwidth Allocation (default mode)
 438------------------------------------------
 439
 440Memory b/w domain is L3 cache.
 441::
 442
 443	MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
 444
 445Memory bandwidth Allocation specified in MBps
 446---------------------------------------------
 447
 448Memory bandwidth domain is L3 cache.
 449::
 450
 451	MB:<cache_id0>=bw_MBps0;<cache_id1>=bw_MBps1;...
 452
 453Reading/writing the schemata file
 454---------------------------------
 455Reading the schemata file will show the state of all resources
 456on all domains. When writing you only need to specify those values
 457which you wish to change.  E.g.
 458::
 459
 460  # cat schemata
 461  L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
 462  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
 463  # echo "L3DATA:2=3c0;" > schemata
 464  # cat schemata
 465  L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
 466  L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
 467
 468Cache Pseudo-Locking
 469====================
 470CAT enables a user to specify the amount of cache space that an
 471application can fill. Cache pseudo-locking builds on the fact that a
 472CPU can still read and write data pre-allocated outside its current
 473allocated area on a cache hit. With cache pseudo-locking, data can be
 474preloaded into a reserved portion of cache that no application can
 475fill, and from that point on will only serve cache hits. The cache
 476pseudo-locked memory is made accessible to user space where an
 477application can map it into its virtual address space and thus have
 478a region of memory with reduced average read latency.
 479
 480The creation of a cache pseudo-locked region is triggered by a request
 481from the user to do so that is accompanied by a schemata of the region
 482to be pseudo-locked. The cache pseudo-locked region is created as follows:
 483
 484- Create a CAT allocation CLOSNEW with a CBM matching the schemata
 485  from the user of the cache region that will contain the pseudo-locked
 486  memory. This region must not overlap with any current CAT allocation/CLOS
 487  on the system and no future overlap with this cache region is allowed
 488  while the pseudo-locked region exists.
 489- Create a contiguous region of memory of the same size as the cache
 490  region.
 491- Flush the cache, disable hardware prefetchers, disable preemption.
 492- Make CLOSNEW the active CLOS and touch the allocated memory to load
 493  it into the cache.
 494- Set the previous CLOS as active.
 495- At this point the closid CLOSNEW can be released - the cache
 496  pseudo-locked region is protected as long as its CBM does not appear in
 497  any CAT allocation. Even though the cache pseudo-locked region will from
 498  this point on not appear in any CBM of any CLOS an application running with
 499  any CLOS will be able to access the memory in the pseudo-locked region since
 500  the region continues to serve cache hits.
 501- The contiguous region of memory loaded into the cache is exposed to
 502  user-space as a character device.
 503
 504Cache pseudo-locking increases the probability that data will remain
 505in the cache via carefully configuring the CAT feature and controlling
 506application behavior. There is no guarantee that data is placed in
 507cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
 508“locked” data from cache. Power management C-states may shrink or
 509power off cache. Deeper C-states will automatically be restricted on
 510pseudo-locked region creation.
 511
 512It is required that an application using a pseudo-locked region runs
 513with affinity to the cores (or a subset of the cores) associated
 514with the cache on which the pseudo-locked region resides. A sanity check
 515within the code will not allow an application to map pseudo-locked memory
 516unless it runs with affinity to cores associated with the cache on which the
 517pseudo-locked region resides. The sanity check is only done during the
 518initial mmap() handling, there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.
 520
 521Pseudo-locking is accomplished in two stages:
 522
 5231) During the first stage the system administrator allocates a portion
 524   of cache that should be dedicated to pseudo-locking. At this time an
   equivalent portion of memory is allocated, loaded into the allocated
 526   cache portion, and exposed as a character device.
 5272) During the second stage a user-space application maps (mmap()) the
 528   pseudo-locked memory into its address space.
 529
 530Cache Pseudo-Locking Interface
 531------------------------------
 532A pseudo-locked region is created using the resctrl interface as follows:
 533
 5341) Create a new resource group by creating a new directory in /sys/fs/resctrl.
 5352) Change the new resource group's mode to "pseudo-locksetup" by writing
 536   "pseudo-locksetup" to the "mode" file.
 5373) Write the schemata of the pseudo-locked region to the "schemata" file. All
 538   bits within the schemata should be "unused" according to the "bit_usage"
 539   file.
 540
 541On successful pseudo-locked region creation the "mode" file will contain
 542"pseudo-locked" and a new character device with the same name as the resource
 543group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
 544by user space in order to obtain access to the pseudo-locked memory region.
 545
 546An example of cache pseudo-locked region creation and usage can be found below.
 547
 548Cache Pseudo-Locking Debugging Interface
 549----------------------------------------
 550The pseudo-locking debugging interface is enabled by default (if
 551CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.
 552
 553There is no explicit way for the kernel to test if a provided memory
 554location is present in the cache. The pseudo-locking debugging interface uses
 555the tracing infrastructure to provide two ways to measure cache residency of
 556the pseudo-locked region:
 557
 5581) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
 559   from these measurements are best visualized using a hist trigger (see
 560   example below). In this test the pseudo-locked region is traversed at
 561   a stride of 32 bytes while hardware prefetchers and preemption
 562   are disabled. This also provides a substitute visualization of cache
 563   hits and misses.
 5642) Cache hit and miss measurements using model specific precision counters if
 565   available. Depending on the levels of cache on the system the pseudo_lock_l2
 566   and pseudo_lock_l3 tracepoints are available.
 567
 568When a pseudo-locked region is created a new debugfs directory is created for
 569it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
 570write-only file, pseudo_lock_measure, is present in this directory. The
 571measurement of the pseudo-locked region depends on the number written to this
 572debugfs file:
 573
 5741:
 575     writing "1" to the pseudo_lock_measure file will trigger the latency
 576     measurement captured in the pseudo_lock_mem_latency tracepoint. See
 577     example below.
 5782:
 579     writing "2" to the pseudo_lock_measure file will trigger the L2 cache
 580     residency (cache hits and misses) measurement captured in the
 581     pseudo_lock_l2 tracepoint. See example below.
 5823:
 583     writing "3" to the pseudo_lock_measure file will trigger the L3 cache
 584     residency (cache hits and misses) measurement captured in the
 585     pseudo_lock_l3 tracepoint.
 586
 587All measurements are recorded with the tracing infrastructure. This requires
 588the relevant tracepoints to be enabled before the measurement is triggered.
 589
 590Example of latency debugging interface
 591~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 592In this example a pseudo-locked region named "newlock" was created. Here is
 593how we can measure the latency in cycles of reading from this region and
 594visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
 595is set::
 596
 597  # :> /sys/kernel/debug/tracing/trace
 598  # echo 'hist:keys=latency' > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
 599  # echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
 600  # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
 601  # echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/enable
 602  # cat /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_mem_latency/hist
 603
 604  # event histogram
 605  #
 606  # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
 607  #
 608
 609  { latency:        456 } hitcount:          1
 610  { latency:         50 } hitcount:         83
 611  { latency:         36 } hitcount:         96
 612  { latency:         44 } hitcount:        174
 613  { latency:         48 } hitcount:        195
 614  { latency:         46 } hitcount:        262
 615  { latency:         42 } hitcount:        693
 616  { latency:         40 } hitcount:       3204
 617  { latency:         38 } hitcount:       3484
 618
 619  Totals:
 620      Hits: 8192
 621      Entries: 9
 622    Dropped: 0
 623
 624Example of cache hits/misses debugging
 625~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 626In this example a pseudo-locked region named "newlock" was created on the L2
 627cache of a platform. Here is how we can obtain details of the cache hits
 628and misses using the platform's precision counters.
 629::
 630
 631  # :> /sys/kernel/debug/tracing/trace
 632  # echo 1 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
 633  # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
 634  # echo 0 > /sys/kernel/debug/tracing/events/resctrl/pseudo_lock_l2/enable
 635  # cat /sys/kernel/debug/tracing/trace
 636
 637  # tracer: nop
 638  #
 639  #                              _-----=> irqs-off
 640  #                             / _----=> need-resched
 641  #                            | / _---=> hardirq/softirq
 642  #                            || / _--=> preempt-depth
 643  #                            ||| /     delay
 644  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 645  #              | |       |   ||||       |         |
 646  pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0
 647
 648
 649Examples for RDT allocation usage
 650~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 651
 6521) Example 1
 653
On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, a minimum b/w of 10% and a memory bandwidth
granularity of 10%.
 657::
 658
 659  # mount -t resctrl resctrl /sys/fs/resctrl
 660  # cd /sys/fs/resctrl
 661  # mkdir p0 p1
 662  # echo "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
 663  # echo "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata
 664
 665The default resource group is unmodified, so we have access to all parts
 666of all caches (its schemata file reads "L3:0=f;1=f").
 667
 668Tasks that are under the control of group "p0" may only allocate from the
 669"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
 670Tasks in group "p1" use the "lower" 50% of cache on both sockets.
 671
 672Similarly, tasks that are under the control of group "p0" may use a
maximum memory b/w of 50% on socket 0 and 50% on socket 1.
 674Tasks in group "p1" may also use 50% memory b/w on both sockets.
Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.
 679
If resctrl is using the software controller (mba_sc) then the user can
enter the max b/w in MBps rather than the percentage values.
 682::
 683
 684  # echo "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
 685  # echo "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata
 686
In the above example the tasks in "p1" and "p0" on socket 0 would use a max
b/w of 1024MBps whereas on socket 1 they would use 500MBps.
 689
 6902) Example 2
 691
 692Again two sockets, but this time with a more realistic 20-bit mask.
 693
Two real time tasks, pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 of a 2-socket and dual core machine. To avoid noisy
neighbors, each of the two real-time tasks exclusively occupies one quarter
of L3 cache on socket 0.
 698::
 699
 700  # mount -t resctrl resctrl /sys/fs/resctrl
 701  # cd /sys/fs/resctrl
 702
 703First we reset the schemata for the default group so that the "upper"
 70450% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
 705ordinary tasks::
 706
 707  # echo "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata
 708
 709Next we make a resource group for our first real time task and give
 710it access to the "top" 25% of the cache on socket 0.
 711::
 712
 713  # mkdir p0
 714  # echo "L3:0=f8000;1=fffff" > p0/schemata
 715
 716Finally we move our first real time task into this resource group. We
 717also use taskset(1) to ensure the task always runs on a dedicated CPU
 718on socket 0. Most uses of resource groups will also constrain which
 719processors tasks run on.
 720::
 721
 722  # echo 1234 > p0/tasks
 723  # taskset -cp 1 1234
 724
 725Ditto for the second real time task (with the remaining 25% of cache)::
 726
 727  # mkdir p1
 728  # echo "L3:0=7c00;1=fffff" > p1/schemata
 729  # echo 5678 > p1/tasks
 730  # taskset -cp 2 5678
 731
For the same 2 socket system with memory b/w resource and CAT L3 the
schemata would look like this (assuming min_bandwidth is 10 and
bandwidth_gran is 10):
 735
 736For our first real time task this would request 20% memory b/w on socket 0.
 737::
 738
 739  # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata
 740
For our second real time task this would request another 20% memory b/w
on socket 0.
::

  # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
 746
 7473) Example 3
 748
A single socket system which has real-time tasks running on cores 4-7 and
a non real-time workload assigned to cores 0-3. The real-time tasks share
text and data, so a per task association is not required and due to
interaction with the kernel it's desired that the kernel on these cores
shares L3 with the tasks.
 754::
 755
 756  # mount -t resctrl resctrl /sys/fs/resctrl
 757  # cd /sys/fs/resctrl
 758
 759First we reset the schemata for the default group so that the "upper"
 76050% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
 761cannot be used by ordinary tasks::
 762
 763  # echo "L3:0=3ff\nMB:0=50" > schemata
 764
 765Next we make a resource group for our real time cores and give it access
 766to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
 767socket 0.
 768::
 769
 770  # mkdir p0
 771  # echo "L3:0=ffc00\nMB:0=50" > p0/schemata
 772
 773Finally we move core 4-7 over to the new group and make sure that the
 774kernel and the tasks running there get 50% of the cache. They should
 775also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
 776siblings and only the real time threads are scheduled on the cores 4-7.
 777::
 778
 779  # echo F0 > p0/cpus
 780
 7814) Example 4
 782
The resource groups in previous examples were all in the default "shareable"
mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.
 787
 788In this example a new exclusive resource group will be created on a L2 CAT
 789system with two L2 cache instances that can be configured with an 8-bit
 790capacity bitmask. The new exclusive resource group will be configured to use
 79125% of each cache instance.
 792::
 793
 794  # mount -t resctrl resctrl /sys/fs/resctrl/
 795  # cd /sys/fs/resctrl
 796
 797First, we observe that the default group is configured to allocate to all L2
 798cache::
 799
 800  # cat schemata
 801  L2:0=ff;1=ff
 802
 803We could attempt to create the new resource group at this point, but it will
 804fail because of the overlap with the schemata of the default group::
 805
 806  # mkdir p0
 807  # echo 'L2:0=0x3;1=0x3' > p0/schemata
 808  # cat p0/mode
 809  shareable
 810  # echo exclusive > p0/mode
 811  -sh: echo: write error: Invalid argument
 812  # cat info/last_cmd_status
 813  schemata overlaps
 814
 815To ensure that there is no overlap with another resource group the default
 816resource group's schemata has to change, making it possible for the new
 817resource group to become exclusive.
 818::
 819
 820  # echo 'L2:0=0xfc;1=0xfc' > schemata
 821  # echo exclusive > p0/mode
 822  # grep . p0/*
 823  p0/cpus:0
 824  p0/mode:exclusive
 825  p0/schemata:L2:0=03;1=03
 826  p0/size:L2:0=262144;1=262144
 827
 828A new resource group will on creation not overlap with an exclusive resource
 829group::
 830
 831  # mkdir p1
 832  # grep . p1/*
 833  p1/cpus:0
 834  p1/mode:shareable
 835  p1/schemata:L2:0=fc;1=fc
 836  p1/size:L2:0=786432;1=786432
 837
 838The bit_usage will reflect how the cache is used::
 839
 840  # cat info/L2/bit_usage
 841  0=SSSSSSEE;1=SSSSSSEE
 842
 843A resource group cannot be forced to overlap with an exclusive resource group::
 844
 845  # echo 'L2:0=0x1;1=0x1' > p1/schemata
 846  -sh: echo: write error: Invalid argument
 847  # cat info/last_cmd_status
 848  overlaps with exclusive group
 849
 850Example of Cache Pseudo-Locking
 851~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lock a portion of the L2 cache of cache id 1 using CBM 0x3. The pseudo-locked
region is exposed at /dev/pseudo_lock/newlock and can be provided to an
application as an argument to mmap().
 855::
 856
 857  # mount -t resctrl resctrl /sys/fs/resctrl/
 858  # cd /sys/fs/resctrl
 859
Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata::
 863
 864  # cat info/L2/bit_usage
 865  0=SSSSSSSS;1=SSSSSSSS
 866  # echo 'L2:1=0xfc' > schemata
 867  # cat info/L2/bit_usage
 868  0=SSSSSSSS;1=SSSSSS00
 869
 870Create a new resource group that will be associated with the pseudo-locked
 871region, indicate that it will be used for a pseudo-locked region, and
 872configure the requested pseudo-locked region capacity bitmask::
 873
 874  # mkdir newlock
 875  # echo pseudo-locksetup > newlock/mode
 876  # echo 'L2:1=0x3' > newlock/schemata
 877
 878On success the resource group's mode will change to pseudo-locked, the
 879bit_usage will reflect the pseudo-locked region, and the character device
 880exposing the pseudo-locked region will exist::
 881
 882  # cat newlock/mode
 883  pseudo-locked
 884  # cat info/L2/bit_usage
 885  0=SSSSSSSS;1=SSSSSSPP
 886  # ls -l /dev/pseudo_lock/newlock
 887  crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock
 888
 889::
 890
  /*
   * Example code to access one page of pseudo-locked cache region
   * from user space.
   */
 895  #define _GNU_SOURCE
 896  #include <fcntl.h>
 897  #include <sched.h>
 898  #include <stdio.h>
 899  #include <stdlib.h>
 900  #include <unistd.h>
 901  #include <sys/mman.h>
 902
  /*
   * It is required that the application runs with affinity to only
   * cores associated with the pseudo-locked region. Here the cpu
   * is hardcoded for convenience of example.
   */
 908  static int cpuid = 2;
 909
 910  int main(int argc, char *argv[])
 911  {
 912    cpu_set_t cpuset;
 913    long page_size;
 914    void *mapping;
 915    int dev_fd;
 916    int ret;
 917
 918    page_size = sysconf(_SC_PAGESIZE);
 919
 920    CPU_ZERO(&cpuset);
 921    CPU_SET(cpuid, &cpuset);
 922    ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
 923    if (ret < 0) {
 924      perror("sched_setaffinity");
 925      exit(EXIT_FAILURE);
 926    }
 927
 928    dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
 929    if (dev_fd < 0) {
 930      perror("open");
 931      exit(EXIT_FAILURE);
 932    }
 933
 934    mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
 935            dev_fd, 0);
 936    if (mapping == MAP_FAILED) {
 937      perror("mmap");
 938      close(dev_fd);
 939      exit(EXIT_FAILURE);
 940    }
 941
 942    /* Application interacts with pseudo-locked memory @mapping */
 943
 944    ret = munmap(mapping, page_size);
 945    if (ret < 0) {
 946      perror("munmap");
 947      close(dev_fd);
 948      exit(EXIT_FAILURE);
 949    }
 950
 951    close(dev_fd);
 952    exit(EXIT_SUCCESS);
 953  }
 954
 955Locking between applications
 956----------------------------
 957
 958Certain operations on the resctrl filesystem, composed of read/writes
 959to/from multiple files, must be atomic.
 960
 961As an example, the allocation of an exclusive reservation of L3 cache
 962involves:
 963
 964  1. Read the cbmmasks from each directory or the per-resource "bit_usage"
  2. Find a contiguous set of bits in the global CBM bitmask that is clear
     in all of the directory cbmmasks
 967  3. Create a new directory
 968  4. Set the bits found in step 2 to the new directory "schemata" file
 969
 970If two applications attempt to allocate space concurrently then they can
 971end up allocating the same bits so the reservations are shared instead of
 972exclusive.
 973
 974To coordinate atomic operations on the resctrlfs and to avoid the problem
 975above, the following locking procedure is recommended:
 976
 977Locking is based on flock, which is available in libc and also as a shell
script command.
 979
 980Write lock:
 981
 982 A) Take flock(LOCK_EX) on /sys/fs/resctrl
 983 B) Read/write the directory structure.
 C) Release the lock with flock(LOCK_UN)
 985
 986Read lock:
 987
 988 A) Take flock(LOCK_SH) on /sys/fs/resctrl
 989 B) If success read the directory structure.
 C) Release the lock with flock(LOCK_UN)
 991
 992Example with bash::
 993
 994  # Atomically read directory structure
 995  $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl
 996
 997  # Read directory contents and create new subdirectory
 998
  $ cat create-dir.sh
  find /sys/fs/resctrl/ > output.txt
  mask=$(function-of output.txt)
  mkdir /sys/fs/resctrl/newres/
  echo $mask > /sys/fs/resctrl/newres/schemata
1004
1005  $ flock /sys/fs/resctrl/ ./create-dir.sh
1006
1007Example with C::
1008
  /*
   * Example code to take advisory locks
   * before accessing the resctrl filesystem
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/file.h>
1015
1016  void resctrl_take_shared_lock(int fd)
1017  {
1018    int ret;
1019
1020    /* take shared lock on resctrl filesystem */
1021    ret = flock(fd, LOCK_SH);
1022    if (ret) {
1023      perror("flock");
1024      exit(-1);
1025    }
1026  }
1027
1028  void resctrl_take_exclusive_lock(int fd)
1029  {
1030    int ret;
1031
    /* take exclusive lock on resctrl filesystem */
1033    ret = flock(fd, LOCK_EX);
1034    if (ret) {
1035      perror("flock");
1036      exit(-1);
1037    }
1038  }
1039
1040  void resctrl_release_lock(int fd)
1041  {
1042    int ret;
1043
    /* release lock on resctrl filesystem */
1045    ret = flock(fd, LOCK_UN);
1046    if (ret) {
1047      perror("flock");
1048      exit(-1);
1049    }
1050  }
1051
  int main(void)
  {
    int fd;

    fd = open("/sys/fs/resctrl", O_RDONLY | O_DIRECTORY);
    if (fd == -1) {
      perror("open");
      exit(-1);
    }
    resctrl_take_shared_lock(fd);
    /* code to read directory contents */
    resctrl_release_lock(fd);

    resctrl_take_exclusive_lock(fd);
    /* code to read and write directory contents */
    resctrl_release_lock(fd);

    close(fd);
    return 0;
  }
1069
1070Examples for RDT Monitoring along with allocation usage
1071=======================================================
1072Reading monitored data
1073----------------------
Reading an event file (for example mon_data/mon_L3_00/llc_occupancy) would
1075show the current snapshot of LLC occupancy of the corresponding MON
1076group or CTRL_MON group.
1077
1078
1079Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
1080------------------------------------------------------------------------
1081On a two socket machine (one L3 cache per socket) with just four bits
1082for cache bit masks::
1083
1084  # mount -t resctrl resctrl /sys/fs/resctrl
1085  # cd /sys/fs/resctrl
1086  # mkdir p0 p1
1087  # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
1088  # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
1089  # echo 5678 > p1/tasks
1090  # echo 5679 > p1/tasks
1091
1092The default resource group is unmodified, so we have access to all parts
1093of all caches (its schemata file reads "L3:0=f;1=f").
1094
1095Tasks that are under the control of group "p0" may only allocate from the
1096"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
1097Tasks in group "p1" use the "lower" 50% of cache on both sockets.
1098
1099Create monitor groups and assign a subset of tasks to each monitor group.
1100::
1101
1102  # cd /sys/fs/resctrl/p1/mon_groups
1103  # mkdir m11 m12
1104  # echo 5678 > m11/tasks
1105  # echo 5679 > m12/tasks
1106
1107fetch data (data shown in bytes)
1108::
1109
1110  # cat m11/mon_data/mon_L3_00/llc_occupancy
1111  16234000
1112  # cat m11/mon_data/mon_L3_01/llc_occupancy
1113  14789000
1114  # cat m12/mon_data/mon_L3_00/llc_occupancy
1115  16789000
1116
The parent CTRL_MON group shows the aggregated data.
1118::
1119
  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
1121  31234000
1122
1123Example 2 (Monitor a task from its creation)
1124--------------------------------------------
1125On a two socket machine (one L3 cache per socket)::
1126
1127  # mount -t resctrl resctrl /sys/fs/resctrl
1128  # cd /sys/fs/resctrl
1129  # mkdir p0 p1
1130
An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.
1133::
1134
1135  # echo $$ > /sys/fs/resctrl/p1/tasks
1136  # <cmd>
1137
1138Fetch the data::
1139
  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
1141  31789000
1142
1143Example 3 (Monitor without CAT support or before creating CAT groups)
1144---------------------------------------------------------------------
1145
Assume a system like HSW has only CQM and no CAT support. In this case
resctrl will still mount but cannot create CTRL_MON directories. But the
user can create different MON groups within the root group and thereby be
able to monitor all tasks including kernel threads.

This can also be used to profile jobs' cache size footprint before being
able to allocate them to different allocation groups.
1153::
1154
1155  # mount -t resctrl resctrl /sys/fs/resctrl
1156  # cd /sys/fs/resctrl
1157  # mkdir mon_groups/m01
1158  # mkdir mon_groups/m02
1159
1160  # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
1161  # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks
1162
Monitor the groups separately and also get per domain data. From the data
below it is apparent that the tasks are mostly doing work on
domain(socket) 0.
1166::
1167
  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
  34555
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
  31234000
  # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
  32789
1176
1177
1178Example 4 (Monitor real time tasks)
1179-----------------------------------
1180
1181A single socket system which has real time tasks running on cores 4-7
1182and non real time tasks on other cpus. We want to monitor the cache
1183occupancy of the real time threads on these cores.
1184::
1185
1186  # mount -t resctrl resctrl /sys/fs/resctrl
1187  # cd /sys/fs/resctrl
1188  # mkdir p1
1189
1190Move the cpus 4-7 over to p1::
1191
1192  # echo f0 > p1/cpus
1193
1194View the llc occupancy snapshot::
1195
1196  # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
1197  11234000