What:		/sys/devices/system/cpu/
Date:		pre-git history
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:
		A collection of both global and individual CPU attributes

		Individual CPU attributes are contained in subdirectories
		named by the kernel's logical CPU number, e.g.:

		/sys/devices/system/cpu/cpu#/

What:		/sys/devices/system/cpu/sched_mc_power_savings
		/sys/devices/system/cpu/sched_smt_power_savings
Date:		June 2006
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Discover and adjust the kernel's multi-core scheduler support.

		Possible values are:

		0 - No power saving load balance (default value)
		1 - Fill one thread/core/package first for long running threads
		2 - Also bias task wakeups to semi-idle cpu package for power
		    savings

		sched_mc_power_savings is dependent upon SCHED_MC, which is
		itself architecture dependent.

		sched_smt_power_savings is dependent upon SCHED_SMT, which
		is itself architecture dependent.

		The two files are independent of each other. It is possible
		that one file may be present without the other.

		Introduced by git commit 5c45bf27.


What:		/sys/devices/system/cpu/kernel_max
		/sys/devices/system/cpu/offline
		/sys/devices/system/cpu/online
		/sys/devices/system/cpu/possible
		/sys/devices/system/cpu/present
Date:		December 2008
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	CPU topology files that describe kernel limits related to
		hotplug. Briefly:

		kernel_max: the maximum cpu index allowed by the kernel
		configuration.

		offline: cpus that are not online because they have been
		HOTPLUGGED off or exceed the limit of cpus allowed by the
		kernel configuration (kernel_max above).

		online: cpus that are online and being scheduled.

		possible: cpus that have been allocated resources and can be
		brought online if they are present.

		present: cpus that have been identified as being present in
		the system.

		See Documentation/cputopology.txt for more information.

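		The online, offline, possible and present files all use the
		kernel's CPU list format: comma-separated decimal ranges such
		as "0-3,5". As a minimal sketch (the helper name is ours, not
		part of any kernel API), such a value can be decoded like this:

```python
def parse_cpu_list(text):
    """Parse the kernel's CPU list format, e.g. "0-3,5" -> [0, 1, 2, 3, 5].

    An empty file (e.g. no offline CPUs) yields an empty list.
    """
    cpus = []
    text = text.strip()
    if not text:
        return cpus
    for chunk in text.split(","):
        if "-" in chunk:
            lo, hi = chunk.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(chunk))
    return cpus

# On a live system one would read the real file, e.g.:
# with open("/sys/devices/system/cpu/online") as f:
#     online = parse_cpu_list(f.read())
```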

What:		/sys/devices/system/cpu/probe
		/sys/devices/system/cpu/release
Date:		November 2009
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Dynamic addition and removal of CPUs.  This is not hotplug
		removal; it is the complete removal/addition of the CPU
		from the system.

		probe: writes to this file will dynamically add a CPU to the
		system.  The information written to the file to add CPUs is
		architecture specific.

		release: writes to this file will dynamically remove a CPU
		from the system.  The information written to the file to
		remove CPUs is architecture specific.

What:		/sys/devices/system/cpu/cpu#/node
Date:		October 2009
Contact:	Linux memory management mailing list <linux-mm@kvack.org>
Description:	Discover the NUMA node a CPU belongs to

		When CONFIG_NUMA is enabled, this is a symbolic link that
		points to the corresponding NUMA node directory.

		For example, the following symlink is created for cpu42
		in NUMA node 2:

		/sys/devices/system/cpu/cpu42/node2 -> ../../node/node2


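		Since the symlink is named after the node, the NUMA node of a
		CPU can be recovered from the cpu# directory listing. A small
		sketch (the function name is ours; on a live system the entry
		names would come from os.listdir on the cpu# directory):

```python
import re

def numa_node_of(entries):
    """Given the entry names of a cpu# sysfs directory, return the NUMA
    node number taken from the nodeN symlink, or None when no such link
    exists (e.g. CONFIG_NUMA is disabled)."""
    for name in entries:
        m = re.fullmatch(r"node(\d+)", name)
        if m:
            return int(m.group(1))
    return None
```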

What:		/sys/devices/system/cpu/cpu#/topology/core_id
		/sys/devices/system/cpu/cpu#/topology/core_siblings
		/sys/devices/system/cpu/cpu#/topology/core_siblings_list
		/sys/devices/system/cpu/cpu#/topology/physical_package_id
		/sys/devices/system/cpu/cpu#/topology/thread_siblings
		/sys/devices/system/cpu/cpu#/topology/thread_siblings_list
Date:		December 2008
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	CPU topology files that describe a logical CPU's relationship
		to other cores and threads in the same physical package.

		One cpu# directory is created per logical CPU in the system,
		e.g. /sys/devices/system/cpu/cpu42/.

		Briefly, the files above are:

		core_id: the CPU core ID of cpu#. Typically it is the
		hardware platform's identifier (rather than the kernel's).
		The actual value is architecture and platform dependent.

		core_siblings: internal kernel map of cpu#'s hardware threads
		within the same physical_package_id.

		core_siblings_list: human-readable list of the logical CPU
		numbers within the same physical_package_id as cpu#.

		physical_package_id: physical package id of cpu#. Typically
		corresponds to a physical socket number, but the actual value
		is architecture and platform dependent.

		thread_siblings: internal kernel map of cpu#'s hardware
		threads within the same core as cpu#.

		thread_siblings_list: human-readable list of cpu#'s hardware
		threads within the same core as cpu#.

		See Documentation/cputopology.txt for more information.

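		The non-_list sibling files are cpumasks: groups of hex
		digits separated by commas, most significant group first,
		where bit N set means logical CPU N is a sibling. A minimal
		decoding sketch (the helper name is ours):

```python
def parse_cpumask(mask):
    """Decode a sysfs cpumask such as "00000000,00000003" into the list
    of set CPU numbers. Removing the comma separators leaves one hex
    number whose bit N corresponds to logical CPU N."""
    value = int(mask.replace(",", "").strip(), 16)
    cpus, bit = [], 0
    while value:
        if value & 1:
            cpus.append(bit)
        value >>= 1
        bit += 1
    return cpus
```

		For the example above, "00000000,00000003" decodes to CPUs
		0 and 1, which is exactly what the matching _list file would
		show as "0-1".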

What:		/sys/devices/system/cpu/cpuidle/current_driver
		/sys/devices/system/cpu/cpuidle/current_governor_ro
Date:		September 2007
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Discover cpuidle policy and mechanism

		Various CPUs today support multiple idle levels that are
		differentiated by varying exit latencies and power
		consumption during idle.

		Idle policy (governor) is differentiated from idle mechanism
		(driver).

		current_driver: displays the current idle mechanism.

		current_governor_ro: displays the current idle policy.

		See files in Documentation/cpuidle/ for more information.


What:		/sys/devices/system/cpu/cpu#/cpufreq/*
Date:		pre-git history
Contact:	cpufreq@vger.kernel.org
Description:	Discover and change clock speed of CPUs

		Clock scaling allows you to change the clock speed of the
		CPUs on the fly. This is a nice method to save battery
		power, because the lower the clock speed, the less power
		the CPU consumes.

		There are many knobs to tweak in this directory.

		See files in Documentation/cpu-freq/ for more information.

		In particular, read Documentation/cpu-freq/user-guide.txt
		to learn how to control the knobs.

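		One such knob, scaling_available_frequencies, is exposed by
		some cpufreq drivers as a space-separated list of frequencies
		in kHz. A sketch of reading it (the helper name is ours; not
		every driver provides this file):

```python
def parse_available_freqs(text):
    """Parse a scaling_available_frequencies value, a space-separated
    list of frequencies in kHz, into a sorted list of ints."""
    return sorted(int(f) for f in text.split())

# Example value as some drivers would report it:
freqs = parse_available_freqs("2400000 1800000 1200000")
lowest, highest = freqs[0], freqs[-1]  # 1200000 and 2400000 kHz
```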
What:		/sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
Date:		August 2008
KernelVersion:	2.6.27
Contact:	discuss@x86-64.org
Description:	Disable L3 cache indices

		These files exist in every CPU's cache/index3 directory. Each
		cache_disable_{0,1} file corresponds to one disable slot which
		can be used to disable a cache index. Reading from these files
		on a processor with this functionality will return the currently
		disabled index for that node. There is one L3 structure per
		node, or per internal node on MCM machines. Writing a valid
		index to one of these files will cause the specified cache
		index to be disabled.

		All AMD processors with L3 caches provide this functionality.
		For details, see BKDGs at
		http://developer.amd.com/documentation/guides/Pages/default.aspx


What:		/sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus
Date:		June 2013
Contact:	cpufreq@vger.kernel.org
Description:	Discover CPUs in the same CPU frequency coordination domain

		freqdomain_cpus is the list of CPUs (online+offline) that share
		the same clock/freq domain (possibly at the hardware level).
		That information may be hidden from the cpufreq core and the
		value of related_cpus may be different from freqdomain_cpus. This
		attribute is useful for user space DVFS controllers to get better
		power/performance results for platforms using acpi-cpufreq.

		This file is only present if the acpi-cpufreq driver is in use.


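		A user space DVFS controller can compare the two files to spot
		domain members the cpufreq core does not report. A sketch of
		that comparison (the function name is ours; the arguments are
		CPU lists parsed from the two sysfs files):

```python
def hidden_domain_cpus(freqdomain, related):
    """CPUs that share a hardware frequency domain (freqdomain_cpus)
    but are not visible to the cpufreq core as related_cpus."""
    return sorted(set(freqdomain) - set(related))

# If freqdomain_cpus reports 0-3 but related_cpus only 0-1,
# CPUs 2 and 3 are coordinated in hardware without cpufreq knowing:
extra = hidden_domain_cpus([0, 1, 2, 3], [0, 1])
```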
What:		/sys/devices/system/cpu/cpufreq/boost
Date:		August 2012
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Processor frequency boosting control

		This switch controls the boost setting for the whole system.
		Boosting allows the CPU and the firmware to run at a frequency
		beyond its nominal limit.
		More details can be found in Documentation/cpu-freq/boost.txt


What:		/sys/devices/system/cpu/cpu#/crash_notes
		/sys/devices/system/cpu/cpu#/crash_notes_size
Date:		April 2013
Contact:	kexec@lists.infradead.org
Description:	Address and size of the percpu note.

		crash_notes: the physical address of the memory that holds the
		note of cpu#.

		crash_notes_size: size of the note of cpu#.


What:		/sys/devices/system/cpu/intel_pstate/max_perf_pct
		/sys/devices/system/cpu/intel_pstate/min_perf_pct
		/sys/devices/system/cpu/intel_pstate/no_turbo
Date:		February 2013
Contact:	linux-pm@vger.kernel.org
Description:	Parameters for the Intel P-state driver

		Logic for selecting the current P-state in Intel
		Sandy Bridge and newer processors. The three knobs control
		limits for the P-state that will be requested by the
		driver.

		max_perf_pct: limits the maximum P-state that will be requested
		by the driver, stated as a percentage of the available
		performance.

		min_perf_pct: limits the minimum P-state that will be requested
		by the driver, stated as a percentage of the available
		performance.

		no_turbo: limits the driver to selecting P-states below the
		turbo frequency range.

		More details can be found in Documentation/cpu-freq/intel-pstate.txt
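
		The driver's exact internal mapping from percent to P-state is
		not specified here; purely as an illustration of what "a
		percentage of the available performance" means, assuming a
		linear mapping over a hypothetical P-state range:

```python
def pstate_bounds(min_pstate, turbo_pstate, min_perf_pct, max_perf_pct):
    """Illustration only: map the percent knobs onto an assumed linear
    P-state range [min_pstate, turbo_pstate]. The real driver's scaling
    is internal and may differ."""
    span = turbo_pstate - min_pstate
    lo = min_pstate + span * min_perf_pct // 100
    hi = min_pstate + span * max_perf_pct // 100
    return lo, hi

# With a hypothetical range of P-states 8..40, min_perf_pct=50 and
# max_perf_pct=100 would confine requests to the upper half:
bounds = pstate_bounds(8, 40, 50, 100)
```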