What:           /sys/devices/system/cpu/
Date:           pre-git history
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:
                A collection of both global and individual CPU attributes

                Individual CPU attributes are contained in subdirectories
                named by the kernel's logical CPU number, e.g.:

                /sys/devices/system/cpu/cpu#/

What:           /sys/devices/system/cpu/kernel_max
                /sys/devices/system/cpu/offline
                /sys/devices/system/cpu/online
                /sys/devices/system/cpu/possible
                /sys/devices/system/cpu/present
Date:           December 2008
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    CPU topology files that describe kernel limits related to
                hotplug. Briefly:

                kernel_max: the maximum cpu index allowed by the kernel
                configuration.

                offline: cpus that are not online because they have been
                HOTPLUGGED off or exceed the limit of cpus allowed by the
                kernel configuration (kernel_max above).

                online: cpus that are online and being scheduled.

                possible: cpus that have been allocated resources and can be
                brought online if they are present.

                present: cpus that have been identified as being present in
                the system.

                See Documentation/cputopology.txt for more information.

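                As an illustration only (not part of the ABI), a minimal
                Python sketch that reads the list files above and expands the
                kernel's CPU-list syntax (e.g. "0-3,5") into explicit CPU
                numbers; error handling is omitted for brevity:

                    # Illustration only; paths are taken from this document.
                    def read_cpu_list(path):
                        with open(path) as f:
                            text = f.read().strip()
                        cpus = []
                        # An empty file (e.g. no offline CPUs) yields an empty list.
                        for part in text.split(",") if text else []:
                            if "-" in part:
                                lo, hi = part.split("-")
                                cpus.extend(range(int(lo), int(hi) + 1))
                            else:
                                cpus.append(int(part))
                        return cpus

                    base = "/sys/devices/system/cpu"
                    for name in ("online", "offline", "possible", "present"):
                        print(name, read_cpu_list(f"{base}/{name}"))
                    print("kernel_max", open(f"{base}/kernel_max").read().strip())
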
What:           /sys/devices/system/cpu/probe
                /sys/devices/system/cpu/release
Date:           November 2009
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Dynamic addition and removal of CPUs. This is not hotplug
                removal; it is meant for complete removal/addition of a CPU
                from the system.

                probe: writes to this file will dynamically add a CPU to
                the system. Information written to the file to add CPUs
                is architecture specific.

                release: writes to this file will dynamically remove a CPU
                from the system. Information written to the file to remove
                CPUs is architecture specific.

What:           /sys/devices/system/cpu/cpu#/node
Date:           October 2009
Contact:        Linux memory management mailing list <linux-mm@kvack.org>
Description:    Discover NUMA node a CPU belongs to

                When CONFIG_NUMA is enabled, a symbolic link that points
                to the corresponding NUMA node directory.

                For example, the following symlink is created for cpu42
                in NUMA node 2:

                /sys/devices/system/cpu/cpu42/node2 -> ../../node/node2

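                Illustration only: a minimal Python sketch that discovers the
                NUMA node of a CPU by looking for the node<N> link described
                above (CPU number 0 is just an example value):

                    import os, re

                    def cpu_to_node(cpu):
                        d = f"/sys/devices/system/cpu/cpu{cpu}"
                        for entry in os.listdir(d):
                            m = re.fullmatch(r"node(\d+)", entry)
                            if m:
                                return int(m.group(1))
                        return None  # no link, e.g. CONFIG_NUMA disabled

                    print(cpu_to_node(0))
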
What:           /sys/devices/system/cpu/cpu#/topology/core_id
                /sys/devices/system/cpu/cpu#/topology/core_siblings
                /sys/devices/system/cpu/cpu#/topology/core_siblings_list
                /sys/devices/system/cpu/cpu#/topology/physical_package_id
                /sys/devices/system/cpu/cpu#/topology/thread_siblings
                /sys/devices/system/cpu/cpu#/topology/thread_siblings_list
Date:           December 2008
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    CPU topology files that describe a logical CPU's relationship
                to other cores and threads in the same physical package.

                One cpu# directory is created per logical CPU in the system,
                e.g. /sys/devices/system/cpu/cpu42/.

                Briefly, the files above are:

                core_id: the CPU core ID of cpu#. Typically it is the
                hardware platform's identifier (rather than the kernel's).
                The actual value is architecture and platform dependent.

                core_siblings: internal kernel map of cpu#'s hardware threads
                within the same physical_package_id.

                core_siblings_list: human-readable list of the logical CPU
                numbers within the same physical_package_id as cpu#.

                physical_package_id: physical package id of cpu#. Typically
                corresponds to a physical socket number, but the actual value
                is architecture and platform dependent.

                thread_siblings: internal kernel map of cpu#'s hardware
                threads within the same core as cpu#.

                thread_siblings_list: human-readable list of cpu#'s hardware
                threads within the same core as cpu#.

                See Documentation/cputopology.txt for more information.

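                Illustration only: a minimal Python sketch that prints the
                topology attributes listed above for cpu0 (an example CPU);
                the sibling masks and the *_list variants are read as plain
                text:

                    topo = "/sys/devices/system/cpu/cpu0/topology"
                    for name in ("core_id", "physical_package_id",
                                 "core_siblings", "core_siblings_list",
                                 "thread_siblings", "thread_siblings_list"):
                        with open(f"{topo}/{name}") as f:
                            print(f"{name}: {f.read().strip()}")
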
What:           /sys/devices/system/cpu/cpuidle/current_driver
                /sys/devices/system/cpu/cpuidle/current_governor_ro
Date:           September 2007
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Discover cpuidle policy and mechanism

                Various CPUs today support multiple idle levels that are
                differentiated by varying exit latencies and power
                consumption during idle.

                Idle policy (governor) is differentiated from idle mechanism
                (driver).

                current_driver: displays current idle mechanism

                current_governor_ro: displays current idle policy

                See files in Documentation/cpuidle/ for more information.

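                Illustration only: a minimal Python sketch reporting the
                active idle mechanism (driver) and policy (governor):

                    base = "/sys/devices/system/cpu/cpuidle"
                    for name in ("current_driver", "current_governor_ro"):
                        with open(f"{base}/{name}") as f:
                            print(f"{name}: {f.read().strip()}")
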
What:           /sys/devices/system/cpu/cpu#/cpufreq/*
Date:           pre-git history
Contact:        linux-pm@vger.kernel.org
Description:    Discover and change clock speed of CPUs

                Clock scaling allows you to change the clock speed of the
                CPUs on the fly. This is a nice method to save battery
                power, because the lower the clock speed, the less power
                the CPU consumes.

                There are many knobs to tweak in this directory.

                See files in Documentation/cpu-freq/ for more information.

                In particular, read Documentation/cpu-freq/user-guide.txt
                to learn how to control the knobs.

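                Illustration only: a minimal Python sketch that dumps whatever
                cpufreq attributes the running kernel exposes for cpu0 (an
                example CPU); the exact set of knobs is driver dependent, so
                no attribute names are assumed here:

                    import os

                    d = "/sys/devices/system/cpu/cpu0/cpufreq"
                    for name in sorted(os.listdir(d)):
                        path = os.path.join(d, name)
                        if not os.path.isfile(path):
                            continue  # skip subdirectories such as per-governor dirs
                        try:
                            with open(path) as f:
                                print(f"{name}: {f.read().strip()}")
                        except OSError:
                            print(f"{name}: <write-only or unreadable>")
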
What:           /sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus
Date:           June 2013
Contact:        linux-pm@vger.kernel.org
Description:    Discover CPUs in the same CPU frequency coordination domain

                freqdomain_cpus is the list of CPUs (online+offline) that
                share the same clock/freq domain (possibly at the hardware
                level). That information may be hidden from the cpufreq core
                and the value of related_cpus may be different from
                freqdomain_cpus. This attribute is useful for user space DVFS
                controllers to get better power/performance results for
                platforms using acpi-cpufreq.

                This file is only present if the acpi-cpufreq driver is in
                use.

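                Illustration only: a minimal Python sketch (assuming cpu0 and
                the acpi-cpufreq driver) that compares freqdomain_cpus with
                related_cpus as described above:

                    d = "/sys/devices/system/cpu/cpu0/cpufreq"
                    for name in ("freqdomain_cpus", "related_cpus"):
                        try:
                            with open(f"{d}/{name}") as f:
                                print(f"{name}: {f.read().strip()}")
                        except FileNotFoundError:
                            print(f"{name}: not present (driver does not expose it)")
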
What:           /sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
Date:           August 2008
KernelVersion:  2.6.27
Contact:        discuss@x86-64.org
Description:    Disable L3 cache indices

                These files exist in every CPU's cache/index3 directory. Each
                cache_disable_{0,1} file corresponds to one disable slot which
                can be used to disable a cache index. Reading from these files
                on a processor with this functionality will return the
                currently disabled index for that node. There is one L3
                structure per node, or per internal node on MCM machines.
                Writing a valid index to one of these files will cause the
                specified cache index to be disabled.

                All AMD processors with L3 caches provide this functionality.
                For details, see BKDGs at
                http://developer.amd.com/documentation/guides/Pages/default.aspx

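                Illustration only: a minimal Python sketch that reads the two
                disable slots for cpu0 (an example CPU). It deliberately does
                not write, since writing a valid index (as root) disables that
                cache index on the corresponding node:

                    d = "/sys/devices/system/cpu/cpu0/cache/index3"
                    for name in ("cache_disable_0", "cache_disable_1"):
                        try:
                            with open(f"{d}/{name}") as f:
                                print(f"{name}: {f.read().strip()}")
                        except OSError:
                            print(f"{name}: not available on this CPU")
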
What:           /sys/devices/system/cpu/cpufreq/boost
Date:           August 2012
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Processor frequency boosting control

                This switch controls the boost setting for the whole system.
                Boosting allows the CPU and the firmware to run at a frequency
                beyond its nominal limit.

                More details can be found in Documentation/cpu-freq/boost.txt

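                Illustration only: a minimal Python sketch reading the global
                boost switch; writing "0" or "1" to the same file as root
                would disable or enable boosting system-wide:

                    path = "/sys/devices/system/cpu/cpufreq/boost"
                    with open(path) as f:
                        print("boost enabled:", f.read().strip() == "1")
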
What:           /sys/devices/system/cpu/cpu#/crash_notes
                /sys/devices/system/cpu/cpu#/crash_notes_size
Date:           April 2013
Contact:        kexec@lists.infradead.org
Description:    Address and size of the percpu note.

                crash_notes: the physical address of the memory that holds
                the note of cpu#.

                crash_notes_size: size of the note of cpu#.

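                Illustration only: a minimal Python sketch that reads both
                attributes for cpu0 (an example CPU); the values are plain
                text, the address typically as a hexadecimal string:

                    d = "/sys/devices/system/cpu/cpu0"
                    for name in ("crash_notes", "crash_notes_size"):
                        with open(f"{d}/{name}") as f:
                            print(f"{name}: {f.read().strip()}")
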
What:           /sys/devices/system/cpu/intel_pstate/max_perf_pct
                /sys/devices/system/cpu/intel_pstate/min_perf_pct
                /sys/devices/system/cpu/intel_pstate/no_turbo
Date:           February 2013
Contact:        linux-pm@vger.kernel.org
Description:    Parameters for the Intel P-state driver

                Logic for selecting the current P-state in Intel
                Sandybridge+ processors. The three knobs control limits for
                the P-state that will be requested by the driver.

                max_perf_pct: limits the maximum P-state that will be
                requested by the driver, stated as a percentage of the
                available performance.

                min_perf_pct: limits the minimum P-state that will be
                requested by the driver, stated as a percentage of the
                available performance.

                no_turbo: limits the driver to selecting P-states below the
                turbo frequency range.

                More details can be found in
                Documentation/cpu-freq/intel-pstate.txt

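                Illustration only: a minimal Python sketch reading the three
                intel_pstate limits described above; writing a new percentage
                (or 0/1 for no_turbo) as root adjusts the corresponding limit:

                    d = "/sys/devices/system/cpu/intel_pstate"
                    for name in ("max_perf_pct", "min_perf_pct", "no_turbo"):
                        with open(f"{d}/{name}") as f:
                            print(f"{name}: {f.read().strip()}")
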
What:           /sys/devices/system/cpu/vulnerabilities
                /sys/devices/system/cpu/vulnerabilities/meltdown
                /sys/devices/system/cpu/vulnerabilities/spectre_v1
                /sys/devices/system/cpu/vulnerabilities/spectre_v2
                /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
                /sys/devices/system/cpu/vulnerabilities/l1tf
                /sys/devices/system/cpu/vulnerabilities/mds
                /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
                /sys/devices/system/cpu/vulnerabilities/itlb_multihit
Date:           January 2018
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Information about CPU vulnerabilities

                The files are named after the code names of CPU
                vulnerabilities. The output of those files reflects the
                state of the CPUs in the system. Possible output values:

                "Not affected"    CPU is not affected by the vulnerability
                "Vulnerable"      CPU is affected and no mitigation in effect
                "Mitigation: $M"  CPU is affected and mitigation $M is in effect
