Kernel-provided User Helpers
============================

These are segments of kernel-provided user code reachable from user space
at a fixed address in kernel memory.  This is used to provide user space
with some operations which require kernel help because of unimplemented
native features and/or instructions in many ARM CPUs.  The idea is for this
code to be executed directly in user mode for best efficiency, but it is
too intimate with the kernel counterpart to be left to user libraries.
In fact this code might even differ from one CPU to another depending on
the available instruction set, or whether it is an SMP system.  In other
words, the kernel reserves the right to change this code as needed without
warning.  Only the entry points and their results as documented here are
guaranteed to be stable.

This is different from (but doesn't preclude) a full-blown VDSO
implementation; however, a VDSO would prevent some assembly tricks with
constants that allow for efficient branching to those code segments.  And
since those code segments only use a few cycles before returning to user
code, the indirect far call of a VDSO would add a measurable overhead to
such minimalistic operations.

User space is expected to bypass these helpers and implement the same
operations inline (either in the code emitted directly by the compiler, or
as part of the implementation of a library call) when optimizing for a
recent enough processor that has the necessary native support, but only if
the resulting binaries are already going to be incompatible with earlier
ARM processors due to the use of similar native instructions for other
things.  In other words, don't make binaries unable to run on earlier
processors just for the sake of avoiding these kernel helpers if your
compiled code is not going to use new instructions for other purposes.

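As a hedged illustration of that trade-off (not part of the kernel
interface; __ARM_ARCH and __sync_bool_compare_and_swap are compiler
features assumed here, and try_cmpxchg is an arbitrary name), a library
might select between a native sequence and the kernel helper at build time:

/*
 * Illustrative only: pick an implementation at build time.  When the
 * compiler already targets ARMv6 or later (and will therefore emit new
 * instructions elsewhere anyway), use a native atomic sequence;
 * otherwise fall back to the kernel helper so the binary still runs on
 * earlier processors.
 */
#if defined(__ARM_ARCH) && __ARM_ARCH >= 6
static inline int try_cmpxchg(volatile int *ptr, int oldval, int newval)
{
	/* The compiler emits the native sequence (e.g. LDREX/STREX). */
	return !__sync_bool_compare_and_swap(ptr, oldval, newval);
}
#else
typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
#define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

static inline int try_cmpxchg(volatile int *ptr, int oldval, int newval)
{
	/* Kernel helper: returns zero on success, non-zero otherwise. */
	return __kuser_cmpxchg(oldval, newval, ptr);
}
#endif
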
New helpers may be added over time, so an older kernel may be missing some
helpers present in a newer kernel.  For this reason, programs must check
the value of __kuser_helper_version (see below) before assuming that it is
safe to call any particular helper.  This check should ideally be
performed only once at process startup time, and execution aborted early
if the required helpers are not provided by the kernel version that the
process is running on.

kuser_helper_version
--------------------

Location:	0xffff0ffc

Reference declaration:

  extern int32_t __kuser_helper_version;

Definition:

  This field contains the number of helpers being implemented by the
  running kernel.  User space may read this to determine the availability
  of a particular helper.

Usage example:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define __kuser_helper_version (*(int32_t *)0xffff0ffc)

void check_kuser_version(void)
{
	if (__kuser_helper_version < 2) {
		fprintf(stderr, "can't do atomic operations, kernel too old\n");
		abort();
	}
}

Notes:

  User space may assume that the value of this field never changes
  during the lifetime of any single process.  This means that this
  field can be read once during the initialisation of a library or
  startup phase of a program.

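For instance, a library might cache the value once when it is loaded; a
minimal sketch, assuming a GCC-style constructor (kuser_version and
init_kuser_version are illustrative names):

#include <stdint.h>

static int32_t kuser_version;

/* Runs once when the library or program is loaded. */
__attribute__((constructor))
static void init_kuser_version(void)
{
	kuser_version = *(int32_t *)0xffff0ffc;
}
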
kuser_get_tls
-------------

Location:	0xffff0fe0

Reference prototype:

  void * __kuser_get_tls(void);

Input:

  lr = return address

Output:

  r0 = TLS value

Clobbered registers:

  none

Definition:

  Get the TLS value as previously set via the __ARM_NR_set_tls syscall.

Usage example:

#include <stdio.h>

typedef void * (__kuser_get_tls_t)(void);
#define __kuser_get_tls (*(__kuser_get_tls_t *)0xffff0fe0)

void foo(void)
{
	void *tls = __kuser_get_tls();
	printf("TLS = %p\n", tls);
}

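The helper simply returns whatever was installed with the __ARM_NR_set_tls
private syscall; a hedged sketch of the setter side, assuming the usual ARM
EABI syscall number (set_thread_pointer is an illustrative name):

#include <unistd.h>

#ifndef __ARM_NR_set_tls
#define __ARM_NR_set_tls 0x0f0005	/* __ARM_NR_BASE + 5 on EABI */
#endif

/* Install a thread pointer that __kuser_get_tls will later return. */
static void set_thread_pointer(void *tp)
{
	syscall(__ARM_NR_set_tls, tp);
}
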
Notes:

  - Valid only if __kuser_helper_version >= 1 (from kernel version 2.6.12).

kuser_cmpxchg
-------------

Location:	0xffff0fc0

Reference prototype:

  int __kuser_cmpxchg(int32_t oldval, int32_t newval, volatile int32_t *ptr);

Input:

  r0 = oldval
  r1 = newval
  r2 = ptr
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, ip, flags

Definition:

  Atomically store newval in *ptr only if *ptr is equal to oldval.
  Return zero if *ptr was changed or non-zero if no exchange happened.
  The C flag is also set if *ptr was changed to allow for assembly
  optimization in the calling code.

Usage example:

typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
#define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

int atomic_add(volatile int *ptr, int val)
{
	int old, new;

	do {
		old = *ptr;
		new = old + val;
	} while (__kuser_cmpxchg(old, new, ptr));

	return new;
}

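Since the helper returns zero only when the exchange succeeded, it can also
serve as the acquire primitive of a simple lock; a minimal sketch built on
the definitions above (spin_lock is an illustrative name, and releasing the
lock would additionally need the memory barrier helper described below):

/* Spin until the lock word is atomically changed from 0 (free) to 1 (held). */
void spin_lock(volatile int *lock)
{
	while (__kuser_cmpxchg(0, 1, lock))
		;
}
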
Notes:

  - This routine already includes memory barriers as needed.

  - Valid only if __kuser_helper_version >= 2 (from kernel version 2.6.12).

kuser_memory_barrier
--------------------

Location:	0xffff0fa0

Reference prototype:

  void __kuser_memory_barrier(void);

Input:

  lr = return address

Output:

  none

Clobbered registers:

  none

Definition:

  Apply any memory barrier needed to preserve consistency between data
  modified manually and data accessed through __kuser_cmpxchg.

Usage example:

typedef void (__kuser_dmb_t)(void);
#define __kuser_dmb (*(__kuser_dmb_t *)0xffff0fa0)

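A hedged sketch of how the barrier might be used, together with the
definitions above, to publish data to another thread (publish and its
parameters are illustrative):

/* Producer side: make the payload globally visible before raising the flag. */
void publish(volatile int *data, volatile int *ready, int value)
{
	*data = value;
	__kuser_dmb();
	*ready = 1;
}
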
Notes:

  - Valid only if __kuser_helper_version >= 3 (from kernel version 2.6.15).

kuser_cmpxchg64
---------------

Location:	0xffff0f60

Reference prototype:

  int __kuser_cmpxchg64(const int64_t *oldval,
                        const int64_t *newval,
                        volatile int64_t *ptr);

Input:

  r0 = pointer to oldval
  r1 = pointer to newval
  r2 = pointer to target value
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, lr, flags

Definition:

  Atomically store the 64-bit value pointed to by newval in *ptr only if
  *ptr is equal to the 64-bit value pointed to by oldval.  Return zero if
  *ptr was changed or non-zero if no exchange happened.

  The C flag is also set if *ptr was changed to allow for assembly
  optimization in the calling code.

Usage example:

#include <stdint.h>

typedef int (__kuser_cmpxchg64_t)(const int64_t *oldval,
                                  const int64_t *newval,
                                  volatile int64_t *ptr);
#define __kuser_cmpxchg64 (*(__kuser_cmpxchg64_t *)0xffff0f60)

int64_t atomic_add64(volatile int64_t *ptr, int64_t val)
{
	int64_t old, new;

	do {
		old = *ptr;
		new = old + val;
	} while (__kuser_cmpxchg64(&old, &new, ptr));

	return new;
}

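The same retry loop covers other 64-bit read-modify-write operations; for
example, a hedged sketch of an unconditional exchange built on the
definitions above (atomic_xchg64 is an illustrative name):

int64_t atomic_xchg64(volatile int64_t *ptr, int64_t val)
{
	int64_t old;

	/* Retry until the value we read is still current and gets replaced. */
	do {
		old = *ptr;
	} while (__kuser_cmpxchg64(&old, &val, ptr));

	return old;
}
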
Notes:

  - This routine already includes memory barriers as needed.

  - Due to the length of this sequence, this spans 2 conventional kuser
    "slots", therefore 0xffff0f80 is not used as a valid entry point.

  - Valid only if __kuser_helper_version >= 5 (from kernel version 3.1).