=======================================================
Semantics and Behavior of Atomic and Bitmask Operations
=======================================================

:Author: David S. Miller

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

Atomic Type And Operations
==========================

The atomic_t type should be defined as a signed integer and
the atomic_long_t type as a signed long integer. Also, they should
be made opaque such that any kind of cast to a normal C integer type
will fail. Something like the following should suffice::

        typedef struct { int counter; } atomic_t;
        typedef struct { long counter; } atomic_long_t;

Historically, counter has been declared volatile. This is now discouraged.
See :ref:`Documentation/process/volatile-considered-harmful.rst
<volatile_considered_harmful>` for the complete rationale.

local_t is very similar to atomic_t. If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate. Please see
:ref:`Documentation/core-api/local_ops.rst <local_ops>` for the semantics of
local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads. ::

        #define ATOMIC_INIT(i)          { (i) }
        #define atomic_set(v, i)        ((v)->counter = (i))

The first macro is used in definitions, such as::

        static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to correctly reflect the initialized value if the
initializer is used before runtime. If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

As with all of the ``atomic_`` interfaces, replace the leading ``atomic_``
with ``atomic_long_`` to operate on atomic_long_t.

The second interface can be used at runtime, as in::

        struct foo { atomic_t counter; };
        ...

        struct foo *k;

        k = kmalloc(sizeof(*k), GFP_KERNEL);
        if (!k)
                return -ENOMEM;
        atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to correctly reflect either the value that has
been set with this operation or the value set with another operation. A
proper implicit or explicit memory barrier is needed before the value set
with the operation is guaranteed to be readable with atomic_read from
another thread.

Next, we have::

        #define atomic_read(v)  ((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations. atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

.. warning::

   ``atomic_read()`` and ``atomic_set()`` DO NOT IMPLY BARRIERS!

   Some architectures may choose to use the volatile keyword, barriers, or
   inline assembly to guarantee some degree of immediacy for atomic_read()
   and atomic_set(). This is not uniformly guaranteed, and may change in
   the future, so all users of atomic_t should treat atomic_read() and
   atomic_set() as simple C statements that may be reordered or optimized
   away entirely by the compiler or processor, and explicitly invoke the
   appropriate compiler and/or memory barrier for each use case. Failure
   to do so will result in code that may suddenly break when used with
   different architectures or compiler optimizations, or even changes in
   unrelated code which changes how the compiler optimizes the section
   accessing atomic_t variables.

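To make the warning above concrete, here is a hedged sketch of publishing a
runtime-initialized atomic_t to another CPU. The counter_ready flag and the
helper names are invented for this example; only atomic_set(), atomic_read(),
smp_mb(), READ_ONCE() and WRITE_ONCE() are real interfaces::

        static atomic_t shared_counter;
        static int counter_ready;

        /* Writer, e.g. during late initialization on one CPU. */
        static void publish_counter(int initial)
        {
                atomic_set(&shared_counter, initial);
                smp_mb();                       /* order the set before the flag */
                WRITE_ONCE(counter_ready, 1);
        }

        /* Reader running on another CPU. */
        static int consume_counter(void)
        {
                if (!READ_ONCE(counter_ready))
                        return -1;              /* not published yet */
                smp_mb();                       /* pairs with the writer's barrier */
                return atomic_read(&shared_counter);
        }
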
Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set(). The READ_ONCE()
and WRITE_ONCE() macros should be used to prevent the compiler from using
optimizations that might otherwise optimize accesses out of existence on
the one hand, or that might create unsolicited accesses on the other.

For example, consider the following code::

        while (a > 0)
                do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights transforming this to
the following::

        tmp = a;
        if (tmp > 0)
                for (;;)
                        do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following::

        while (READ_ONCE(a) > 0)
                do_something();

Alternatively, you could place a barrier() call in the loop.

For another example, consider the following code::

        tmp_a = a;
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows::

        tmp_a = a;
        do_something_with(tmp_a);
        tmp_a = a;
        do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload. To prevent the compiler from attacking your
code in this manner, write the following::

        tmp_a = READ_ONCE(a);
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed::

        if (a)
                b = 9;
        else
                b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following::

        b = 42;
        if (a)
                b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero. To prevent
the compiler from doing this, write something like::

        if (a)
                WRITE_ONCE(b, 9);
        else
                WRITE_ONCE(b, 42);

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

.. warning::

   NEITHER ``READ_ONCE()`` NOR ``WRITE_ONCE()`` IMPLIES A BARRIER!

Now, we move onto the atomic operation interfaces typically implemented with
the help of assembly code. ::

        void atomic_add(int i, atomic_t *v);
        void atomic_sub(int i, atomic_t *v);
        void atomic_inc(atomic_t *v);
        void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value. The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers. They need only perform the
atomic_t counter update in an SMP safe manner.

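For instance, a driver that only needs SMP-safe statistics counters can use
these interfaces directly, with no barriers at all. This is a minimal sketch;
the structure and field names are made up for this example::

        struct dev_stats {
                atomic_t rx_packets;
                atomic_t rx_errors;
        };

        static void note_rx(struct dev_stats *stats, bool bad)
        {
                atomic_inc(&stats->rx_packets); /* SMP safe, no barrier implied */
                if (bad)
                        atomic_inc(&stats->rx_errors);
        }
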
Next, we have::

        int atomic_inc_return(atomic_t *v);
        int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that these primitives
include explicit memory barriers that are performed before and after
the operation. It must be done such that all memory operations before
and after the atomic operation calls are strongly ordered with respect
to the atomic operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

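A common use of these fully ordered returning variants is handing out unique
sequence numbers; the implied barriers mean no extra smp_mb() calls are needed
around the update. A hedged sketch, with a made-up counter name::

        static atomic_t next_ticket = ATOMIC_INIT(0);

        static int alloc_ticket(void)
        {
                /* A full barrier before and after the increment is implied. */
                return atomic_inc_return(&next_ticket);
        }
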
Let's move on::

        int atomic_add_return(int i, atomic_t *v);
        int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next::

        int atomic_inc_and_test(atomic_t *v);
        int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter. They return a boolean indicating whether the
resulting counter value was zero or not.

Again, these primitives provide explicit memory barrier semantics around
the atomic operation::

        int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1". This primitive must
provide explicit memory barrier semantics around the operation::

        int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value. A boolean
is returned which indicates whether the resulting counter value is negative.
This primitive must provide explicit memory barrier semantics around
the operation.

Then::

        int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value. It returns the old value that the atomic variable v had
just before the operation.

atomic_xchg must provide explicit memory barriers around the operation. ::

        int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values. Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of \*v are performed through atomic_xxx operations.

atomic_cmpxchg must provide explicit memory barriers around the operation,
although if the comparison fails then no memory ordering guarantees are
required.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

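As an illustration, atomic_cmpxchg() is typically used in a read-modify-write
retry loop when no dedicated primitive exists for the update. The "record a
maximum" helper below is a hypothetical example, not an existing kernel API::

        static void atomic_track_max(atomic_t *max, int value)
        {
                int old = atomic_read(max);

                /* Retry until we either install value or observe a larger one. */
                while (old < value) {
                        int prev = atomic_cmpxchg(max, old, value);

                        if (prev == old)
                                break;          /* we installed value */
                        old = prev;             /* someone else changed it; retry */
                }
        }
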
Finally::

        int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non zero. If v is equal to u then it returns zero. This is done as
an atomic operation.

atomic_add_unless must provide explicit memory barriers around the
operation unless it fails (returns 0).

atomic_inc_not_zero() is equivalent to atomic_add_unless(v, 1, 0).

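atomic_inc_not_zero() is commonly used to take a new reference only while the
object still has at least one: once the count has hit zero, the object may
already be on its way to destruction, so a lookup must fail rather than
resurrect it. A hedged sketch of that pattern, using the struct obj and
refcnt field from the reference counting example later in this document::

        struct obj *obj_get_live(struct obj *obj)
        {
                /* Only succeed while someone else still holds a reference. */
                if (obj && atomic_inc_not_zero(&obj->refcnt))
                        return obj;
                return NULL;
        }
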
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this::

        void smp_mb__before_atomic(void);
        void smp_mb__after_atomic(void);

For example, smp_mb__before_atomic() can be used like so::

        obj->dead = 1;
        smp_mb__before_atomic();
        atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation. In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

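smp_mb__after_atomic() is the mirror image: it orders a non-value-returning
atomic operation before all subsequent memory operations. A hedged sketch,
with made-up field names::

        atomic_inc(&obj->pending);
        smp_mb__after_atomic();
        WRITE_ONCE(obj->have_pending, 1);

Here the barrier guarantees the counter increment is globally visible before
the have_pending flag is.
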
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results. Here is
an example, which follows a pattern occurring frequently in the Linux
kernel. It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object::

        static void obj_list_add(struct obj *obj, struct list_head *head)
        {
                obj->active = 1;
                list_add(&obj->list, head);
        }

        static void obj_list_del(struct obj *obj)
        {
                list_del(&obj->list);
                obj->active = 0;
        }

        static void obj_destroy(struct obj *obj)
        {
                BUG_ON(obj->active);
                kfree(obj);
        }

        struct obj *obj_list_peek(struct list_head *head)
        {
                if (!list_empty(head)) {
                        struct obj *obj;

                        obj = list_entry(head->next, struct obj, list);
                        atomic_inc(&obj->refcnt);
                        return obj;
                }
                return NULL;
        }

        void obj_poke(void)
        {
                struct obj *obj;

                spin_lock(&global_list_lock);
                obj = obj_list_peek(&global_list);
                spin_unlock(&global_list_lock);

                if (obj) {
                        obj->ops->poke(obj);
                        if (atomic_dec_and_test(&obj->refcnt))
                                obj_destroy(obj);
                }
        }

        void obj_timeout(struct obj *obj)
        {
                spin_lock(&global_list_lock);
                obj_list_del(obj);
                spin_unlock(&global_list_lock);

                if (atomic_dec_and_test(&obj->refcnt))
                        obj_destroy(obj);
        }

.. note::

   This is a simplification of the ARP queue management in the generic
   neighbour discovery code of the networking stack. Olaf Kirch found a bug
   with respect to memory barriers in kfree_skb() that exposed the atomic_t
   memory barrier requirements quite clearly.

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy(). The error
sequence looks like this::

        cpu 0                           cpu 1
        obj_poke()                      obj_timeout()
        obj = obj_list_peek();
        ... gains ref to obj, refcnt=2
                                        obj_list_del(obj);
                                        obj->active = 0 ...
                                        ... visibility delayed ...
        atomic_dec_and_test()
        ... refcnt drops to 1 ...
                                        atomic_dec_and_test()
                                        ... refcnt drops to 0 ...
                                                obj_destroy()
                                        BUG() triggers since obj->active
                                        still seen as one
        obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen. Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type. This was because it used 8 bits
as a spinlock for SMP safety. Sparc32 lacked a "compare and swap"
type instruction. However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized. Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation. Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

Atomic Bitmask
==============

We will now cover the atomic bitmask operations. You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size. The endianness of the bits within each "unsigned long" is the
native endianness of the cpu. ::

        void set_bit(unsigned long nr, volatile unsigned long *addr);
        void clear_bit(unsigned long nr, volatile unsigned long *addr);
        void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces. ::

        int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1". Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations. Routines such as test_and_set_ti_thread_flag() chop
the return value into an int. There are other places where things
like this occur as well.

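The hazard is easy to see with a concrete bit number. If an implementation
returned the raw "old_val & mask" and a caller stored it in an int, a bit
above position 31 would be silently lost on a 64-bit machine; this is only an
illustration, not real interface code::

        unsigned long old_val = 1UL << 40;      /* bit 40 was already set */
        unsigned long mask    = 1UL << 40;

        int buggy_ret = old_val & mask;         /* truncates to 0: looks "not set"! */
        int good_ret  = (old_val & mask) != 0;  /* proper boolean: 1 */
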
These routines, like the atomic_t counter operations returning values,
must provide explicit memory barrier semantics around their execution.
All memory operations before the atomic bit operation call must be
made visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible. For example::

        obj->dead = 1;
        if (test_and_set_bit(0, &obj->flags))
                /* ... */;
        obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible. Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation::

        int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around {set,clear}_bit() (which do
not return a value, and thus do not need to provide memory barrier
semantics), two interfaces are provided::

        void smp_mb__before_atomic(void);
        void smp_mb__after_atomic(void);

They are used as follows, and are akin to their atomic_t operation
brothers::

        /* All memory operations before this call will
         * be globally visible before the clear_bit().
         */
        smp_mb__before_atomic();
        clear_bit( ... );

        /* The clear_bit() will be visible before all
         * subsequent memory operations.
         */
        smp_mb__after_atomic();

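One common pairing is clearing a "busy" bit and then waking anyone waiting on
it: the bit must be visibly clear before the wakeup is attempted. A hedged
sketch of that pattern, where the flags word and the MY_OBJ_BUSY bit number
are invented for this example while clear_bit(), smp_mb__after_atomic() and
wake_up_bit() are real interfaces::

        clear_bit(MY_OBJ_BUSY, &obj->flags);
        smp_mb__after_atomic();                 /* clear visible before the wakeup */
        wake_up_bit(&obj->flags, MY_OBJ_BUSY);
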
There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks). These operate in the same way as their non-_lock/unlock
postfixed variants, except that they provide acquire/release semantics,
respectively. This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers. ::

        int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
        void clear_bit_unlock(unsigned long nr, unsigned long *addr);
        void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics. This can be useful if the lock itself is protecting
the other bits in the word.

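As an illustration, the lock bitops can implement a simple "claim ownership"
flag without any additional barriers: acquire semantics on the test-and-set
and release semantics on the clear are enough. A hedged sketch with invented
names::

        /* Returns true if we became the owner. */
        static bool obj_try_claim(struct obj *obj)
        {
                return !test_and_set_bit_lock(OBJ_CLAIMED, &obj->flags);
        }

        static void obj_release(struct obj *obj)
        {
                /* All of the owner's prior stores are visible before the bit clears. */
                clear_bit_unlock(OBJ_CLAIMED, &obj->flags);
        }
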
Finally, there are non-atomic versions of the bitmask operations
provided. They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name. ::

        void __set_bit(unsigned long nr, volatile unsigned long *addr);
        void __clear_bit(unsigned long nr, volatile unsigned long *addr);
        void __change_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

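For example, when a spinlock already serializes all updates to a bitmap, the
cheaper non-atomic variants are sufficient. A hedged fragment with made-up
names::

        spin_lock(&table->lock);
        if (!__test_and_set_bit(slot, table->busy_map))
                table->users++;
        spin_unlock(&table->lock);
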
The routines xchg() and cmpxchg() must provide the same exact
memory-barrier semantics as the atomic and bit operations returning
values.

.. note::

   If someone wants to use xchg(), cmpxchg() and their variants,
   linux/atomic.h should be included rather than asm/cmpxchg.h, unless the
   code is in arch/* and can take care of itself.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock(). There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler. ::

        int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero. If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

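Callers normally reach this through the atomic_dec_and_lock() wrapper. The
classic use is dropping a reference to an object that lives on a list
protected by a lock: the lock is taken only for the final put, so the list
removal and the freeing happen atomically with respect to the count reaching
zero. A hedged sketch with invented object and lock names::

        void obj_put(struct obj *obj)
        {
                if (atomic_dec_and_lock(&obj->refcnt, &obj_list_lock)) {
                        /* Last reference: nobody can find the object anymore. */
                        list_del(&obj->list);
                        spin_unlock(&obj_list_lock);
                        kfree(obj);
                }
        }
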
We can demonstrate this operation more clearly if we define
an abstract atomic operation::

        long cas(long *mem, long old, long new);

"cas" stands for "compare and swap". It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like::

        void example_atomic_inc(long *counter)
        {
                long old, new, ret;

                while (1) {
                        old = *counter;
                        new = old + 1;

                        ret = cas(counter, old, new);
                        if (ret == old)
                                break;
                }
        }

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock()::

        int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
        {
                long old, new, ret;
                int went_to_zero;

                went_to_zero = 0;
                while (1) {
                        old = atomic_read(atomic);
                        new = old - 1;
                        if (new == 0) {
                                went_to_zero = 1;
                                spin_lock(lock);
                        }
                        ret = cas(atomic, old, new);
                        if (ret == old)
                                break;
                        if (went_to_zero) {
                                spin_unlock(lock);
                                went_to_zero = 0;
                        }
                }

                return went_to_zero;
        }

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

.. note::

   This also means that for the case where the counter is not
   dropping to zero, there are no memory ordering requirements.