   1.. _kernel_hacking_lock:
   2
   3===========================
   4Unreliable Guide To Locking
   5===========================
   6
   7:Author: Rusty Russell
   8
   9Introduction
  10============
  11
  12Welcome, to Rusty's Remarkably Unreliable Guide to Kernel Locking
  13issues. This document describes the locking systems in the Linux Kernel
  14in 2.6.
  15
  16With the wide availability of HyperThreading, and preemption in the
  17Linux Kernel, everyone hacking on the kernel needs to know the
  18fundamentals of concurrency and locking for SMP.
  19
  20The Problem With Concurrency
  21============================
  22
  23(Skip this if you know what a Race Condition is).
  24
  25In a normal program, you can increment a counter like so:
  26
  27::
  28
  29          very_important_count++;
  30
  31
  32This is what you would expect to happen:
  33
  34
  35.. table:: Expected Results
  36
  37  +------------------------------------+------------------------------------+
  38  | Instance 1                         | Instance 2                         |
  39  +====================================+====================================+
  40  | read very_important_count (5)      |                                    |
  41  +------------------------------------+------------------------------------+
  42  | add 1 (6)                          |                                    |
  43  +------------------------------------+------------------------------------+
  44  | write very_important_count (6)     |                                    |
  45  +------------------------------------+------------------------------------+
  46  |                                    | read very_important_count (6)      |
  47  +------------------------------------+------------------------------------+
  48  |                                    | add 1 (7)                          |
  49  +------------------------------------+------------------------------------+
  50  |                                    | write very_important_count (7)     |
  51  +------------------------------------+------------------------------------+
  52
  53This is what might happen:
  54
  55.. table:: Possible Results
  56
  57  +------------------------------------+------------------------------------+
  58  | Instance 1                         | Instance 2                         |
  59  +====================================+====================================+
  60  | read very_important_count (5)      |                                    |
  61  +------------------------------------+------------------------------------+
  62  |                                    | read very_important_count (5)      |
  63  +------------------------------------+------------------------------------+
  64  | add 1 (6)                          |                                    |
  65  +------------------------------------+------------------------------------+
  66  |                                    | add 1 (6)                          |
  67  +------------------------------------+------------------------------------+
  68  | write very_important_count (6)     |                                    |
  69  +------------------------------------+------------------------------------+
  70  |                                    | write very_important_count (6)     |
  71  +------------------------------------+------------------------------------+
  72
  73
  74Race Conditions and Critical Regions
  75------------------------------------
  76
  77This overlap, where the result depends on the relative timing of
  78multiple tasks, is called a race condition. The piece of code containing
  79the concurrency issue is called a critical region. Especially since
  80Linux started running on SMP machines, race conditions became one of the
  81major issues in kernel design and implementation.
  82
  83Preemption can have the same effect, even if there is only one CPU: by
  84preempting one task during the critical region, we have exactly the same
  85race condition. In this case the thread which preempts might run the
  86critical region itself.
  87
  88The solution is to recognize when these simultaneous accesses occur, and
  89use locks to make sure that only one instance can enter the critical
  90region at any time. There are many friendly primitives in the Linux
  91kernel to help you do this. And then there are the unfriendly
  92primitives, but I'll pretend they don't exist.
  93
  94Locking in the Linux Kernel
  95===========================
  96
  97If I could give you one piece of advice on locking: **keep it simple**.
  98
  99Be reluctant to introduce new locks.
 100
 101Two Main Types of Kernel Locks: Spinlocks and Mutexes
 102-----------------------------------------------------
 103
 104There are two main types of kernel locks. The fundamental type is the
 105spinlock (``include/asm/spinlock.h``), which is a very simple
 106single-holder lock: if you can't get the spinlock, you keep trying
 107(spinning) until you can. Spinlocks are very small and fast, and can be
 108used anywhere.
 109
 110The second type is a mutex (``include/linux/mutex.h``): it is like a
 111spinlock, but you may block holding a mutex. If you can't lock a mutex,
 112your task will suspend itself, and be woken up when the mutex is
 113released. This means the CPU can do something else while you are
 114waiting. There are many cases when you simply can't sleep (see
 115`What Functions Are Safe To Call From Interrupts?`_),
 116and so have to use a spinlock instead.
 117
 118Neither type of lock is recursive: see
 119`Deadlock: Simple and Advanced`_.
 120
 121Locks and Uniprocessor Kernels
 122------------------------------
 123
 124For kernels compiled without ``CONFIG_SMP``, and without
 125``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
 126design decision: when no-one else can run at the same time, there is no
 127reason to have a lock.
 128
 129If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
 130is set, then spinlocks simply disable preemption, which is sufficient to
 131prevent any races. For most purposes, we can think of preemption as
 132equivalent to SMP, and not worry about it separately.
 133
 134You should always test your locking code with ``CONFIG_SMP`` and
 135``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
 136because it will still catch some kinds of locking bugs.
 137
 138Mutexes still exist, because they are required for synchronization
 139between user contexts, as we will see below.
 140
 141Locking Only In User Context
 142----------------------------
 143
 144If you have a data structure which is only ever accessed from user
 145context, then you can use a simple mutex (``include/linux/mutex.h``) to
 146protect it. This is the most trivial case: you initialize the mutex.
 147Then you can call mutex_lock_interruptible() to grab the
 148mutex, and mutex_unlock() to release it. There is also a
 149mutex_lock(), which should be avoided, because it will
 150not return early if a signal is received.
 151
 152Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
 153setsockopt() and getsockopt() calls, with
 154nf_register_sockopt(). Registration and de-registration
 155are only done on module load and unload (and boot time, where there is
 156no concurrency), and the list of registrations is only consulted for an
 157unknown setsockopt() or getsockopt() system
 158call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
 159since the setsockopt and getsockopt calls may well sleep.
 160
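A minimal sketch of the same pattern, with purely hypothetical names (this is
not the actual ``nf_sockopt.c`` code)::

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/errno.h>

    struct widget {
            struct list_head list;
            int id;
    };

    static DEFINE_MUTEX(registration_mutex);
    static LIST_HEAD(registrations);

    /* Called only from user context, so it is fine to sleep here. */
    int register_widget(struct widget *w)
    {
            if (mutex_lock_interruptible(&registration_mutex))
                    return -EINTR;          /* interrupted by a signal */
            list_add(&w->list, &registrations);
            mutex_unlock(&registration_mutex);
            return 0;
    }
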
 161Locking Between User Context and Softirqs
 162-----------------------------------------
 163
 164If a softirq shares data with user context, you have two problems.
 165Firstly, the current user context can be interrupted by a softirq, and
 166secondly, the critical region could be entered from another CPU. This is
 167where spin_lock_bh() (``include/linux/spinlock.h``) is
 168used. It disables softirqs on that CPU, then grabs the lock.
 169spin_unlock_bh() does the reverse. (The '_bh' suffix is
 170a historical reference to "Bottom Halves", the old name for software
 171interrupts. It should really be called spin_lock_softirq() in a
 172perfect world).
 173
 174Note that you can also use spin_lock_irq() or
 175spin_lock_irqsave() here, which stop hardware interrupts
 176as well: see `Hard IRQ Context`_.
 177
 178This works perfectly for UP as well: the spin lock vanishes, and this
 179macro simply becomes local_bh_disable()
 180(``include/linux/interrupt.h``), which protects you from the softirq
 181being run.
 182
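A minimal sketch of this arrangement, with hypothetical names: the
user-context side uses spin_lock_bh(), while the softirq side only needs
plain spin_lock(), since user context cannot interrupt it on its own CPU::

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct work_item {
            struct list_head list;
            /* payload ... */
    };

    void handle_item(struct work_item *item);   /* hypothetical consumer */

    static DEFINE_SPINLOCK(pending_lock);
    static LIST_HEAD(pending);

    /* User context (e.g. a syscall): block local softirqs while holding
     * the lock, so the softirq cannot interrupt us and spin forever. */
    void queue_work_item(struct work_item *item)
    {
            spin_lock_bh(&pending_lock);
            list_add_tail(&item->list, &pending);
            spin_unlock_bh(&pending_lock);
    }

    /* Softirq side: only other CPUs can contend, plain spin_lock() will do. */
    void process_one_pending(void)
    {
            struct work_item *item = NULL;

            spin_lock(&pending_lock);
            if (!list_empty(&pending)) {
                    item = list_first_entry(&pending, struct work_item, list);
                    list_del(&item->list);
            }
            spin_unlock(&pending_lock);

            if (item)
                    handle_item(item);
    }
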
 183Locking Between User Context and Tasklets
 184-----------------------------------------
 185
 186This is exactly the same as above, because tasklets are actually run
 187from a softirq.
 188
 189Locking Between User Context and Timers
 190---------------------------------------
 191
 192This, too, is exactly the same as above, because timers are actually run
 193from a softirq. From a locking point of view, tasklets and timers are
 194identical.
 195
 196Locking Between Tasklets/Timers
 197-------------------------------
 198
 199Sometimes a tasklet or timer might want to share data with another
 200tasklet or timer.
 201
 202The Same Tasklet/Timer
 203~~~~~~~~~~~~~~~~~~~~~~
 204
 205Since a tasklet is never run on two CPUs at once, you don't need to
 206worry about your tasklet being reentrant (running twice at once), even
 207on SMP.
 208
 209Different Tasklets/Timers
 210~~~~~~~~~~~~~~~~~~~~~~~~~
 211
 212If another tasklet/timer wants to share data with your tasklet or
 213timer, you will both need to use spin_lock() and
 214spin_unlock() calls. spin_lock_bh() is
 215unnecessary here, as you are already in a tasklet, and none will be run
 216on the same CPU.
 217
 218Locking Between Softirqs
 219------------------------
 220
 221Often a softirq might want to share data with itself or a tasklet/timer.
 222
 223The Same Softirq
 224~~~~~~~~~~~~~~~~
 225
 226The same softirq can run on the other CPUs: you can use a per-CPU array
 227(see `Per-CPU Data`_) for better performance. If you're
 228going so far as to use a softirq, you probably care about scalable
 229performance enough to justify the extra complexity.
 230
 231You'll need to use spin_lock() and
 232spin_unlock() for shared data.
 233
 234Different Softirqs
 235~~~~~~~~~~~~~~~~~~
 236
 237You'll need to use spin_lock() and
 238spin_unlock() for shared data, whether it be a timer,
 239tasklet, different softirq or the same or another softirq: any of them
 240could be running on a different CPU.
 241
 242Hard IRQ Context
 243================
 244
 245Hardware interrupts usually communicate with a tasklet or softirq.
 246Frequently this involves putting work in a queue, which the softirq will
 247take out.
 248
 249Locking Between Hard IRQ and Softirqs/Tasklets
 250----------------------------------------------
 251
 252If a hardware irq handler shares data with a softirq, you have two
 253concerns. Firstly, the softirq processing can be interrupted by a
 254hardware interrupt, and secondly, the critical region could be entered
 255by a hardware interrupt on another CPU. This is where
 256spin_lock_irq() is used. It is defined to disable
 257interrupts on that cpu, then grab the lock.
 258spin_unlock_irq() does the reverse.
 259
 260The irq handler does not need to use spin_lock_irq(), because
 261the softirq cannot run while the irq handler is running: it can use
 262spin_lock(), which is slightly faster. The only exception
 263would be if a different hardware irq handler uses the same lock:
 264spin_lock_irq() will stop that from interrupting us.
 265
 266This works perfectly for UP as well: the spin lock vanishes, and this
 267macro simply becomes local_irq_disable()
 268(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
 269being run.
 270
 271spin_lock_irqsave() (``include/linux/spinlock.h``) is a
 272variant which saves whether interrupts were on or off in a flags word,
 273which is passed to spin_unlock_irqrestore(). This means
 274that the same code can be used inside a hard irq handler (where
 275interrupts are already off) and in softirqs (where the irq disabling is
 276required).
 277
 278Note that softirqs (and hence tasklets and timers) are run on return
 279from hardware interrupts, so spin_lock_irq() also stops
 280these. In that sense, spin_lock_irqsave() is the most
 281general and powerful locking function.
 282
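A sketch of this arrangement with hypothetical names: the hard irq handler
gets away with plain spin_lock(), while everyone else keeps the hard irq out
with spin_lock_irqsave()::

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long rx_packets;

    /* Hard IRQ handler: softirqs cannot run here, so only other CPUs can
     * contend for the lock and plain spin_lock() is enough. */
    static irqreturn_t card_interrupt(int irq, void *dev_id)
    {
            spin_lock(&stats_lock);
            rx_packets++;
            spin_unlock(&stats_lock);
            return IRQ_HANDLED;
    }

    /* Softirq (or any other) context: keep the hard IRQ handler from
     * interrupting us while we hold the lock. */
    static unsigned long read_and_reset_rx_packets(void)
    {
            unsigned long flags, count;

            spin_lock_irqsave(&stats_lock, flags);
            count = rx_packets;
            rx_packets = 0;
            spin_unlock_irqrestore(&stats_lock, flags);
            return count;
    }
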
 283Locking Between Two Hard IRQ Handlers
 284-------------------------------------
 285
 286It is rare to have to share data between two IRQ handlers, but if you
 287do, spin_lock_irqsave() should be used: it is
 288architecture-specific whether all interrupts are disabled inside irq
 289handlers themselves.
 290
 291Cheat Sheet For Locking
 292=======================
 293
 294Pete Zaitcev gives the following summary:
 295
 296-  If you are in a process context (any syscall) and want to lock other
 297   processes out, use a mutex. You can take a mutex and sleep
 298   (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).
 299
 300-  Otherwise (== data can be touched in an interrupt), use
 301   spin_lock_irqsave() and
 302   spin_unlock_irqrestore().
 303
 304-  Avoid holding spinlock for more than 5 lines of code and across any
 305   function call (except accessors like readb()).
 306
 307Table of Minimum Requirements
 308-----------------------------
 309
 310The following table lists the **minimum** locking requirements between
 311various contexts. In some cases, the same context can only be running on
 312one CPU at a time, so no locking is required for that context (eg. a
 313particular thread can only run on one CPU at a time, but if it needs to
 314share data with another thread, locking is required).
 315
 316Remember the advice above: you can always use
 317spin_lock_irqsave(), which is a superset of all other
 318spinlock primitives.
 319
 320============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
 321.              IRQ Handler A IRQ Handler B Softirq A Softirq B Tasklet A Tasklet B Timer A Timer B User Context A User Context B
 322============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
 323IRQ Handler A  None
 324IRQ Handler B  SLIS          None
 325Softirq A      SLI           SLI           SL
 326Softirq B      SLI           SLI           SL        SL
 327Tasklet A      SLI           SLI           SL        SL        None
 328Tasklet B      SLI           SLI           SL        SL        SL        None
 329Timer A        SLI           SLI           SL        SL        SL        SL        None
 330Timer B        SLI           SLI           SL        SL        SL        SL        SL      None
 331User Context A SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    None
 332User Context B SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    MLI            None
 333============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
 334
 335Table: Table of Locking Requirements
 336
 337+--------+----------------------------+
 338| SLIS   | spin_lock_irqsave          |
 339+--------+----------------------------+
 340| SLI    | spin_lock_irq              |
 341+--------+----------------------------+
 342| SL     | spin_lock                  |
 343+--------+----------------------------+
 344| SLBH   | spin_lock_bh               |
 345+--------+----------------------------+
 346| MLI    | mutex_lock_interruptible   |
 347+--------+----------------------------+
 348
 349Table: Legend for Locking Requirements Table
 350
 351The trylock Functions
 352=====================
 353
 354There are functions that try to acquire a lock only once and immediately
 355return a value indicating whether or not they acquired the lock.
 356They can be used if you need no access to the data protected with the
 357lock when some other thread is holding the lock. You should acquire the
 358lock later if you then need access to the data protected with the lock.
 359
 360spin_trylock() does not spin but returns non-zero if it
 361acquires the spinlock on the first try or 0 if not. This function can be
 362used in all contexts like spin_lock(): you must have
 363disabled the contexts that might interrupt you and try to acquire the
 364spinlock themselves.
 365
 366mutex_trylock() does not suspend your task but returns
 367non-zero if it could lock the mutex on the first try or 0 if not. This
 368function cannot be safely used in hardware or software interrupt
 369contexts despite not sleeping.
 370
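As an illustration (a sketch, with hypothetical names), a statistics dump
that would rather skip a round than wait for a busy lock might look like
this::

    static DEFINE_SPINLOCK(stats_lock);

    void dump_stats(void);                  /* hypothetical, must not sleep */

    void maybe_dump_stats(void)
    {
            if (!spin_trylock(&stats_lock))
                    return;                 /* someone else holds it: skip this round */

            dump_stats();
            spin_unlock(&stats_lock);
    }
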
 371Common Examples
 372===============
 373
 374Let's step through a simple example: a cache of number to name mappings.
 375The cache keeps a count of how often each of the objects is used, and
 376when it gets full, throws out the least used one.
 377
 378All In User Context
 379-------------------
 380
 381For our first example, we assume that all operations are in user context
 382(ie. from system calls), so we can sleep. This means we can use a mutex
 383to protect the cache and all the objects within it. Here's the code::
 384
 385    #include <linux/list.h>
 386    #include <linux/slab.h>
 387    #include <linux/string.h>
 388    #include <linux/mutex.h>
 389    #include <asm/errno.h>
 390
 391    struct object
 392    {
 393            struct list_head list;
 394            int id;
 395            char name[32];
 396            int popularity;
 397    };
 398
 399    /* Protects the cache, cache_num, and the objects within it */
 400    static DEFINE_MUTEX(cache_lock);
 401    static LIST_HEAD(cache);
 402    static unsigned int cache_num = 0;
 403    #define MAX_CACHE_SIZE 10
 404
 405    /* Must be holding cache_lock */
 406    static struct object *__cache_find(int id)
 407    {
 408            struct object *i;
 409
 410            list_for_each_entry(i, &cache, list)
 411                    if (i->id == id) {
 412                            i->popularity++;
 413                            return i;
 414                    }
 415            return NULL;
 416    }
 417
 418    /* Must be holding cache_lock */
 419    static void __cache_delete(struct object *obj)
 420    {
 421            BUG_ON(!obj);
 422            list_del(&obj->list);
 423            kfree(obj);
 424            cache_num--;
 425    }
 426
 427    /* Must be holding cache_lock */
 428    static void __cache_add(struct object *obj)
 429    {
 430            list_add(&obj->list, &cache);
 431            if (++cache_num > MAX_CACHE_SIZE) {
 432                    struct object *i, *outcast = NULL;
 433                    list_for_each_entry(i, &cache, list) {
 434                            if (!outcast || i->popularity < outcast->popularity)
 435                                    outcast = i;
 436                    }
 437                    __cache_delete(outcast);
 438            }
 439    }
 440
 441    int cache_add(int id, const char *name)
 442    {
 443            struct object *obj;
 444
 445            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
 446                    return -ENOMEM;
 447
 448            strscpy(obj->name, name, sizeof(obj->name));
 449            obj->id = id;
 450            obj->popularity = 0;
 451
 452            mutex_lock(&cache_lock);
 453            __cache_add(obj);
 454            mutex_unlock(&cache_lock);
 455            return 0;
 456    }
 457
 458    void cache_delete(int id)
 459    {
 460            mutex_lock(&cache_lock);
 461            __cache_delete(__cache_find(id));
 462            mutex_unlock(&cache_lock);
 463    }
 464
 465    int cache_find(int id, char *name)
 466    {
 467            struct object *obj;
 468            int ret = -ENOENT;
 469
 470            mutex_lock(&cache_lock);
 471            obj = __cache_find(id);
 472            if (obj) {
 473                    ret = 0;
 474                    strcpy(name, obj->name);
 475            }
 476            mutex_unlock(&cache_lock);
 477            return ret;
 478    }
 479
 480Note that we always make sure we have the cache_lock when we add,
 481delete, or look up the cache: both the cache infrastructure itself and
 482the contents of the objects are protected by the lock. In this case it's
 483easy, since we copy the data for the user, and never let them access the
 484objects directly.
 485
 486There is a slight (and common) optimization here: in
 487cache_add() we set up the fields of the object before
 488grabbing the lock. This is safe, as no-one else can access it until we
 489put it in cache.
 490
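A possible caller of this cache, purely for illustration (it is not part of
the example file itself)::

    /* User context, e.g. somewhere in a syscall handler. */
    int example_use(void)
    {
            char name[32];
            int ret;

            ret = cache_add(17, "seventeen");
            if (ret)
                    return ret;

            /* Copies the name out while holding cache_lock. */
            ret = cache_find(17, name);
            if (ret == 0)
                    printk(KERN_INFO "object 17 is called %s\n", name);

            cache_delete(17);
            return ret;
    }
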
 491Accessing From Interrupt Context
 492--------------------------------
 493
 494Now consider the case where cache_find() can be called
 495from interrupt context: either a hardware interrupt or a softirq. An
 496example would be a timer which deletes objects from the cache.
 497
 498The change is shown below, in standard patch format: the ``-`` are lines
 499which are taken away, and the ``+`` are lines which are added.
 500
 501::
 502
 503    --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
 504    +++ cache.c.interrupt   2003-12-09 14:07:49.000000000 +1100
 505    @@ -12,7 +12,7 @@
 506             int popularity;
 507     };
 508
 509    -static DEFINE_MUTEX(cache_lock);
 510    +static DEFINE_SPINLOCK(cache_lock);
 511     static LIST_HEAD(cache);
 512     static unsigned int cache_num = 0;
 513     #define MAX_CACHE_SIZE 10
 514    @@ -55,6 +55,7 @@
 515     int cache_add(int id, const char *name)
 516     {
 517             struct object *obj;
 518    +        unsigned long flags;
 519
 520             if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
 521                     return -ENOMEM;
 522    @@ -63,30 +64,33 @@
 523             obj->id = id;
 524             obj->popularity = 0;
 525
 526    -        mutex_lock(&cache_lock);
 527    +        spin_lock_irqsave(&cache_lock, flags);
 528             __cache_add(obj);
 529    -        mutex_unlock(&cache_lock);
 530    +        spin_unlock_irqrestore(&cache_lock, flags);
 531             return 0;
 532     }
 533
 534     void cache_delete(int id)
 535     {
 536    -        mutex_lock(&cache_lock);
 537    +        unsigned long flags;
 538    +
 539    +        spin_lock_irqsave(&cache_lock, flags);
 540             __cache_delete(__cache_find(id));
 541    -        mutex_unlock(&cache_lock);
 542    +        spin_unlock_irqrestore(&cache_lock, flags);
 543     }
 544
 545     int cache_find(int id, char *name)
 546     {
 547             struct object *obj;
 548             int ret = -ENOENT;
 549    +        unsigned long flags;
 550
 551    -        mutex_lock(&cache_lock);
 552    +        spin_lock_irqsave(&cache_lock, flags);
 553             obj = __cache_find(id);
 554             if (obj) {
 555                     ret = 0;
 556                     strcpy(name, obj->name);
 557             }
 558    -        mutex_unlock(&cache_lock);
 559    +        spin_unlock_irqrestore(&cache_lock, flags);
 560             return ret;
 561     }
 562
 563Note that the spin_lock_irqsave() will turn off
 564interrupts if they are on, otherwise does nothing (if we are already in
 565an interrupt handler), hence these functions are safe to call from any
 566context.
 567
 568Unfortunately, cache_add() calls kmalloc()
 569with the ``GFP_KERNEL`` flag, which is only legal in user context. I
 570have assumed that cache_add() is still only called in
 571user context, otherwise this should become a parameter to
 572cache_add().
 573
 574Exposing Objects Outside This File
 575----------------------------------
 576
 577If our objects contained more information, it might not be sufficient to
 578copy the information in and out: other parts of the code might want to
 579keep pointers to these objects, for example, rather than looking up the
 580id every time. This produces two problems.
 581
 582The first problem is that we use the ``cache_lock`` to protect objects:
 583we'd need to make this non-static so the rest of the code can use it.
 584This makes locking trickier, as it is no longer all in one place.
 585
 586The second problem is the lifetime problem: if another structure keeps a
 587pointer to an object, it presumably expects that pointer to remain
 588valid. Unfortunately, this is only guaranteed while you hold the lock,
 589otherwise someone might call cache_delete() and even
 590worse, add another object, re-using the same address.
 591
 592As there is only one lock, you can't hold it forever: no-one else would
 593get any work done.
 594
 595The solution to this problem is to use a reference count: everyone who
 596has a pointer to the object increases it when they first get the object,
 597and drops the reference count when they're finished with it. Whoever
 598drops it to zero knows it is unused, and can actually delete it.
 599
 600Here is the code::
 601
 602    --- cache.c.interrupt   2003-12-09 14:25:43.000000000 +1100
 603    +++ cache.c.refcnt  2003-12-09 14:33:05.000000000 +1100
 604    @@ -7,6 +7,7 @@
 605     struct object
 606     {
 607             struct list_head list;
 608    +        unsigned int refcnt;
 609             int id;
 610             char name[32];
 611             int popularity;
 612    @@ -17,6 +18,35 @@
 613     static unsigned int cache_num = 0;
 614     #define MAX_CACHE_SIZE 10
 615
 616    +static void __object_put(struct object *obj)
 617    +{
 618    +        if (--obj->refcnt == 0)
 619    +                kfree(obj);
 620    +}
 621    +
 622    +static void __object_get(struct object *obj)
 623    +{
 624    +        obj->refcnt++;
 625    +}
 626    +
 627    +void object_put(struct object *obj)
 628    +{
 629    +        unsigned long flags;
 630    +
 631    +        spin_lock_irqsave(&cache_lock, flags);
 632    +        __object_put(obj);
 633    +        spin_unlock_irqrestore(&cache_lock, flags);
 634    +}
 635    +
 636    +void object_get(struct object *obj)
 637    +{
 638    +        unsigned long flags;
 639    +
 640    +        spin_lock_irqsave(&cache_lock, flags);
 641    +        __object_get(obj);
 642    +        spin_unlock_irqrestore(&cache_lock, flags);
 643    +}
 644    +
 645     /* Must be holding cache_lock */
 646     static struct object *__cache_find(int id)
 647     {
 648    @@ -35,6 +65,7 @@
 649     {
 650             BUG_ON(!obj);
 651             list_del(&obj->list);
 652    +        __object_put(obj);
 653             cache_num--;
 654     }
 655
 656    @@ -63,6 +94,7 @@
 657             strscpy(obj->name, name, sizeof(obj->name));
 658             obj->id = id;
 659             obj->popularity = 0;
 660    +        obj->refcnt = 1; /* The cache holds a reference */
 661
 662             spin_lock_irqsave(&cache_lock, flags);
 663             __cache_add(obj);
 664    @@ -79,18 +111,15 @@
 665             spin_unlock_irqrestore(&cache_lock, flags);
 666     }
 667
 668    -int cache_find(int id, char *name)
 669    +struct object *cache_find(int id)
 670     {
 671             struct object *obj;
 672    -        int ret = -ENOENT;
 673             unsigned long flags;
 674
 675             spin_lock_irqsave(&cache_lock, flags);
 676             obj = __cache_find(id);
 677    -        if (obj) {
 678    -                ret = 0;
 679    -                strcpy(name, obj->name);
 680    -        }
 681    +        if (obj)
 682    +                __object_get(obj);
 683             spin_unlock_irqrestore(&cache_lock, flags);
 684    -        return ret;
 685    +        return obj;
 686     }
 687
 688We encapsulate the reference counting in the standard 'get' and 'put'
 689functions. Now we can return the object itself from
 690cache_find(), which has the advantage that the user can
 691now sleep holding the object (eg. to copy_to_user() the
 692name to userspace).
 693
 694The other point to note is that I said a reference should be held for
 695every pointer to the object: thus the reference count is 1 when first
 696inserted into the cache. In some versions the framework does not hold a
 697reference count, but they are more complicated.
 698
 699Using Atomic Operations For The Reference Count
 700~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 701
 702In practice, :c:type:`atomic_t` would usually be used for refcnt. There are a
 703number of atomic operations defined in ``include/asm/atomic.h``: these
 704are guaranteed to be seen atomically from all CPUs in the system, so no
 705lock is required. In this case, it is simpler than using spinlocks,
 706although for anything non-trivial using spinlocks is clearer. The
 707atomic_inc() and atomic_dec_and_test()
 708are used instead of the standard increment and decrement operators, and
 709the lock is no longer used to protect the reference count itself.
 710
 711::
 712
 713    --- cache.c.refcnt  2003-12-09 15:00:35.000000000 +1100
 714    +++ cache.c.refcnt-atomic   2003-12-11 15:49:42.000000000 +1100
 715    @@ -7,7 +7,7 @@
 716     struct object
 717     {
 718             struct list_head list;
 719    -        unsigned int refcnt;
 720    +        atomic_t refcnt;
 721             int id;
 722             char name[32];
 723             int popularity;
 724    @@ -18,33 +18,15 @@
 725     static unsigned int cache_num = 0;
 726     #define MAX_CACHE_SIZE 10
 727
 728    -static void __object_put(struct object *obj)
 729    -{
 730    -        if (--obj->refcnt == 0)
 731    -                kfree(obj);
 732    -}
 733    -
 734    -static void __object_get(struct object *obj)
 735    -{
 736    -        obj->refcnt++;
 737    -}
 738    -
 739     void object_put(struct object *obj)
 740     {
 741    -        unsigned long flags;
 742    -
 743    -        spin_lock_irqsave(&cache_lock, flags);
 744    -        __object_put(obj);
 745    -        spin_unlock_irqrestore(&cache_lock, flags);
 746    +        if (atomic_dec_and_test(&obj->refcnt))
 747    +                kfree(obj);
 748     }
 749
 750     void object_get(struct object *obj)
 751     {
 752    -        unsigned long flags;
 753    -
 754    -        spin_lock_irqsave(&cache_lock, flags);
 755    -        __object_get(obj);
 756    -        spin_unlock_irqrestore(&cache_lock, flags);
 757    +        atomic_inc(&obj->refcnt);
 758     }
 759
 760     /* Must be holding cache_lock */
 761    @@ -65,7 +47,7 @@
 762     {
 763             BUG_ON(!obj);
 764             list_del(&obj->list);
 765    -        __object_put(obj);
 766    +        object_put(obj);
 767             cache_num--;
 768     }
 769
 770    @@ -94,7 +76,7 @@
 771             strscpy(obj->name, name, sizeof(obj->name));
 772             obj->id = id;
 773             obj->popularity = 0;
 774    -        obj->refcnt = 1; /* The cache holds a reference */
 775    +        atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
 776
 777             spin_lock_irqsave(&cache_lock, flags);
 778             __cache_add(obj);
 779    @@ -119,7 +101,7 @@
 780             spin_lock_irqsave(&cache_lock, flags);
 781             obj = __cache_find(id);
 782             if (obj)
 783    -                __object_get(obj);
 784    +                object_get(obj);
 785             spin_unlock_irqrestore(&cache_lock, flags);
 786             return obj;
 787     }
 788
 789Protecting The Objects Themselves
 790---------------------------------
 791
 792In these examples, we assumed that the objects (except the reference
 793counts) never changed once they are created. If we wanted to allow the
 794name to change, there are three possibilities:
 795
 796-  You can make ``cache_lock`` non-static, and tell people to grab that
 797   lock before changing the name in any object.
 798
 799-  You can provide a cache_obj_rename() which grabs this
 800   lock and changes the name for the caller, and tell everyone to use
 801   that function.
 802
 803-  You can make the ``cache_lock`` protect only the cache itself, and
 804   use another lock to protect the name.
 805
 806Theoretically, you can make the locks as fine-grained as one lock for
 807every field, for every object. In practice, the most common variants
 808are:
 809
 810-  One lock which protects the infrastructure (the ``cache`` list in
 811   this example) and all the objects. This is what we have done so far.
 812
 813-  One lock which protects the infrastructure (including the list
 814   pointers inside the objects), and one lock inside the object which
 815   protects the rest of that object.
 816
 817-  Multiple locks to protect the infrastructure (eg. one lock per hash
 818   chain), possibly with a separate per-object lock.
 819
 820Here is the "lock-per-object" implementation:
 821
 822::
 823
 824    --- cache.c.refcnt-atomic   2003-12-11 15:50:54.000000000 +1100
 825    +++ cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
 826    @@ -6,11 +6,17 @@
 827
 828     struct object
 829     {
 830    +        /* These two protected by cache_lock. */
 831             struct list_head list;
 832    +        int popularity;
 833    +
 834             atomic_t refcnt;
 835    +
 836    +        /* Doesn't change once created. */
 837             int id;
 838    +
 839    +        spinlock_t lock; /* Protects the name */
 840             char name[32];
 841    -        int popularity;
 842     };
 843
 844     static DEFINE_SPINLOCK(cache_lock);
 845    @@ -77,6 +84,7 @@
 846             obj->id = id;
 847             obj->popularity = 0;
 848             atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
 849    +        spin_lock_init(&obj->lock);
 850
 851             spin_lock_irqsave(&cache_lock, flags);
 852             __cache_add(obj);
 853
 854Note that I decide that the popularity count should be protected by the
 855``cache_lock`` rather than the per-object lock: this is because it (like
 856the :c:type:`struct list_head <list_head>` inside the object)
 857is logically part of the infrastructure. This way, I don't need to grab
 858the lock of every object in __cache_add() when seeking
 859the least popular.
 860
 861I also decided that the id member is unchangeable, so I don't need to
 862grab each object lock in __cache_find() to examine the
 863id: the object lock is only used by a caller who wants to read or write
 864the name field.
 865
 866Note also that I added a comment describing what data was protected by
 867which locks. This is extremely important, as it describes the runtime
 868behavior of the code, and can be hard to gain from just reading. And as
 869Alan Cox says, “Lock data, not code”.
 870
 871Common Problems
 872===============
 873
 874Deadlock: Simple and Advanced
 875-----------------------------
 876
 877There is a coding bug where a piece of code tries to grab a spinlock
 878twice: it will spin forever, waiting for the lock to be released
 879(spinlocks, rwlocks and mutexes are not recursive in Linux). This is
 880trivial to diagnose: not a
 881stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
 882
 883For a slightly more complex case, imagine you have a region shared by a
 884softirq and user context. If you use a spin_lock() call
 885to protect it, it is possible that the user context will be interrupted
 886by the softirq while it holds the lock, and the softirq will then spin
 887forever trying to get the same lock.
 888
 889Both of these are called deadlock, and as shown above, it can occur even
 890with a single CPU (although not on UP compiles, since spinlocks vanish
 891on kernel compiles with ``CONFIG_SMP``\ =n. You'll still get data
 892corruption in the second example).
 893
 894This complete lockup is easy to diagnose: on SMP boxes the watchdog
 895timer or compiling with ``DEBUG_SPINLOCK`` set
 896(``include/linux/spinlock.h``) will show this up immediately when it
 897happens.
 898
 899A more complex problem is the so-called 'deadly embrace', involving two
 900or more locks. Say you have a hash table: each entry in the table is a
 901spinlock, and a chain of hashed objects. Inside a softirq handler, you
 902sometimes want to alter an object from one place in the hash to another:
 903you grab the spinlock of the old hash chain and the spinlock of the new
 904hash chain, and delete the object from the old one, and insert it in the
 905new one.
 906
 907There are two problems here. First, if your code ever tries to move the
 908object to the same chain, it will deadlock with itself as it tries to
 909lock it twice. Secondly, if the same softirq on another CPU is trying to
 910move another object in the reverse direction, the following could
 911happen:
 912
 913+-----------------------+-----------------------+
 914| CPU 1                 | CPU 2                 |
 915+=======================+=======================+
 916| Grab lock A -> OK     | Grab lock B -> OK     |
 917+-----------------------+-----------------------+
 918| Grab lock B -> spin   | Grab lock A -> spin   |
 919+-----------------------+-----------------------+
 920
 921Table: Consequences
 922
 923The two CPUs will spin forever, waiting for the other to give up their
 924lock. It will look, smell, and feel like a crash.
 925
 926Preventing Deadlock
 927-------------------
 928
 929Textbooks will tell you that if you always lock in the same order, you
 930will never get this kind of deadlock. Practice will tell you that this
 931approach doesn't scale: when I create a new lock, I don't understand
 932enough of the kernel to figure out where in the 5000 lock hierarchy it
 933will fit.
 934
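For the hash-chain example above, one common way of applying the textbook
rule is to take the two chain locks in a fixed (e.g. address) order and to
handle the same-chain case explicitly. A sketch (real code would also want
to tell lockdep about the nesting, e.g. with spin_lock_nested())::

    static void lock_two_chains(spinlock_t *a, spinlock_t *b)
    {
            /* Every CPU agrees on the order, so the deadly embrace cannot
             * happen; locking the same chain twice is avoided as well. */
            if (a == b) {
                    spin_lock(a);
            } else if (a < b) {
                    spin_lock(a);
                    spin_lock(b);
            } else {
                    spin_lock(b);
                    spin_lock(a);
            }
    }

    static void unlock_two_chains(spinlock_t *a, spinlock_t *b)
    {
            spin_unlock(a);
            if (a != b)
                    spin_unlock(b);
    }
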
 935The best locks are encapsulated: they never get exposed in headers, and
 936are never held around calls to non-trivial functions outside the same
 937file. You can read through this code and see that it will never
 938deadlock, because it never tries to grab another lock while it has that
 939one. People using your code don't even need to know you are using a
 940lock.
 941
 942A classic problem here is when you provide callbacks or hooks: if you
 943call these with the lock held, you risk simple deadlock, or a deadly
 944embrace (who knows what the callback will do?).
 945
 946Overzealous Prevention Of Deadlocks
 947~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 948
 949Deadlocks are problematic, but not as bad as data corruption. Code which
 950grabs a read lock, searches a list, fails to find what it wants, drops
 951the read lock, grabs a write lock and inserts the object has a race
 952condition.
 953
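The race-free version takes the write lock for the whole operation,
re-checking before the insert. A sketch with hypothetical names::

    struct thing {
            struct list_head list;
            int id;
    };

    static DEFINE_RWLOCK(things_lock);
    static LIST_HEAD(things);

    struct thing *__find_thing(int id);     /* hypothetical, lock must be held */

    void add_thing_if_missing(struct thing *new)
    {
            write_lock(&things_lock);
            /* Re-check: someone may have inserted it since we last looked. */
            if (!__find_thing(new->id))
                    list_add(&new->list, &things);
            write_unlock(&things_lock);
    }
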
 954Racing Timers: A Kernel Pastime
 955-------------------------------
 956
 957Timers can produce their own special problems with races. Consider a
 958collection of objects (list, hash, etc) where each object has a timer
 959which is due to destroy it.
 960
 961If you want to destroy the entire collection (say on module removal),
 962you might do the following::
 963
 964            /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
 965               HUNGARIAN NOTATION */
 966            spin_lock_bh(&list_lock);
 967
 968            while (list) {
 969                    struct foo *next = list->next;
 970                    timer_delete(&list->timer);
 971                    kfree(list);
 972                    list = next;
 973            }
 974
 975            spin_unlock_bh(&list_lock);
 976
 977
 978Sooner or later, this will crash on SMP, because a timer can have just
 979gone off before the spin_lock_bh(), and it will only get
 980the lock after we spin_unlock_bh(), and then try to free
 981the element (which has already been freed!).
 982
 983This can be avoided by checking the result of
 984timer_delete(): if it returns 1, the timer has been deleted.
 985If 0, it means (in this case) that it is currently running, so we can
 986do::
 987
 988            retry:
 989                    spin_lock_bh(&list_lock);
 990
 991                    while (list) {
 992                            struct foo *next = list->next;
 993                            if (!timer_delete(&list->timer)) {
 994                                    /* Give timer a chance to delete this */
 995                                    spin_unlock_bh(&list_lock);
 996                                    goto retry;
 997                            }
 998                            kfree(list);
 999                            list = next;
1000                    }
1001
1002                    spin_unlock_bh(&list_lock);
1003
1004
1005Another common problem is deleting timers which restart themselves (by
1006calling add_timer() at the end of their timer function).
1007Because this is a fairly common case which is prone to races, you should
1008use timer_delete_sync() (``include/linux/timer.h``) to handle this case.
1009
1010Before freeing a timer, timer_shutdown() or timer_shutdown_sync() should be
1011called which will keep it from being rearmed. Any subsequent attempt to
1012rearm the timer will be silently ignored by the core code.
1013
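A sketch of a teardown path using this, assuming a hypothetical ``struct
foo`` which embeds a ``struct timer_list`` called ``timer``::

    static void destroy_foo(struct foo *f)
    {
            /* Waits for a running handler to finish and prevents any further
             * re-arming.  Must not be called while holding a lock that the
             * timer handler itself takes. */
            timer_shutdown_sync(&f->timer);
            kfree(f);
    }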
1014
1015Locking Speed
1016=============
1017
1018There are three main things to worry about when considering speed of
1019some code which does locking. First is concurrency: how many things are
1020going to be waiting while someone else is holding a lock. Second is the
1021time taken to actually acquire and release an uncontended lock. Third is
1022using fewer, or smarter locks. I'm assuming that the lock is used fairly
1023often: otherwise, you wouldn't be concerned about efficiency.
1024
1025Concurrency depends on how long the lock is usually held: you should
1026hold the lock for as long as needed, but no longer. In the cache
1027example, we always create the object without the lock held, and then
1028grab the lock only when we are ready to insert it in the list.
1029
1030Acquisition times depend on how much damage the lock operations do to
1031the pipeline (pipeline stalls) and how likely it is that this CPU was
1032the last one to grab the lock (ie. is the lock cache-hot for this CPU):
1033on a machine with more CPUs, this likelihood drops fast. Consider a
1034700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
1035increment takes about 58ns, a lock which is cache-hot on this CPU takes
1036160ns, and a cacheline transfer from another CPU takes an additional 170
1037to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
1038article <http://www.linuxjournal.com/article.php?sid=6993>`__).
1039
1040These two aims conflict: holding a lock for a short time might be done
1041by splitting locks into parts (such as in our final per-object-lock
1042example), but this increases the number of lock acquisitions, and the
1043results are often slower than having a single lock. This is another
1044reason to advocate locking simplicity.
1045
1046The third concern is addressed below: there are some methods to reduce
1047the amount of locking which needs to be done.
1048
1049Read/Write Lock Variants
1050------------------------
1051
1052Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
1053:c:type:`struct rw_semaphore <rw_semaphore>`. These divide
1054users into two classes: the readers and the writers. If you are only
1055reading the data, you can get a read lock, but to write to the data you
1056need the write lock. Many people can hold a read lock, but a writer must
1057be sole holder.
1058
1059If your code divides neatly along reader/writer lines (as our cache code
1060does), and the lock is held by readers for significant lengths of time,
1061using these locks can help. They are slightly slower than the normal
1062locks though, so in practice ``rwlock_t`` is not usually worthwhile.
1063
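A sketch of the reader/writer split for user context, using a hypothetical
``struct config`` protected by an rw_semaphore::

    #include <linux/rwsem.h>

    struct config {
            int verbose;
            int timeout;
    };

    static DECLARE_RWSEM(config_rwsem);
    static struct config active_config;

    /* Any number of readers may hold the semaphore at the same time. */
    void get_config(struct config *out)
    {
            down_read(&config_rwsem);
            *out = active_config;
            up_read(&config_rwsem);
    }

    /* A writer excludes readers and other writers alike; may sleep, so this
     * is for user context only. */
    void set_config(const struct config *in)
    {
            down_write(&config_rwsem);
            active_config = *in;
            up_write(&config_rwsem);
    }
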
1064Avoiding Locks: Read Copy Update
1065--------------------------------
1066
1067There is a special method of read/write locking called Read Copy Update.
1068Using RCU, the readers can avoid taking a lock altogether: as we expect
1069our cache to be read more often than updated (otherwise the cache is a
1070waste of time), it is a candidate for this optimization.
1071
1072How do we get rid of read locks? Getting rid of read locks means that
1073writers may be changing the list underneath the readers. That is
1074actually quite simple: we can read a linked list while an element is
1075being added if the writer adds the element very carefully. For example,
1076adding ``new`` to a single linked list called ``list``::
1077
1078            new->next = list->next;
1079            wmb();
1080            list->next = new;
1081
1082
1083The wmb() is a write memory barrier. It ensures that the
1084first operation (setting the new element's ``next`` pointer) is complete
1085and will be seen by all CPUs, before the second operation is (putting
1086the new element into the list). This is important, since modern
1087compilers and modern CPUs can both reorder instructions unless told
1088otherwise: we want a reader to either not see the new element at all, or
1089see the new element with the ``next`` pointer correctly pointing at the
1090rest of the list.
1091
1092Fortunately, there is a function to do this for standard
1093:c:type:`struct list_head <list_head>` lists:
1094list_add_rcu() (``include/linux/list.h``).
1095
1096Removing an element from the list is even simpler: we replace the
1097pointer to the old element with a pointer to its successor, and readers
1098will either see it, or skip over it.
1099
1100::
1101
1102            list->next = old->next;
1103
1104
1105There is list_del_rcu() (``include/linux/list.h``) which
1106does this (the normal version poisons the old object, which we don't
1107want).
1108
1109The reader must also be careful: some CPUs can look through the ``next``
1110pointer to start reading the contents of the next element early, but
1111don't realize that the pre-fetched contents are wrong when the ``next``
1112pointer changes underneath them. Once again, there is a
1113list_for_each_entry_rcu() (``include/linux/list.h``)
1114to help you. Of course, writers can just use
1115list_for_each_entry(), since there cannot be two
1116simultaneous writers.
1117
1118Our final dilemma is this: when can we actually destroy the removed
1119element? Remember, a reader might be stepping through this element in
1120the list right now: if we free this element and the ``next`` pointer
1121changes, the reader will jump off into garbage and crash. We need to
1122wait until we know that all the readers who were traversing the list
1123when we deleted the element are finished. We use
1124call_rcu() to register a callback which will actually
1125destroy the object once all pre-existing readers are finished.
1126Alternatively, synchronize_rcu() may be used to block
1127until all pre-existing readers are finished.
1128
1129But how does Read Copy Update know when the readers are finished? The
1130method is this: firstly, the readers always traverse the list inside
1131rcu_read_lock()/rcu_read_unlock() pairs:
1132these simply disable preemption so the reader won't go to sleep while
1133reading the list.
1134
1135RCU then waits until every other CPU has slept at least once: since
1136readers cannot sleep, we know that any readers which were traversing the
1137list during the deletion are finished, and the callback is triggered.
1138The real Read Copy Update code is a little more optimized than this, but
1139this is the fundamental idea.
1140
1141::
1142
1143    --- cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
1144    +++ cache.c.rcupdate    2003-12-11 17:55:14.000000000 +1100
1145    @@ -1,15 +1,18 @@
1146     #include <linux/list.h>
1147     #include <linux/slab.h>
1148     #include <linux/string.h>
1149    +#include <linux/rcupdate.h>
1150     #include <linux/mutex.h>
1151     #include <asm/errno.h>
1152
1153     struct object
1154     {
1155    -        /* These two protected by cache_lock. */
1156    +        /* This is protected by RCU */
1157             struct list_head list;
1158             int popularity;
1159
1160    +        struct rcu_head rcu;
1161    +
1162             atomic_t refcnt;
1163
1164             /* Doesn't change once created. */
1165    @@ -40,7 +43,7 @@
1166     {
1167             struct object *i;
1168
1169    -        list_for_each_entry(i, &cache, list) {
1170    +        list_for_each_entry_rcu(i, &cache, list) {
1171                     if (i->id == id) {
1172                             i->popularity++;
1173                             return i;
1174    @@ -49,19 +52,25 @@
1175             return NULL;
1176     }
1177
1178    +/* Final discard done once we know no readers are looking. */
1179    +static void cache_delete_rcu(void *arg)
1180    +{
1181    +        object_put(arg);
1182    +}
1183    +
1184     /* Must be holding cache_lock */
1185     static void __cache_delete(struct object *obj)
1186     {
1187             BUG_ON(!obj);
1188    -        list_del(&obj->list);
1189    -        object_put(obj);
1190    +        list_del_rcu(&obj->list);
1191             cache_num--;
1192    +        call_rcu(&obj->rcu, cache_delete_rcu);
1193     }
1194
1195     /* Must be holding cache_lock */
1196     static void __cache_add(struct object *obj)
1197     {
1198    -        list_add(&obj->list, &cache);
1199    +        list_add_rcu(&obj->list, &cache);
1200             if (++cache_num > MAX_CACHE_SIZE) {
1201                     struct object *i, *outcast = NULL;
1202                     list_for_each_entry(i, &cache, list) {
1203    @@ -104,12 +114,11 @@
1204     struct object *cache_find(int id)
1205     {
1206             struct object *obj;
1207    -        unsigned long flags;
1208
1209    -        spin_lock_irqsave(&cache_lock, flags);
1210    +        rcu_read_lock();
1211             obj = __cache_find(id);
1212             if (obj)
1213                     object_get(obj);
1214    -        spin_unlock_irqrestore(&cache_lock, flags);
1215    +        rcu_read_unlock();
1216             return obj;
1217     }
1218
1219Note that the reader will alter the popularity member in
1220__cache_find(), and now it doesn't hold a lock. One
1221solution would be to make it an ``atomic_t``, but for this usage, we
1222don't really care about races: an approximate result is good enough, so
1223I didn't change it.
1224
1225The result is that cache_find() requires no
1226synchronization with any other functions, so is almost as fast on SMP as
1227it would be on UP.
1228
1229There is a further optimization possible here: remember our original
1230cache code, where there were no reference counts and the caller simply
1231held the lock whenever using the object? This is still possible: if you
1232hold the lock, no one can delete the object, so you don't need to get
1233and put the reference count.
1234
1235Now, because the 'read lock' in RCU is simply disabling preemption, a
1236caller which always has preemption disabled between calling
1237cache_find() and object_put() does not
1238need to actually get and put the reference count: we could expose
1239__cache_find() by making it non-static, and such
1240callers could simply call that.
1241
1242The benefit here is that the reference count is not written to: the
1243object is not altered in any way, which is much faster on SMP machines
1244due to caching.
1245
1246Per-CPU Data
1247------------
1248
1249Another technique for avoiding locking which is used fairly widely is to
1250duplicate information for each CPU. For example, if you wanted to keep a
1251count of a common condition, you could use a spin lock and a single
1252counter. Nice and simple.
1253
1254If that was too slow (it's usually not, but if you've got a really big
1255machine to test on and can show that it is), you could instead use a
1256counter for each CPU, then none of them need an exclusive lock. See
1257DEFINE_PER_CPU(), get_cpu_var() and
1258put_cpu_var() (``include/linux/percpu.h``).
1259
1260Of particular use for simple per-cpu counters is the ``local_t`` type,
1261and the cpu_local_inc() and related functions, which are
1262more efficient than simple code on some architectures
1263(``include/asm/local.h``).
1264
1265Note that there is no simple, reliable way of getting an exact value of
1266such a counter, without introducing more locks. This is not a problem
1267for some uses.
1268
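A sketch of such a per-CPU counter, using the helpers named above::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, hit_count);

    /* Fast path: touches only this CPU's copy, no lock needed. */
    static void count_hit(void)
    {
            get_cpu_var(hit_count)++;       /* disables preemption */
            put_cpu_var(hit_count);         /* re-enables it */
    }

    /* Slow path: the total is only approximate, since other CPUs may be
     * incrementing their copies while we sum them. */
    static unsigned long read_hits(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu(hit_count, cpu);
            return sum;
    }
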
1269Data Which Is Mostly Used By An IRQ Handler
1270-------------------------------------------
1271
1272If data is always accessed from within the same IRQ handler, you don't
1273need a lock at all: the kernel already guarantees that the irq handler
1274will not run simultaneously on multiple CPUs.
1275
1276Manfred Spraul points out that you can still do this, even if the data
1277is very occasionally accessed in user context or softirqs/tasklets. The
1278irq handler doesn't use a lock, and all other accesses are done as follows::
1279
1280        mutex_lock(&lock);
1281        disable_irq(irq);
1282        ...
1283        enable_irq(irq);
1284        mutex_unlock(&lock);
1285
1286The disable_irq() prevents the irq handler from running
1287(and waits for it to finish if it's currently running on other CPUs).
1288The mutex prevents any other accesses from happening at the same time.
1289Naturally, this is slower than just a spin_lock_irq()
1290call, so it only makes sense if this type of access happens extremely
1291rarely.
1292
1293What Functions Are Safe To Call From Interrupts?
1294================================================
1295
1296Many functions in the kernel sleep (ie. call schedule()) directly or
1297indirectly: you can never call them while holding a spinlock, or with
1298preemption disabled. This also means you need to be in user context:
1299calling them from an interrupt is illegal.
1300
1301Some Functions Which Sleep
1302--------------------------
1303
1304The most common ones are listed below, but you usually have to read the
1305code to find out if other calls are safe. If everyone else who calls it
1306can sleep, you probably need to be able to sleep, too. In particular,
1307registration and deregistration functions usually expect to be called
1308from user context, and can sleep.
1309
1310-  Accesses to userspace:
1311
1312   -  copy_from_user()
1313
1314   -  copy_to_user()
1315
1316   -  get_user()
1317
1318   -  put_user()
1319
1320-  kmalloc(GFP_KERNEL)
1321
1322-  mutex_lock_interruptible() and
1323   mutex_lock()
1324
1325   There is a mutex_trylock() which does not sleep.
1326   Still, it must not be used inside interrupt context since its
1327   implementation is not safe for that. mutex_unlock()
1328   will also never sleep. It cannot be used in interrupt context either
1329   since a mutex must be released by the same task that acquired it.
1330
1331Some Functions Which Don't Sleep
1332--------------------------------
1333
1334Some functions are safe to call from any context, or holding almost any
1335lock.
1336
1337-  printk()
1338
1339-  kfree()
1340
1341-  add_timer() and timer_delete()
1342
1343Mutex API reference
1344===================
1345
1346.. kernel-doc:: include/linux/mutex.h
1347   :internal:
1348
1349.. kernel-doc:: kernel/locking/mutex.c
1350   :export:
1351
1352Futex API reference
1353===================
1354
1355.. kernel-doc:: kernel/futex/core.c
1356   :internal:
1357
1358.. kernel-doc:: kernel/futex/futex.h
1359   :internal:
1360
1361.. kernel-doc:: kernel/futex/pi.c
1362   :internal:
1363
1364.. kernel-doc:: kernel/futex/requeue.c
1365   :internal:
1366
1367.. kernel-doc:: kernel/futex/waitwake.c
1368   :internal:
1369
1370Further reading
1371===============
1372
1373-  ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
1374   tutorial in the kernel sources.
1375
1376-  Unix Systems for Modern Architectures: Symmetric Multiprocessing and
1377   Caching for Kernel Programmers:
1378
1379   Curt Schimmel's very good introduction to kernel level locking (not
1380   written for Linux, but nearly everything applies). The book is
1381   expensive, but really worth every penny to understand SMP locking.
1382   [ISBN: 0201633388]
1383
1384Thanks
1385======
1386
1387Thanks to Telsa Gwynne for DocBooking, neatening and adding style.
1388
1389Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
1390Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
1391James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
1392correcting, flaming, commenting.
1393
1394Thanks to the cabal for having no influence on this document.
1395
1396Glossary
1397========
1398
1399preemption
1400  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
1401  context inside the kernel would not preempt each other (ie. you had that
1402  CPU until you gave it up, except for interrupts). With the addition of
1403  ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
1404  priority tasks can "cut in": spinlocks were changed to disable
1405  preemption, even on UP.
1406
1407bh
1408  Bottom Half: for historical reasons, functions with '_bh' in them often
1409  now refer to any software interrupt, e.g. spin_lock_bh()
1410  blocks any software interrupt on the current CPU. Bottom halves are
1411  deprecated, and will eventually be replaced by tasklets. Only one bottom
1412  half will be running at any time.
1413
1414Hardware Interrupt / Hardware IRQ
1415  Hardware interrupt request. in_hardirq() returns true in a
1416  hardware interrupt handler.
1417
1418Interrupt Context
1419  Not user context: processing a hardware irq or software irq. Indicated
1420  by the in_interrupt() macro returning true.
1421
1422SMP
1423  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines.
1424  (``CONFIG_SMP=y``).
1425
1426Software Interrupt / softirq
1427  Software interrupt handler. in_hardirq() returns false;
1428  in_softirq() returns true. Tasklets and softirqs both
1429  fall into the category of 'software interrupts'.
1430
1431  Strictly speaking a softirq is one of up to 32 enumerated software
1432  interrupts which can run on multiple CPUs at once. Sometimes used to
1433  refer to tasklets as well (ie. all software interrupts).
1434
1435tasklet
1436  A dynamically-registrable software interrupt, which is guaranteed to
1437  only run on one CPU at a time.
1438
1439timer
1440  A dynamically-registrable software interrupt, which is run at (or close
1441  to) a given time. When running, it is just like a tasklet (in fact, they
1442  are called from the ``TIMER_SOFTIRQ``).
1443
1444UP
1445  Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).
1446
1447User Context
1448  The kernel executing on behalf of a particular process (ie. a system
1449  call or trap) or kernel thread. You can tell which process with the
1450``current`` macro. Not to be confused with userspace. Can be
1451  interrupted by software or hardware interrupts.
1452
1453Userspace
1454  A process executing its own code outside the kernel.