Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.      What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.      What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.      RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.      The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
        2010 Big API Table           http://lwn.net/Articles/419086/
5.      The RCU API, 2014 Edition    http://lwn.net/Articles/609904/
        2014 Big API Table           http://lwn.net/Articles/609973/


What is RCU?

RCU is a synchronization mechanism, optimized for read-mostly
situations, that was added to the Linux kernel during the 2.5
development effort.  Although RCU is actually quite simple once you
understand it,
getting there can sometimes be a challenge.  Part of the problem is that
most of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU.  Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU.  This document provides several
different paths, as follows:

1.      RCU OVERVIEW
2.      WHAT IS RCU'S CORE API?
3.      WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.      WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.      WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.      ANALOGY WITH READER-WRITER LOCKING
7.      FULL LIST OF RCU APIs
8.      ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers
will see either the old or the new version of the data structure
rather than a partially updated reference.  The reclamation phase does
the work of reclaiming (e.g., freeing) the data items removed from the
data structure during the removal phase.  Because reclaiming data
items can disrupt any readers concurrently referencing those data
items, the reclamation phase must not start until readers no longer
hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.      Remove pointers to a data structure, so that subsequent
        readers cannot gain a reference to it.

b.      Wait for all previous readers to complete their RCU read-side
        critical sections.

c.      At this point, there cannot be any readers who hold references
        to the data structure, so it now may safely be reclaimed
        (e.g., kfree()d).

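For concreteness, here is a minimal sketch of steps (a)-(c) acting on
an RCU-protected linked list.  The struct, list head, and lock are
illustrative assumptions, not part of the core RCU API:

        struct my_data {
                struct list_head list;
                int key;
        };
        static LIST_HEAD(my_head);              /* RCU-protected list */
        static DEFINE_SPINLOCK(my_lock);        /* serializes updaters */

        void remove_and_reclaim(struct my_data *p)
        {
                spin_lock(&my_lock);
                list_del_rcu(&p->list); /* (a) remove pointers to p */
                spin_unlock(&my_lock);
                synchronize_rcu();      /* (b) wait for pre-existing readers */
                kfree(p);               /* (c) now safe to reclaim */
        }
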
Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.      rcu_read_lock()
b.      rcu_read_unlock()
c.      synchronize_rcu() / call_rcu()
d.      rcu_assign_pointer()
e.      rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the others are enumerated
later.  See the kernel docbook documentation for more info, or look directly
at the function header comments.

rcu_read_lock()

        void rcu_read_lock(void);

        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section.  It is illegal
        to block while in an RCU read-side critical section, though
        kernels built with CONFIG_PREEMPT_RCU can preempt RCU
        read-side critical sections.  Any RCU-protected data structure
        accessed during an RCU read-side critical section is guaranteed to
        remain unreclaimed for the full duration of that critical section.
        Reference counts may be used in conjunction with RCU to maintain
        longer-term references to data structures.

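        For example, a reader can take a reference count while inside
        the critical section and then legally continue using the
        structure after rcu_read_unlock().  A minimal sketch, assuming
        a hypothetical reference-counted structure reached via an
        RCU-protected pointer gptr:

                struct obj {
                        atomic_t refcnt;
                        /* ... */
                };
                struct obj __rcu *gptr;

                struct obj *obj_get(void)
                {
                        struct obj *p;

                        rcu_read_lock();
                        p = rcu_dereference(gptr);
                        if (p && !atomic_inc_not_zero(&p->refcnt))
                                p = NULL;       /* object is being freed */
                        rcu_read_unlock();
                        return p;       /* reference may be held long-term */
                }
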
rcu_read_unlock()

        void rcu_read_unlock(void);

        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section.  Note that RCU
        read-side critical sections may be nested and/or overlapping.

synchronize_rcu()

        void synchronize_rcu(void);

        Marks the end of updater code and the beginning of reclaimer
        code.  It does this by blocking until all pre-existing RCU
        read-side critical sections on all CPUs have completed.
        Note that synchronize_rcu() will -not- necessarily wait for
        any subsequent RCU read-side critical sections to complete.
        For example, consider the following sequence of events:

                 CPU 0                  CPU 1                 CPU 2
             ----------------- ------------------------- ---------------
         1.  rcu_read_lock()
         2.                    enters synchronize_rcu()
         3.                                               rcu_read_lock()
         4.  rcu_read_unlock()
         5.                     exits synchronize_rcu()
         6.                                              rcu_read_unlock()

        To reiterate, synchronize_rcu() waits only for ongoing RCU
        read-side critical sections to complete, not necessarily for
        any that begin after synchronize_rcu() is invoked.

        Of course, synchronize_rcu() does not necessarily return
        -immediately- after the last pre-existing RCU read-side critical
        section completes.  For one thing, there might well be scheduling
        delays.  For another thing, many RCU implementations process
        requests in batches in order to improve efficiencies, which can
        further delay synchronize_rcu().

        Since synchronize_rcu() is the API that must figure out when
        readers are done, its implementation is key to RCU.  For RCU
        to be useful in all but the most read-intensive situations,
        synchronize_rcu()'s overhead must also be quite small.

        The call_rcu() API is a callback form of synchronize_rcu(),
        and is described in more detail in a later section.  Instead of
        blocking, it registers a function and argument which are invoked
        after all ongoing RCU read-side critical sections have completed.
        This callback variant is particularly useful in situations where
        it is illegal to block or where update-side performance is
        critically important.

        However, the call_rcu() API should not be used lightly, as use
        of the synchronize_rcu() API generally results in simpler code.
        In addition, the synchronize_rcu() API has the nice property
        of automatically limiting update rate should grace periods
        be delayed.  This property results in system resilience in the face
        of denial-of-service attacks.  Code using call_rcu() should limit
        update rate in order to gain this same sort of resilience.  See
        checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

        typeof(p) rcu_assign_pointer(p, typeof(p) v);

        Yes, rcu_assign_pointer() -is- implemented as a macro, though it
        would be cool to be able to declare a function in this manner.
        (Compiler experts will no doubt disagree.)

        The updater uses this function to assign a new value to an
        RCU-protected pointer, in order to safely communicate the change
        in value from the updater to the reader.  This function returns
        the new value, and also executes any memory-barrier instructions
        required for a given CPU architecture.

        Perhaps just as important, it serves to document (1) which
        pointers are protected by RCU and (2) the point at which a
        given structure becomes accessible to other CPUs.  That said,
        rcu_assign_pointer() is most frequently used indirectly, via
        the _rcu list-manipulation primitives such as list_add_rcu().

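        For instance, inserting an element into the my_data list from
        the sketch in Section 1 requires nothing more than the
        following; the rcu_assign_pointer() is buried inside
        list_add_rcu():

                void add_entry(struct my_data *p)
                {
                        spin_lock(&my_lock);    /* exclude other updaters */
                        list_add_rcu(&p->list, &my_head);
                        spin_unlock(&my_lock);
                }
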
rcu_dereference()

        typeof(p) rcu_dereference(p);

        Like rcu_assign_pointer(), rcu_dereference() must be implemented
        as a macro.

        The reader uses rcu_dereference() to fetch an RCU-protected
        pointer, which returns a value that may then be safely
        dereferenced.  Note that rcu_dereference() does not actually
        dereference the pointer; instead, it protects the pointer for
        later dereferencing.  It also executes any needed memory-barrier
        instructions for a given CPU architecture.  Currently, only Alpha
        needs memory barriers within rcu_dereference() -- on other CPUs, it
        compiles to a simple volatile load, with no memory-barrier
        instructions.

        Common coding practice uses rcu_dereference() to copy an
        RCU-protected pointer to a local variable, then dereferences
        this local variable, for example as follows:

                p = rcu_dereference(head.next);
                return p->data;

        However, in this case, one could just as easily combine these
        into one statement:

                return rcu_dereference(head.next)->data;

        If you are going to be fetching multiple fields from the
        RCU-protected structure, using the local variable is of
        course preferred.  Repeated rcu_dereference() calls look
        ugly, do not guarantee that the same pointer will be returned
        if an update happened while in the critical section, and incur
        unnecessary overhead on Alpha CPUs.

        Note that the value returned by rcu_dereference() is valid
        only within the enclosing RCU read-side critical section.
        For example, the following is -not- legal:

                rcu_read_lock();
                p = rcu_dereference(head.next);
                rcu_read_unlock();
                x = p->address; /* BUG!!! */
                rcu_read_lock();
                y = p->data;    /* BUG!!! */
                rcu_read_unlock();

        Holding a reference from one RCU read-side critical section
        to another is just as illegal as holding a reference from
        one lock-based critical section to another!  Similarly,
        using a reference outside of the critical section in which
        it was acquired is just as illegal as doing so with normal
        locking.

        As with rcu_assign_pointer(), an important function of
        rcu_dereference() is to document which pointers are protected by
        RCU, in particular, flagging a pointer that is subject to changing
        at any time, including immediately after the rcu_dereference().
        And, again like rcu_assign_pointer(), rcu_dereference() is
        typically used indirectly, via the _rcu list-manipulation
        primitives, such as list_for_each_entry_rcu().

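        To continue the list example, a reader searching the my_data
        list from Section 1 might look as follows; the rcu_dereference()
        calls are hidden inside list_for_each_entry_rcu():

                int find_key(int key)
                {
                        struct my_data *p;
                        int found = 0;

                        rcu_read_lock();
                        list_for_each_entry_rcu(p, &my_head, list)
                                if (p->key == key) {
                                        found = 1; /* p valid only until rcu_read_unlock() */
                                        break;
                                }
                        rcu_read_unlock();
                        return found;
                }
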
The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


            rcu_assign_pointer()
                                    +--------+
            +---------------------->| reader |---------+
            |                       +--------+         |
            |                           |              |
            |                           |              | Protect:
            |                           |              | rcu_read_lock()
            |                           |              | rcu_read_unlock()
            |        rcu_dereference()  |              |
       +---------+                      |              |
       | updater |<---------------------+              |
       +---------+                                     V
            |                                    +-----------+
            +----------------------------------->| reclaimer |
                                                 +-----------+
              Defer:
              synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

        Defer                   Protect

a.      synchronize_rcu()       rcu_read_lock() / rcu_read_unlock()
        call_rcu()              rcu_dereference()

b.      synchronize_rcu_bh()    rcu_read_lock_bh() / rcu_read_unlock_bh()
        call_rcu_bh()           rcu_dereference_bh()

c.      synchronize_sched()     rcu_read_lock_sched() / rcu_read_unlock_sched()
        call_rcu_sched()        preempt_disable() / preempt_enable()
                                local_irq_save() / local_irq_restore()
                                hardirq enter / hardirq exit
                                NMI enter / NMI exit
                                rcu_dereference_sched()

These three mechanisms are used as follows:

a.      RCU applied to normal data structures.

b.      RCU applied to networking data structures that may be subjected
        to remote denial-of-service attacks.

c.      RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.

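As a tiny illustration of mechanism (b), a networking fast path might
protect its read side as follows; only the protect and defer primitives
change, the overall pattern stays the same.  (A sketch only, reusing the
hypothetical gptr from the earlier sketch; do_something() is likewise
hypothetical.)

        struct obj *p;

        rcu_read_lock_bh();             /* also disables softirqs */
        p = rcu_dereference_bh(gptr);
        if (p)
                do_something(p);
        rcu_read_unlock_bh();

        /* Updater, after unlinking the old structure: */
        synchronize_rcu_bh();           /* waits only for RCU-bh readers */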


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

        struct foo {
                int a;
                char b;
                long c;
        };
        DEFINE_SPINLOCK(foo_mutex);

        struct foo __rcu *gbl_foo;

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a".  Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses synchronize_rcu() to ensure that any readers that might
         * have references to the old structure complete before freeing
         * the old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                synchronize_rcu();
                kfree(old_fp);
        }

        /*
         * Return the value of field "a" of the current gbl_foo
         * structure.  Use rcu_read_lock() and rcu_read_unlock()
         * to ensure that the structure does not get deleted out
         * from under us, and use rcu_dereference() to ensure that
         * we see the initialized version of the structure (important
         * for DEC Alpha and for people reading the code).
         */
        int foo_get_a(void)
        {
                int retval;

                rcu_read_lock();
                retval = rcu_dereference(gbl_foo)->a;
                rcu_read_unlock();
                return retval;
        }

So, to sum up:

o       Use rcu_read_lock() and rcu_read_unlock() to guard RCU
        read-side critical sections.

o       Within an RCU read-side critical section, use rcu_dereference()
        to dereference RCU-protected pointers.

o       Use some solid scheme (such as locks or semaphores) to
        keep concurrent updates from interfering with each other.

o       Use rcu_assign_pointer() to update an RCU-protected pointer.
        This primitive protects concurrent readers from the updater,
        -not- concurrent updates from each other!  You therefore still
        need to use locking (or something similar) to keep concurrent
        rcu_assign_pointer() primitives from interfering with each other.

o       Use synchronize_rcu() -after- removing a data element from an
        RCU-protected data structure, but -before- reclaiming/freeing
        the data element, in order to wait for the completion of all
        RCU read-side critical sections that might be referencing that
        data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

        void call_rcu(struct rcu_head *head,
                      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

        struct foo {
                int a;
                char b;
                long c;
                struct rcu_head rcu;
        };

The foo_update_a() function might then be written as follows:

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a".  Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses call_rcu() to ensure that any readers that might have
         * references to the old structure complete before freeing the
         * old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                call_rcu(&old_fp->rcu, foo_reclaim);
        }

The foo_reclaim() function might appear as follows:

        void foo_reclaim(struct rcu_head *rp)
        {
                struct foo *fp = container_of(rp, struct foo, rcu);

                foo_cleanup(fp->a);

                kfree(fp);
        }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

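In other words, given &fp->rcu, container_of() backs up to the
enclosing struct foo.  A simplified sketch of the macro (the kernel's
real version adds type checking):

        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))
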
The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o       Use call_rcu() -after- removing a data element from an
        RCU-protected data structure in order to register a callback
        function that will be invoked after the completion of all RCU
        read-side critical sections that might be referencing that
        data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:

        kfree_rcu(old_fp, rcu);

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See the kernel/rcu/ directory
for production-quality implementations, and see:

        http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.  It also assumes recursive
reader-writer locks:  If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

        static DEFINE_RWLOCK(rcu_gp_mutex);

        void rcu_read_lock(void)
        {
                read_lock(&rcu_gp_mutex);
        }

        void rcu_read_unlock(void)
        {
                read_unlock(&rcu_gp_mutex);
        }

        void synchronize_rcu(void)
        {
                write_lock(&rcu_gp_mutex);
                write_unlock(&rcu_gp_mutex);
        }

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much.  But here are simplified versions anyway.  And whatever you do,
don't forget about them when submitting patches making use of RCU!]

        #define rcu_assign_pointer(p, v) \
        ({ \
                smp_store_release(&(p), (v)); \
        })

        #define rcu_dereference(p) \
        ({ \
                typeof(p) _________p1 = READ_ONCE(p); \
                (_________p1); \
        })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:  Why is this argument naive?  How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel?  How could this deadlock be avoided?


5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as CPU hotplug and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

        void rcu_read_lock(void) { }

        void rcu_read_unlock(void) { }

        void synchronize_rcu(void)
        {
                int cpu;

                for_each_possible_cpu(cpu)
                        run_on(cpu);
        }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

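For reference, here is one way run_on() might be sketched.  For
simplicity, this sketch uses the user-space sched_setaffinity()
interface mentioned above; an in-kernel implementation would instead
use something like set_cpus_allowed_ptr():

        #define _GNU_SOURCE
        #include <sched.h>

        static void run_on(int cpu)
        {
                cpu_set_t mask;

                CPU_ZERO(&mask);
                CPU_SET(cpu, &mask);
                sched_setaffinity(0, sizeof(mask), &mask); /* migrate to "cpu" */
        }
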
So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

        @@ -5,5 +5,5 @@ struct el {
                int data;
                /* Other data fields */
         };
        -rwlock_t listmutex;
        +spinlock_t listmutex;
         struct el head;

        @@ -13,15 +14,15 @@
                struct list_head *lp;
                struct el *p;

        -       read_lock(&listmutex);
        -       list_for_each_entry(p, head, lp) {
        +       rcu_read_lock();
        +       list_for_each_entry_rcu(p, head, lp) {
                        if (p->key == key) {
                                *result = p->data;
        -                       read_unlock(&listmutex);
        +                       rcu_read_unlock();
                                return 1;
                        }
                }
        -       read_unlock(&listmutex);
        +       rcu_read_unlock();
                return 0;
         }

        @@ -29,15 +30,16 @@
         {
                struct el *p;

        -       write_lock(&listmutex);
        +       spin_lock(&listmutex);
                list_for_each_entry(p, head, lp) {
                        if (p->key == key) {
        -                       list_del(&p->list);
        -                       write_unlock(&listmutex);
        +                       list_del_rcu(&p->list);
        +                       spin_unlock(&listmutex);
        +                       synchronize_rcu();
                                kfree(p);
                                return 1;
                        }
                }
        -       write_unlock(&listmutex);
        +       spin_unlock(&listmutex);
                return 0;
         }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock(&listmutex);             6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock(&listmutex);      10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock(&listmutex);          14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal:

        list_entry_rcu
        list_first_entry_rcu
        list_next_rcu
        list_for_each_entry_rcu
        list_for_each_entry_continue_rcu
        hlist_first_rcu
        hlist_next_rcu
        hlist_pprev_rcu
        hlist_for_each_entry_rcu
        hlist_for_each_entry_rcu_bh
        hlist_for_each_entry_continue_rcu
        hlist_for_each_entry_continue_rcu_bh
        hlist_nulls_first_rcu
        hlist_nulls_for_each_entry_rcu
        hlist_bl_first_rcu
        hlist_bl_for_each_entry_rcu

RCU pointer/list update:

        rcu_assign_pointer
        list_add_rcu
        list_add_tail_rcu
        list_del_rcu
        list_replace_rcu
        hlist_add_behind_rcu
        hlist_add_before_rcu
        hlist_add_head_rcu
        hlist_del_rcu
        hlist_del_init_rcu
        hlist_replace_rcu
        list_splice_init_rcu()
        hlist_nulls_del_init_rcu
        hlist_nulls_del_rcu
        hlist_nulls_add_head_rcu
        hlist_bl_add_head_rcu
        hlist_bl_del_init_rcu
        hlist_bl_del_rcu
        hlist_bl_set_first_rcu

RCU:    Critical sections       Grace period            Barrier

        rcu_read_lock           synchronize_net         rcu_barrier
        rcu_read_unlock         synchronize_rcu
        rcu_dereference         synchronize_rcu_expedited
        rcu_read_lock_held      call_rcu
        rcu_dereference_check   kfree_rcu
        rcu_dereference_protected

bh:     Critical sections       Grace period            Barrier

        rcu_read_lock_bh        call_rcu_bh             rcu_barrier_bh
        rcu_read_unlock_bh      synchronize_rcu_bh
        rcu_dereference_bh      synchronize_rcu_bh_expedited
        rcu_dereference_bh_check
        rcu_dereference_bh_protected
        rcu_read_lock_bh_held

sched:  Critical sections       Grace period            Barrier

        rcu_read_lock_sched     synchronize_sched       rcu_barrier_sched
        rcu_read_unlock_sched   call_rcu_sched
        [preempt_disable]       synchronize_sched_expedited
        [and friends]
        rcu_read_lock_sched_notrace
        rcu_read_unlock_sched_notrace
        rcu_dereference_sched
        rcu_dereference_sched_check
        rcu_dereference_sched_protected
        rcu_read_lock_sched_held


SRCU:   Critical sections       Grace period            Barrier

        srcu_read_lock          synchronize_srcu        srcu_barrier
        srcu_read_unlock        call_srcu
        srcu_dereference        synchronize_srcu_expedited
        srcu_dereference_check
        srcu_read_lock_held

SRCU:   Initialization/cleanup
        DEFINE_SRCU
        DEFINE_STATIC_SRCU
        init_srcu_struct
        cleanup_srcu_struct

All:  lockdep-checked RCU-protected pointer access

        rcu_access_pointer
        rcu_dereference_raw
        RCU_LOCKDEP_WARN
        rcu_sleep_check
        RCU_NONIDLE

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.      Will readers need to block?  If so, you need SRCU.  (A
        minimal SRCU sketch appears after this list.)

b.      What about the -rt patchset?  If readers would need to block
        in a non-rt kernel, you need SRCU.  If readers would block
        in a -rt kernel, but not in a non-rt kernel, SRCU is not
        necessary.  (The -rt patchset turns spinlocks into sleeplocks,
        hence this distinction.)

c.      Do you need to treat NMI handlers, hardirq handlers,
        and code segments with preemption disabled (whether
        via preempt_disable(), local_irq_save(), local_bh_disable(),
        or some other mechanism) as if they were explicit RCU readers?
        If so, RCU-sched is the only choice that will work for you.

d.      Do you need RCU grace periods to complete even in the face
        of softirq monopolization of one or more of the CPUs?  For
        example, is your code subject to network-based denial-of-service
        attacks?  If so, you need RCU-bh.

e.      Is your workload too update-intensive for normal use of
        RCU, but inappropriate for other synchronization mechanisms?
        If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
        named SLAB_DESTROY_BY_RCU).  But please be careful!

f.      Do you need read-side critical sections that are respected
        even though they are in the middle of the idle loop, during
        user-mode execution, or on an offlined CPU?  If so, SRCU is the
        only choice that will work for you.

g.      Otherwise, use RCU.

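As promised in choice (a), here is a minimal SRCU sketch.  Each SRCU
domain has its own srcu_struct, and SRCU readers are permitted to
block (my_srcu is a name chosen for illustration):

        DEFINE_SRCU(my_srcu);                   /* one SRCU domain */

        int idx;

        idx = srcu_read_lock(&my_srcu);         /* read side; blocking is legal */
        /* ... access my_srcu-protected data ... */
        srcu_read_unlock(&my_srcu, idx);

        /* Update side, after unlinking the old data: */
        synchronize_srcu(&my_srcu);             /* waits only for my_srcu readers */
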
Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:  Why is this argument naive?  How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel?  [Referring to the lock-based "toy" RCU
                algorithm.]

Answer:         Consider the following sequence of events:

                1.      CPU 0 acquires some unrelated lock, call it
                        "problematic_lock", disabling irq via
                        spin_lock_irqsave().

                2.      CPU 1 enters synchronize_rcu(), write-acquiring
                        rcu_gp_mutex.

                3.      CPU 0 enters rcu_read_lock(), but must wait
                        because CPU 1 holds rcu_gp_mutex.

                4.      CPU 1 is interrupted, and the irq handler
                        attempts to acquire problematic_lock.

                The system is now deadlocked.

                One way to avoid this deadlock is to use an approach like
                that of CONFIG_PREEMPT_RT, where all normal spinlocks
                become blocking locks, and all irq handlers execute in
                the context of special tasks.  In this case, in step 4
                above, the irq handler would block, allowing CPU 1 to
                release rcu_gp_mutex, avoiding the deadlock.

                Even in the absence of deadlock, this RCU implementation
                allows latency to "bleed" from readers to other
                readers through synchronize_rcu().  To see this,
                consider task A in an RCU read-side critical section
                (thus read-holding rcu_gp_mutex), task B blocked
                attempting to write-acquire rcu_gp_mutex, and
                task C blocked in rcu_read_lock() attempting to
                read-acquire rcu_gp_mutex.  Task A's RCU read-side
                latency is holding up task C, albeit indirectly via
                task B.

                Realtime RCU implementations therefore use a counter-based
                approach where tasks in RCU read-side critical sections
                cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Answer:         Imagine a single-CPU system with a non-CONFIG_PREEMPT
                kernel where a routing table is used by process-context
                code, but can be updated by irq-context code (for example,
                by an "ICMP REDIRECT" packet).  The usual way of handling
                this would be to have the process-context code disable
                interrupts while searching the routing table.  Use of
                RCU allows such interrupt-disabling to be dispensed with.
                Thus, without RCU, you pay the cost of disabling interrupts,
                and with RCU you don't.

                One can argue that the overhead of RCU in this
                case is negative with respect to the single-CPU
                interrupt-disabling approach.  Others might argue that
                the overhead of RCU is merely zero, and that replacing
                the positive overhead of the interrupt-disabling scheme
                with the zero-overhead RCU scheme does not constitute
                negative overhead.

                In real life, of course, things are more complex.  But
                even the theoretical possibility of negative overhead for
                a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???

Answer:         Just as PREEMPT_RT permits preemption of spinlock
                critical sections, it permits preemption of RCU
                read-side critical sections.  It also permits
                spinlocks blocking while in RCU read-side critical
                sections.

                Why the apparent inconsistency?  Because it is
                possible to use priority boosting to keep the RCU
                grace periods short if need be (for example, if running
                short of memory).  In contrast, if blocking waiting
                for (say) network reception, there is no way to know
                what should be boosted.  Especially given that the
                process we need to boost might well be a human being
                who just went out for a pizza or something.  And although
                a computer-operated cattle prod might arouse serious
                interest, it might also provoke serious objections.
                Besides, how does the computer know what pizza parlor
                the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.