   1Please note that the "What is RCU?" LWN series is an excellent place
   2to start learning about RCU:
   3
   41.	What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
   52.	What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
   63.	RCU part 3: the RCU API      http://lwn.net/Articles/264090/
   74.	The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
   8
   9
  10What is RCU?
  11
  12RCU is a synchronization mechanism that was added to the Linux kernel
  13during the 2.5 development effort that is optimized for read-mostly
  14situations.  Although RCU is actually quite simple once you understand it,
  15getting there can sometimes be a challenge.  Part of the problem is that
  16most of the past descriptions of RCU have been written with the mistaken
  17assumption that there is "one true way" to describe RCU.  Instead,
  18the experience has been that different people must take different paths
  19to arrive at an understanding of RCU.  This document provides several
  20different paths, as follows:
  21
  221.	RCU OVERVIEW
  232.	WHAT IS RCU'S CORE API?
  243.	WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
  254.	WHAT IF MY UPDATING THREAD CANNOT BLOCK?
  265.	WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
  276.	ANALOGY WITH READER-WRITER LOCKING
  287.	FULL LIST OF RCU APIs
  298.	ANSWERS TO QUICK QUIZZES
  30
  31People who prefer starting with a conceptual overview should focus on
  32Section 1, though most readers will profit by reading this section at
  33some point.  People who prefer to start with an API that they can then
  34experiment with should focus on Section 2.  People who prefer to start
  35with example uses should focus on Sections 3 and 4.  People who need to
  36understand the RCU implementation should focus on Section 5, then dive
  37into the kernel source code.  People who reason best by analogy should
  38focus on Section 6.  Section 7 serves as an index to the docbook API
  39documentation, and Section 8 is the traditional answer key.
  40
  41So, start with the section that makes the most sense to you and your
  42preferred method of learning.  If you need to know everything about
  43everything, feel free to read the whole thing -- but if you are really
  44that type of person, you have perused the source code and will therefore
  45never need this document anyway.  ;-)
  46
  47
  481.  RCU OVERVIEW
  49
  50The basic idea behind RCU is to split updates into "removal" and
  51"reclamation" phases.  The removal phase removes references to data items
  52within a data structure (possibly by replacing them with references to
  53new versions of these data items), and can run concurrently with readers.
  54The reason that it is safe to run the removal phase concurrently with
  55readers is that the semantics of modern CPUs guarantee that readers will
  56see either the old or the new version of the data structure rather than a
  57partially updated reference.  The reclamation phase does the work of reclaiming
  58(e.g., freeing) the data items removed from the data structure during the
  59removal phase.  Because reclaiming data items can disrupt any readers
  60concurrently referencing those data items, the reclamation phase must
  61not start until readers no longer hold references to those data items.
  62
  63Splitting the update into removal and reclamation phases permits the
  64updater to perform the removal phase immediately, and to defer the
  65reclamation phase until all readers active during the removal phase have
  66completed, either by blocking until they finish or by registering a
  67callback that is invoked after they finish.  Only readers that are active
  68during the removal phase need be considered, because any reader starting
  69after the removal phase will be unable to gain a reference to the removed
  70data items, and therefore cannot be disrupted by the reclamation phase.
  71
  72So the typical RCU update sequence goes something like the following:
  73
  74a.	Remove pointers to a data structure, so that subsequent
  75	readers cannot gain a reference to it.
  76
  77b.	Wait for all previous readers to complete their RCU read-side
  78	critical sections.
  79
  80c.	At this point, there cannot be any readers who hold references
  81	to the data structure, so it now may safely be reclaimed
  82	(e.g., kfree()d).
  83
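In code, this three-step sequence might look something like the
following (a minimal sketch, assuming a hypothetical element "p" on an
RCU-protected linked list whose updates are serialized by "list_lock"):

	spin_lock(&list_lock);
	list_del_rcu(&p->list);		/* Step (a): unlink p. */
	spin_unlock(&list_lock);
	synchronize_rcu();		/* Step (b): wait for pre-existing readers. */
	kfree(p);			/* Step (c): reclaim p. */
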
  84Step (b) above is the key idea underlying RCU's deferred destruction.
  85The ability to wait until all readers are done allows RCU readers to
  86use much lighter-weight synchronization, in some cases, absolutely no
  87synchronization at all.  In contrast, in more conventional lock-based
  88schemes, readers must use heavy-weight synchronization in order to
  89prevent an updater from deleting the data structure out from under them.
  90This is because lock-based updaters typically update data items in place,
  91and must therefore exclude readers.  In contrast, RCU-based updaters
  92typically take advantage of the fact that writes to single aligned
  93pointers are atomic on modern CPUs, allowing atomic insertion, removal,
  94and replacement of data items in a linked structure without disrupting
  95readers.  Concurrent RCU readers can then continue accessing the old
  96versions, and can dispense with the atomic operations, memory barriers,
  97and communications cache misses that are so expensive on present-day
  98SMP computer systems, even in the absence of lock contention.
  99
 100In the three-step procedure shown above, the updater is performing both
 101the removal and the reclamation step, but it is often helpful for an
 102entirely different thread to do the reclamation, as is in fact the case
 103in the Linux kernel's directory-entry cache (dcache).  Even if the same
 104thread performs both the update step (step (a) above) and the reclamation
 105step (step (c) above), it is often helpful to think of them separately.
 106For example, RCU readers and updaters need not communicate at all,
 107but RCU provides implicit low-overhead communication between readers
 108and reclaimers, namely, in step (b) above.
 109
 110So how the heck can a reclaimer tell when a reader is done, given
 111that readers are not doing any sort of synchronization operations???
 112Read on to learn about how RCU's API makes this easy.
 113
 114
 1152.  WHAT IS RCU'S CORE API?
 116
 117The core RCU API is quite small:
 118
 119a.	rcu_read_lock()
 120b.	rcu_read_unlock()
 121c.	synchronize_rcu() / call_rcu()
 122d.	rcu_assign_pointer()
 123e.	rcu_dereference()
 124
 125There are many other members of the RCU API, but the rest can be
 126expressed in terms of these five, though most implementations instead
 127express synchronize_rcu() in terms of the call_rcu() callback API.
 128
 129The five core RCU APIs are described below; the other 18 will be enumerated
 130later.  See the kernel docbook documentation for more info, or look directly
 131at the function header comments.
 132
 133rcu_read_lock()
 134
 135	void rcu_read_lock(void);
 136
 137	Used by a reader to inform the reclaimer that the reader is
 138	entering an RCU read-side critical section.  It is illegal
 139	to block while in an RCU read-side critical section, though
 140	kernels built with CONFIG_PREEMPT_RCU can preempt RCU
 141	read-side critical sections.  Any RCU-protected data structure
 142	accessed during an RCU read-side critical section is guaranteed to
 143	remain unreclaimed for the full duration of that critical section.
 144	Reference counts may be used in conjunction with RCU to maintain
 145	longer-term references to data structures.
 146
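	For example, a reader might acquire a reference count within the
	critical section in order to keep using the structure after the
	critical section exits (a minimal sketch, assuming a hypothetical
	RCU-protected pointer "gp" and a structure containing an atomic_t
	"refcnt" field whose matching "put" operation frees the structure):

		rcu_read_lock();
		p = rcu_dereference(gp);
		if (p && !atomic_inc_not_zero(&p->refcnt))
			p = NULL;	/* Already being freed, do not use. */
		rcu_read_unlock();
		/* If p is non-NULL, the reference keeps it valid from here on. */
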
 147rcu_read_unlock()
 148
 149	void rcu_read_unlock(void);
 150
 151	Used by a reader to inform the reclaimer that the reader is
 152	exiting an RCU read-side critical section.  Note that RCU
 153	read-side critical sections may be nested and/or overlapping.
 154
 155synchronize_rcu()
 156
 157	void synchronize_rcu(void);
 158
 159	Marks the end of updater code and the beginning of reclaimer
 160	code.  It does this by blocking until all pre-existing RCU
 161	read-side critical sections on all CPUs have completed.
 162	Note that synchronize_rcu() will -not- necessarily wait for
 163	any subsequent RCU read-side critical sections to complete.
 164	For example, consider the following sequence of events:
 165
 166	         CPU 0                  CPU 1                 CPU 2
 167	     ----------------- ------------------------- ---------------
 168	 1.  rcu_read_lock()
 169	 2.                    enters synchronize_rcu()
 170	 3.                                               rcu_read_lock()
 171	 4.  rcu_read_unlock()
 172	 5.                     exits synchronize_rcu()
 173	 6.                                              rcu_read_unlock()
 174
 175	To reiterate, synchronize_rcu() waits only for ongoing RCU
 176	read-side critical sections to complete, not necessarily for
 177	any that begin after synchronize_rcu() is invoked.
 178
 179	Of course, synchronize_rcu() does not necessarily return
 180	-immediately- after the last pre-existing RCU read-side critical
 181	section completes.  For one thing, there might well be scheduling
 182	delays.  For another thing, many RCU implementations process
 183	requests in batches in order to improve efficiencies, which can
 184	further delay synchronize_rcu().
 185
 186	Since synchronize_rcu() is the API that must figure out when
 187	readers are done, its implementation is key to RCU.  For RCU
 188	to be useful in all but the most read-intensive situations,
 189	synchronize_rcu()'s overhead must also be quite small.
 190
 191	The call_rcu() API is a callback form of synchronize_rcu(),
 192	and is described in more detail in a later section.  Instead of
 193	blocking, it registers a function and argument which are invoked
 194	after all ongoing RCU read-side critical sections have completed.
 195	This callback variant is particularly useful in situations where
 196	it is illegal to block or where update-side performance is
 197	critically important.
 198
 199	However, the call_rcu() API should not be used lightly, as use
 200	of the synchronize_rcu() API generally results in simpler code.
 201	In addition, the synchronize_rcu() API has the nice property
 202	of automatically limiting update rate should grace periods
 203	be delayed.  This property results in system resilience in the face
 204	of denial-of-service attacks.  Code using call_rcu() should limit
 205	update rate in order to gain this same sort of resilience.  See
 206	checklist.txt for some approaches to limiting the update rate.
 207
 208rcu_assign_pointer()
 209
 210	typeof(p) rcu_assign_pointer(p, typeof(p) v);
 211
 212	Yes, rcu_assign_pointer() -is- implemented as a macro, though it
 213	would be cool to be able to declare a function in this manner.
 214	(Compiler experts will no doubt disagree.)
 215
 216	The updater uses this function to assign a new value to an
 217	RCU-protected pointer, in order to safely communicate the change
 218	in value from the updater to the reader.  This function returns
 219	the new value, and also executes any memory-barrier instructions
 220	required for a given CPU architecture.
 221
 222	Perhaps just as important, it serves to document (1) which
 223	pointers are protected by RCU and (2) the point at which a
 224	given structure becomes accessible to other CPUs.  That said,
 225	rcu_assign_pointer() is most frequently used indirectly, via
 226	the _rcu list-manipulation primitives such as list_add_rcu().
 227
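	For example, inserting a new element into an RCU-protected list
	with list_add_rcu(), which uses rcu_assign_pointer() internally
	to publish the new element, might look like this (a minimal
	sketch, assuming a hypothetical list headed by "mylist" with
	update-side serialization provided by "mylist_lock"):

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		/* ... fully initialize *p before publishing it ... */
		spin_lock(&mylist_lock);
		list_add_rcu(&p->list, &mylist);	/* Now visible to readers. */
		spin_unlock(&mylist_lock);
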
 228rcu_dereference()
 229
 230	typeof(p) rcu_dereference(p);
 231
 232	Like rcu_assign_pointer(), rcu_dereference() must be implemented
 233	as a macro.
 234
 235	The reader uses rcu_dereference() to fetch an RCU-protected
 236	pointer, which returns a value that may then be safely
 237	dereferenced.  Note that rcu_dereference() does not actually
 238	dereference the pointer; instead, it protects the pointer for
 239	later dereferencing.  It also executes any needed memory-barrier
 240	instructions for a given CPU architecture.  Currently, only Alpha
 241	needs memory barriers within rcu_dereference() -- on other CPUs,
 242	it compiles to nothing, not even a compiler directive.
 243
 244	Common coding practice uses rcu_dereference() to copy an
 245	RCU-protected pointer to a local variable, then dereferences
 246	this local variable, for example as follows:
 247
 248		p = rcu_dereference(head.next);
 249		return p->data;
 250
 251	However, in this case, one could just as easily combine these
 252	into one statement:
 253
 254		return rcu_dereference(head.next)->data;
 255
 256	If you are going to be fetching multiple fields from the
 257	RCU-protected structure, using the local variable is of
 258	course preferred.  Repeated rcu_dereference() calls look
 259	ugly, do not guarantee that the same pointer will be returned
 260	if an update happened while in the critical section, and incur
 261	unnecessary overhead on Alpha CPUs.
 262
 263	Note that the value returned by rcu_dereference() is valid
 264	only within the enclosing RCU read-side critical section.
 265	For example, the following is -not- legal:
 266
 267		rcu_read_lock();
 268		p = rcu_dereference(head.next);
 269		rcu_read_unlock();
 270		x = p->address;	/* BUG!!! */
 271		rcu_read_lock();
 272		y = p->data;	/* BUG!!! */
 273		rcu_read_unlock();
 274
 275	Holding a reference from one RCU read-side critical section
 276	to another is just as illegal as holding a reference from
 277	one lock-based critical section to another!  Similarly,
 278	using a reference outside of the critical section in which
 279	it was acquired is just as illegal as doing so with normal
 280	locking.
 281
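	One legal alternative is to keep all uses of the pointer within a
	single RCU read-side critical section, reusing the fields from the
	example above:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		x = p->address;	/* OK: same critical section. */
		y = p->data;	/* OK: same critical section. */
		rcu_read_unlock();

	Another alternative is to acquire a reference count within the
	critical section, as noted in the description of rcu_read_lock()
	above.
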
 282	As with rcu_assign_pointer(), an important function of
 283	rcu_dereference() is to document which pointers are protected by
 284	RCU, in particular, flagging a pointer that is subject to changing
 285	at any time, including immediately after the rcu_dereference().
 286	And, again like rcu_assign_pointer(), rcu_dereference() is
 287	typically used indirectly, via the _rcu list-manipulation
 288	primitives, such as list_for_each_entry_rcu().
 289
 290The following diagram shows how each API communicates among the
 291reader, updater, and reclaimer.
 292
 293
 294	    rcu_assign_pointer()
 295	    			    +--------+
 296	    +---------------------->| reader |---------+
 297	    |                       +--------+         |
 298	    |                           |              |
 299	    |                           |              | Protect:
 300	    |                           |              | rcu_read_lock()
 301	    |                           |              | rcu_read_unlock()
 302	    |        rcu_dereference()  |              |
 303       +---------+                      |              |
 304       | updater |<---------------------+              |
 305       +---------+                                     V
 306	    |                                    +-----------+
 307	    +----------------------------------->| reclaimer |
 308	    				         +-----------+
 309	      Defer:
 310	      synchronize_rcu() & call_rcu()
 311
 312
 313The RCU infrastructure observes the time sequence of rcu_read_lock(),
 314rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
 315order to determine when (1) synchronize_rcu() invocations may return
 316to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
 317implementations of the RCU infrastructure make heavy use of batching in
 318order to amortize their overhead over many uses of the corresponding APIs.
 319
 320There are no fewer than three RCU mechanisms in the Linux kernel; the
 321diagram above shows the first one, which is by far the most commonly used.
 322The rcu_dereference() and rcu_assign_pointer() primitives are used for
 323all three mechanisms, but different defer and protect primitives are
 324used as follows:
 325
 326	Defer			Protect
 327
 328a.	synchronize_rcu()	rcu_read_lock() / rcu_read_unlock()
 329	call_rcu()		rcu_dereference()
 330
 331b.	synchronize_rcu_bh()	rcu_read_lock_bh() / rcu_read_unlock_bh()
 332	call_rcu_bh()		rcu_dereference_bh()
 333
 334c.	synchronize_sched()	rcu_read_lock_sched() / rcu_read_unlock_sched()
 335	call_rcu_sched()	preempt_disable() / preempt_enable()
 336				local_irq_save() / local_irq_restore()
 337				hardirq enter / hardirq exit
 338				NMI enter / NMI exit
 339				rcu_dereference_sched()
 340
 341These three mechanisms are used as follows:
 342
 343a.	RCU applied to normal data structures.
 344
 345b.	RCU applied to networking data structures that may be subjected
 346	to remote denial-of-service attacks.
 347
 348c.	RCU applied to scheduler and interrupt/NMI-handler tasks.
 349
 350Again, most uses will be of (a).  The (b) and (c) cases are important
 351for specialized uses, but are relatively uncommon.
 352
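As an illustration of case (b), an RCU-bh read-side critical section
simply pairs the _bh variants from the table above (a minimal sketch,
assuming a hypothetical RCU-bh-protected pointer "gp" and a hypothetical
helper function handle_packet_info()):

	rcu_read_lock_bh();
	p = rcu_dereference_bh(gp);
	if (p)
		handle_packet_info(p);
	rcu_read_unlock_bh();
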
 353
 3543.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
 355
 356This section shows a simple use of the core RCU API to protect a
 357global pointer to a dynamically allocated structure.  More-typical
 358uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.
 359
 360	struct foo {
 361		int a;
 362		char b;
 363		long c;
 364	};
 365	DEFINE_SPINLOCK(foo_mutex);
 366
 367	struct foo __rcu *gbl_foo;
 368
 369	/*
 370	 * Create a new struct foo that is the same as the one currently
 371	 * pointed to by gbl_foo, except that field "a" is replaced
 372	 * with "new_a".  Points gbl_foo to the new structure, and
 373	 * frees up the old structure after a grace period.
 374	 *
 375	 * Uses rcu_assign_pointer() to ensure that concurrent readers
 376	 * see the initialized version of the new structure.
 377	 *
 378	 * Uses synchronize_rcu() to ensure that any readers that might
 379	 * have references to the old structure complete before freeing
 380	 * the old structure.
 381	 */
 382	void foo_update_a(int new_a)
 383	{
 384		struct foo *new_fp;
 385		struct foo *old_fp;
 386
 387		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
 388		spin_lock(&foo_mutex);
 389		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
 390		*new_fp = *old_fp;
 391		new_fp->a = new_a;
 392		rcu_assign_pointer(gbl_foo, new_fp);
 393		spin_unlock(&foo_mutex);
 394		synchronize_rcu();
 395		kfree(old_fp);
 396	}
 397
 398	/*
 399	 * Return the value of field "a" of the current gbl_foo
 400	 * structure.  Use rcu_read_lock() and rcu_read_unlock()
 401	 * to ensure that the structure does not get deleted out
 402	 * from under us, and use rcu_dereference() to ensure that
 403	 * we see the initialized version of the structure (important
 404	 * for DEC Alpha and for people reading the code).
 405	 */
 406	int foo_get_a(void)
 407	{
 408		int retval;
 409
 410		rcu_read_lock();
 411		retval = rcu_dereference(gbl_foo)->a;
 412		rcu_read_unlock();
 413		return retval;
 414	}
 415
 416So, to sum up:
 417
 418o	Use rcu_read_lock() and rcu_read_unlock() to guard RCU
 419	read-side critical sections.
 420
 421o	Within an RCU read-side critical section, use rcu_dereference()
 422	to dereference RCU-protected pointers.
 423
 424o	Use some solid scheme (such as locks or semaphores) to
 425	keep concurrent updates from interfering with each other.
 426
 427o	Use rcu_assign_pointer() to update an RCU-protected pointer.
 428	This primitive protects concurrent readers from the updater,
 429	-not- concurrent updates from each other!  You therefore still
 430	need to use locking (or something similar) to keep concurrent
 431	rcu_assign_pointer() primitives from interfering with each other.
 432
 433o	Use synchronize_rcu() -after- removing a data element from an
 434	RCU-protected data structure, but -before- reclaiming/freeing
 435	the data element, in order to wait for the completion of all
 436	RCU read-side critical sections that might be referencing that
 437	data item.
 438
 439See checklist.txt for additional rules to follow when using RCU.
 440And again, more-typical uses of RCU may be found in listRCU.txt,
 441arrayRCU.txt, and NMI-RCU.txt.
 442
 443
 4444.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?
 445
 446In the example above, foo_update_a() blocks until a grace period elapses.
 447This is quite simple, but in some cases one cannot afford to wait so
 448long -- there might be other high-priority work to be done.
 449
 450In such cases, one uses call_rcu() rather than synchronize_rcu().
 451The call_rcu() API is as follows:
 452
 453	void call_rcu(struct rcu_head *head,
 454		      void (*func)(struct rcu_head *head));
 455
 456This function invokes func(head) after a grace period has elapsed.
 457This invocation might happen from either softirq or process context,
 458so the function is not permitted to block.  The foo struct needs to
 459have an rcu_head structure added, perhaps as follows:
 460
 461	struct foo {
 462		int a;
 463		char b;
 464		long c;
 465		struct rcu_head rcu;
 466	};
 467
 468The foo_update_a() function might then be written as follows:
 469
 470	/*
 471	 * Create a new struct foo that is the same as the one currently
 472	 * pointed to by gbl_foo, except that field "a" is replaced
 473	 * with "new_a".  Points gbl_foo to the new structure, and
 474	 * frees up the old structure after a grace period.
 475	 *
 476	 * Uses rcu_assign_pointer() to ensure that concurrent readers
 477	 * see the initialized version of the new structure.
 478	 *
 479	 * Uses call_rcu() to ensure that any readers that might have
 480	 * references to the old structure complete before freeing the
 481	 * old structure.
 482	 */
 483	void foo_update_a(int new_a)
 484	{
 485		struct foo *new_fp;
 486		struct foo *old_fp;
 487
 488		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
 489		spin_lock(&foo_mutex);
 490		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
 491		*new_fp = *old_fp;
 492		new_fp->a = new_a;
 493		rcu_assign_pointer(gbl_foo, new_fp);
 494		spin_unlock(&foo_mutex);
 495		call_rcu(&old_fp->rcu, foo_reclaim);
 496	}
 497
 498The foo_reclaim() function might appear as follows:
 499
 500	void foo_reclaim(struct rcu_head *rp)
 501	{
 502		struct foo *fp = container_of(rp, struct foo, rcu);
 503
 504		foo_cleanup(fp->a);
 505
 506		kfree(fp);
 507	}
 508
 509The container_of() primitive is a macro that, given a pointer into a
 510struct, the type of the struct, and the pointed-to field within the
 511struct, returns a pointer to the beginning of the struct.
 512
 513The use of call_rcu() permits the caller of foo_update_a() to
 514immediately regain control, without needing to worry further about the
 515old version of the newly updated element.  It also clearly shows the
 516RCU distinction between updater, namely foo_update_a(), and reclaimer,
 517namely foo_reclaim().
 518
 519The summary of advice is the same as for the previous section, except
 520that we are now using call_rcu() rather than synchronize_rcu():
 521
 522o	Use call_rcu() -after- removing a data element from an
 523	RCU-protected data structure in order to register a callback
 524	function that will be invoked after the completion of all RCU
 525	read-side critical sections that might be referencing that
 526	data item.
 527
 528If the callback for call_rcu() is not doing anything more than calling
 529kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
 530to avoid having to write your own callback:
 531
 532	kfree_rcu(old_fp, rcu);
 533
 534Again, see checklist.txt for additional rules governing the use of RCU.
 535
 536
 5375.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
 538
 539One of the nice things about RCU is that it has extremely simple "toy"
 540implementations that are a good first step towards understanding the
 541production-quality implementations in the Linux kernel.  This section
 542presents two such "toy" implementations of RCU, one that is implemented
 543in terms of familiar locking primitives, and another that more closely
 544resembles "classic" RCU.  Both are way too simple for real-world use,
 545lacking both functionality and performance.  However, they are useful
 546in getting a feel for how RCU works.  See kernel/rcupdate.c for a
 547production-quality implementation, and see:
 548
 549	http://www.rdrop.com/users/paulmck/RCU
 550
 551for papers describing the Linux kernel RCU implementation.  The OLS'01
 552and OLS'02 papers are a good introduction, and the dissertation provides
 553more details on the current implementation as of early 2004.
 554
 555
 5565A.  "TOY" IMPLEMENTATION #1: LOCKING
 557
 558This section presents a "toy" RCU implementation that is based on
 559familiar locking primitives.  Its overhead makes it a non-starter for
 560real-life use, as does its lack of scalability.  It is also unsuitable
 561for realtime use, since it allows scheduling latency to "bleed" from
 562one read-side critical section to another.
 563
 564However, it is probably the easiest implementation to relate to, so is
 565a good starting point.
 566
 567It is extremely simple:
 568
 569	static DEFINE_RWLOCK(rcu_gp_mutex);
 570
 571	void rcu_read_lock(void)
 572	{
 573		read_lock(&rcu_gp_mutex);
 574	}
 575
 576	void rcu_read_unlock(void)
 577	{
 578		read_unlock(&rcu_gp_mutex);
 579	}
 580
 581	void synchronize_rcu(void)
 582	{
 583		write_lock(&rcu_gp_mutex);
 584		write_unlock(&rcu_gp_mutex);
 585	}
 586
 587[You can ignore rcu_assign_pointer() and rcu_dereference() without
 588missing much.  But here they are anyway.  And whatever you do, don't
 589forget about them when submitting patches making use of RCU!]
 590
 591	#define rcu_assign_pointer(p, v)	({ \
 592							smp_wmb(); \
 593							(p) = (v); \
 594						})
 595
 596	#define rcu_dereference(p)     ({ \
 597					typeof(p) _________p1 = p; \
 598					smp_read_barrier_depends(); \
 599					(_________p1); \
 600					})
 601
 602
 603The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
 604and release a global reader-writer lock.  The synchronize_rcu()
 605primitive write-acquires this same lock, then immediately releases
 606it.  This means that once synchronize_rcu() exits, all RCU read-side
 607critical sections that were in progress before synchronize_rcu() was
 608called are guaranteed to have completed -- there is no way that
 609synchronize_rcu() would have been able to write-acquire the lock
 610otherwise.
 611
 612It is possible to nest rcu_read_lock(), since reader-writer locks may
 613be recursively acquired.  Note also that rcu_read_lock() is immune
 614from deadlock (an important property of RCU).  The reason for this is
 615that the only thing that can block rcu_read_lock() is a synchronize_rcu().
 616But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
 617so there can be no deadlock cycle.
 618
 619Quick Quiz #1:	Why is this argument naive?  How could a deadlock
 620		occur when using this algorithm in a real-world Linux
 621		kernel?  How could this deadlock be avoided?
 622
 623
 6245B.  "TOY" EXAMPLE #2: CLASSIC RCU
 625
 626This section presents a "toy" RCU implementation that is based on
 627"classic RCU".  It is also short on performance (but only for updates) and
 628on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
 629kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
 630are the same as those shown in the preceding section, so they are omitted.
 631
 632	void rcu_read_lock(void) { }
 633
 634	void rcu_read_unlock(void) { }
 635
 636	void synchronize_rcu(void)
 637	{
 638		int cpu;
 639
 640		for_each_possible_cpu(cpu)
 641			run_on(cpu);
 642	}
 643
 644Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
 645This is the great strength of classic RCU in a non-preemptive kernel:
 646read-side overhead is precisely zero, at least on non-Alpha CPUs.
 647And there is absolutely no way that rcu_read_lock() can possibly
 648participate in a deadlock cycle!
 649
 650The implementation of synchronize_rcu() simply schedules itself on each
 651CPU in turn.  The run_on() primitive can be implemented straightforwardly
 652in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
 653"toy" implementation would restore the affinity upon completion rather
 654than just leaving all tasks running on the last CPU, but when I said
 655"toy", I meant -toy-!
 656
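One possible sketch of run_on(), assuming that it simply migrates the
current task via the kernel-internal sched_setaffinity() and does not
bother restoring the original affinity (toy quality, as promised):

	static void run_on(int cpu)
	{
		/*
		 * Make the current task run on "cpu", which must
		 * therefore pass through a context switch.
		 */
		sched_setaffinity(current->pid, cpumask_of(cpu));
	}
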
 657So how the heck is this supposed to work???
 658
 659Remember that it is illegal to block while in an RCU read-side critical
 660section.  Therefore, if a given CPU executes a context switch, we know
 661that it must have completed all preceding RCU read-side critical sections.
 662Once -all- CPUs have executed a context switch, then -all- preceding
 663RCU read-side critical sections will have completed.
 664
 665So, suppose that we remove a data item from its structure and then invoke
 666synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
 667that there are no RCU read-side critical sections holding a reference
 668to that data item, so we can safely reclaim it.
 669
 670Quick Quiz #2:	Give an example where Classic RCU's read-side
 671		overhead is -negative-.
 672
 673Quick Quiz #3:  If it is illegal to block in an RCU read-side
 674		critical section, what the heck do you do in
 675		PREEMPT_RT, where normal spinlocks can block???
 676
 677
 6786.  ANALOGY WITH READER-WRITER LOCKING
 679
 680Although RCU can be used in many different ways, a very common use of
 681RCU is analogous to reader-writer locking.  The following unified
 682diff shows how closely related RCU and reader-writer locking can be.
 683
 684	@@ -13,15 +14,15 @@
 685		struct list_head *lp;
 686		struct el *p;
 687
 688	-	read_lock(&listmutex);
 689	-	list_for_each_entry(p, head, lp) {
 690	+	rcu_read_lock();
 691	+	list_for_each_entry_rcu(p, head, lp) {
 692			if (p->key == key) {
 693				*result = p->data;
 694	-			read_unlock(&listmutex);
 695	+			rcu_read_unlock();
 696				return 1;
 697			}
 698		}
 699	-	read_unlock(&listmutex);
 700	+	rcu_read_unlock();
 701		return 0;
 702	 }
 703
 704	@@ -29,15 +30,16 @@
 705	 {
 706		struct el *p;
 707
 708	-	write_lock(&listmutex);
 709	+	spin_lock(&listmutex);
 710		list_for_each_entry(p, head, lp) {
 711			if (p->key == key) {
 712	-			list_del(&p->list);
 713	-			write_unlock(&listmutex);
 714	+			list_del_rcu(&p->list);
 715	+			spin_unlock(&listmutex);
 716	+			synchronize_rcu();
 717				kfree(p);
 718				return 1;
 719			}
 720		}
 721	-	write_unlock(&listmutex);
 722	+	spin_unlock(&listmutex);
 723		return 0;
 724	 }
 725
 726Or, for those who prefer a side-by-side listing:
 727
 728 1 struct el {                          1 struct el {
 729 2   struct list_head list;             2   struct list_head list;
 730 3   long key;                          3   long key;
 731 4   spinlock_t mutex;                  4   spinlock_t mutex;
 732 5   int data;                          5   int data;
 733 6   /* Other data fields */            6   /* Other data fields */
 734 7 };                                   7 };
 735 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
 736 9 struct el head;                      9 struct el head;
 737
 738 1 int search(long key, int *result)    1 int search(long key, int *result)
 739 2 {                                    2 {
 740 3   struct list_head *lp;              3   struct list_head *lp;
 741 4   struct el *p;                      4   struct el *p;
 742 5                                      5
 743 6   read_lock(&listmutex);             6   rcu_read_lock();
 744 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 745 8     if (p->key == key) {             8     if (p->key == key) {
 746 9       *result = p->data;             9       *result = p->data;
 74710       read_unlock(&listmutex);      10       rcu_read_unlock();
 74811       return 1;                     11       return 1;
 74912     }                               12     }
 75013   }                                 13   }
 75114   read_unlock(&listmutex);          14   rcu_read_unlock();
 75215   return 0;                         15   return 0;
 75316 }                                   16 }
 754
 755 1 int delete(long key)                 1 int delete(long key)
 756 2 {                                    2 {
 757 3   struct el *p;                      3   struct el *p;
 758 4                                      4
 759 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 760 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 761 7     if (p->key == key) {             7     if (p->key == key) {
 762 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 763 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
 764                                       10       synchronize_rcu();
 76510       kfree(p);                     11       kfree(p);
 76611       return 1;                     12       return 1;
 76712     }                               13     }
 76813   }                                 14   }
 76914   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
 77015   return 0;                         16   return 0;
 77116 }                                   17 }
 772
 773Either way, the differences are quite small.  Read-side locking moves
 774to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
 775a reader-writer lock to a simple spinlock, and a synchronize_rcu()
 776precedes the kfree().
 777
 778However, there is one potential catch: the read-side and update-side
 779critical sections can now run concurrently.  In many cases, this will
 780not be a problem, but it is necessary to check carefully regardless.
 781For example, if multiple independent list updates must be seen as
 782a single atomic update, converting to RCU will require special care.
 783
 784Also, the presence of synchronize_rcu() means that the RCU version of
 785delete() can now block.  If this is a problem, there is a callback-based
 786mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
 787be used in place of synchronize_rcu().
 788
 789
 7907.  FULL LIST OF RCU APIs
 791
 792The RCU APIs are documented in docbook-format header comments in the
 793Linux-kernel source code, but it helps to have a full list of the
 794APIs, since there does not appear to be a way to categorize them
 795in docbook.  Here is the list, by category.
 796
 797RCU list traversal:
 798
 799	list_entry_rcu
 800	list_first_entry_rcu
 801	list_next_rcu
 802	list_for_each_entry_rcu
 803	list_for_each_entry_continue_rcu
 804	hlist_first_rcu
 805	hlist_next_rcu
 806	hlist_pprev_rcu
 807	hlist_for_each_entry_rcu
 808	hlist_for_each_entry_rcu_bh
 809	hlist_for_each_entry_continue_rcu
 810	hlist_for_each_entry_continue_rcu_bh
 811	hlist_nulls_first_rcu
 812	hlist_nulls_for_each_entry_rcu
 813	hlist_bl_first_rcu
 814	hlist_bl_for_each_entry_rcu
 815
 816RCU pointer/list update:
 817
 818	rcu_assign_pointer
 819	list_add_rcu
 820	list_add_tail_rcu
 821	list_del_rcu
 822	list_replace_rcu
 823	hlist_add_behind_rcu
 824	hlist_add_before_rcu
 825	hlist_add_head_rcu
 826	hlist_del_rcu
 827	hlist_del_init_rcu
 828	hlist_replace_rcu
 829	list_splice_init_rcu()
 830	hlist_nulls_del_init_rcu
 831	hlist_nulls_del_rcu
 832	hlist_nulls_add_head_rcu
 833	hlist_bl_add_head_rcu
 834	hlist_bl_del_init_rcu
 835	hlist_bl_del_rcu
 836	hlist_bl_set_first_rcu
 837
 838RCU:	Critical sections	Grace period		Barrier
 839
 840	rcu_read_lock		synchronize_net		rcu_barrier
 841	rcu_read_unlock		synchronize_rcu
 842	rcu_dereference		synchronize_rcu_expedited
 843	rcu_read_lock_held	call_rcu
 844	rcu_dereference_check	kfree_rcu
 845	rcu_dereference_protected
 846
 847bh:	Critical sections	Grace period		Barrier
 848
 849	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
 850	rcu_read_unlock_bh	synchronize_rcu_bh
 851	rcu_dereference_bh	synchronize_rcu_bh_expedited
 852	rcu_dereference_bh_check
 853	rcu_dereference_bh_protected
 854	rcu_read_lock_bh_held
 855
 856sched:	Critical sections	Grace period		Barrier
 857
 858	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
 859	rcu_read_unlock_sched	call_rcu_sched
 860	[preempt_disable]	synchronize_sched_expedited
 861	[and friends]
 862	rcu_read_lock_sched_notrace
 863	rcu_read_unlock_sched_notrace
 864	rcu_dereference_sched
 865	rcu_dereference_sched_check
 866	rcu_dereference_sched_protected
 867	rcu_read_lock_sched_held
 868
 869
 870SRCU:	Critical sections	Grace period		Barrier
 871
 872	srcu_read_lock		synchronize_srcu	srcu_barrier
 873	srcu_read_unlock	call_srcu
 874	srcu_dereference	synchronize_srcu_expedited
 875	srcu_dereference_check
 876	srcu_read_lock_held
 877
 878SRCU:	Initialization/cleanup
 879	init_srcu_struct
 880	cleanup_srcu_struct
 881
 882All:  lockdep-checked RCU-protected pointer access
 883
 884	rcu_access_pointer
 885	rcu_dereference_raw
 886	RCU_LOCKDEP_WARN
 887	rcu_sleep_check
 888	RCU_NONIDLE
 889
 890See the comment headers in the source code (or the docbook generated
 891from them) for more information.
 892
 893However, given that there are no fewer than four families of RCU APIs
 894in the Linux kernel, how do you choose which one to use?  The following
 895list can be helpful:
 896
 897a.	Will readers need to block?  If so, you need SRCU.
 898
 899b.	What about the -rt patchset?  If readers would need to block
 900	in a non-rt kernel, you need SRCU.  If readers would block
 901	in a -rt kernel, but not in a non-rt kernel, SRCU is not
 902	necessary.
 903
 904c.	Do you need to treat NMI handlers, hardirq handlers,
 905	and code segments with preemption disabled (whether
 906	via preempt_disable(), local_irq_save(), local_bh_disable(),
 907	or some other mechanism) as if they were explicit RCU readers?
 908	If so, RCU-sched is the only choice that will work for you.
 909
 910d.	Do you need RCU grace periods to complete even in the face
 911	of softirq monopolization of one or more of the CPUs?  For
 912	example, is your code subject to network-based denial-of-service
 913	attacks?  If so, you need RCU-bh.
 914
 915e.	Is your workload too update-intensive for normal use of
 916	RCU, but inappropriate for other synchronization mechanisms?
 917	If so, consider SLAB_DESTROY_BY_RCU.  But please be careful!
 918
 919f.	Do you need read-side critical sections that are respected
 920	even though they are in the middle of the idle loop, during
 921	user-mode execution, or on an offlined CPU?  If so, SRCU is the
 922	only choice that will work for you.
 923
 924g.	Otherwise, use RCU.
 925
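For example, the blocking readers called for in (a) might use SRCU as
follows (a minimal sketch, assuming a hypothetical srcu_struct named
"my_srcu" that has been set up with init_srcu_struct(), an SRCU-protected
pointer "gp", and a hypothetical helper that may sleep):

	int idx;

	idx = srcu_read_lock(&my_srcu);
	p = srcu_dereference(gp, &my_srcu);
	if (p)
		do_something_that_may_sleep(p);
	srcu_read_unlock(&my_srcu, idx);
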
 926Of course, this all assumes that you have determined that RCU is in fact
 927the right tool for your job.
 928
 929
 9308.  ANSWERS TO QUICK QUIZZES
 931
 932Quick Quiz #1:	Why is this argument naive?  How could a deadlock
 933		occur when using this algorithm in a real-world Linux
 934		kernel?  [Referring to the lock-based "toy" RCU
 935		algorithm.]
 936
 937Answer:		Consider the following sequence of events:
 938
 939		1.	CPU 0 acquires some unrelated lock, call it
 940			"problematic_lock", disabling irq via
 941			spin_lock_irqsave().
 942
 943		2.	CPU 1 enters synchronize_rcu(), write-acquiring
 944			rcu_gp_mutex.
 945
 946		3.	CPU 0 enters rcu_read_lock(), but must wait
 947			because CPU 1 holds rcu_gp_mutex.
 948
 949		4.	CPU 1 is interrupted, and the irq handler
 950			attempts to acquire problematic_lock.
 951
 952		The system is now deadlocked.
 953
 954		One way to avoid this deadlock is to use an approach like
 955		that of CONFIG_PREEMPT_RT, where all normal spinlocks
 956		become blocking locks, and all irq handlers execute in
 957		the context of special tasks.  In this case, in step 4
 958		above, the irq handler would block, allowing CPU 1 to
 959		release rcu_gp_mutex, avoiding the deadlock.
 960
 961		Even in the absence of deadlock, this RCU implementation
 962		allows latency to "bleed" from readers to other
 963		readers through synchronize_rcu().  To see this,
 964		consider task A in an RCU read-side critical section
 965		(thus read-holding rcu_gp_mutex), task B blocked
 966		attempting to write-acquire rcu_gp_mutex, and
 967		task C blocked in rcu_read_lock() attempting to
 968		read-acquire rcu_gp_mutex.  Task A's RCU read-side
 969		latency is holding up task C, albeit indirectly via
 970		task B.
 971
 972		Realtime RCU implementations therefore use a counter-based
 973		approach where tasks in RCU read-side critical sections
 974		cannot be blocked by tasks executing synchronize_rcu().
 975
 976Quick Quiz #2:	Give an example where Classic RCU's read-side
 977		overhead is -negative-.
 978
 979Answer:		Imagine a single-CPU system with a non-CONFIG_PREEMPT
 980		kernel where a routing table is used by process-context
 981		code, but can be updated by irq-context code (for example,
 982		by an "ICMP REDIRECT" packet).	The usual way of handling
 983		this would be to have the process-context code disable
 984		interrupts while searching the routing table.  Use of
 985		RCU allows such interrupt-disabling to be dispensed with.
 986		Thus, without RCU, you pay the cost of disabling interrupts,
 987		and with RCU you don't.
 988
 989		One can argue that the overhead of RCU in this
 990		case is negative with respect to the single-CPU
 991		interrupt-disabling approach.  Others might argue that
 992		the overhead of RCU is merely zero, and that replacing
 993		the positive overhead of the interrupt-disabling scheme
 994		with the zero-overhead RCU scheme does not constitute
 995		negative overhead.
 996
 997		In real life, of course, things are more complex.  But
 998		even the theoretical possibility of negative overhead for
 999		a synchronization primitive is a bit unexpected.  ;-)
1000
1001Quick Quiz #3:  If it is illegal to block in an RCU read-side
1002		critical section, what the heck do you do in
1003		PREEMPT_RT, where normal spinlocks can block???
1004
1005Answer:		Just as PREEMPT_RT permits preemption of spinlock
1006		critical sections, it permits preemption of RCU
1007		read-side critical sections.  It also permits
1008		spinlocks blocking while in RCU read-side critical
1009		sections.
1010
1011		Why the apparent inconsistency?  Because it is
1012		possible to use priority boosting to keep the RCU
1013		grace periods short if need be (for example, if running
1014		short of memory).  In contrast, if blocking waiting
1015		for (say) network reception, there is no way to know
1016		what should be boosted.  Especially given that the
1017		process we need to boost might well be a human being
1018		who just went out for a pizza or something.  And although
1019		a computer-operated cattle prod might arouse serious
1020		interest, it might also provoke serious objections.
1021		Besides, how does the computer know what pizza parlor
1022		the human being went to???
1023
1024
1025ACKNOWLEDGEMENTS
1026
1027My thanks to the people who helped make this human-readable, including
1028Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.
1029
1030
1031For more information, see http://www.rdrop.com/users/paulmck/RCU.