<!--
SPDX-FileCopyrightText: 2023 EfficiOS Inc.

SPDX-License-Identifier: CC-BY-4.0
-->

Userspace RCU API
=================

by Mathieu Desnoyers and Paul E. McKenney


API
---

```c
void rcu_init(void);
```

This must be called before any of the following functions
are invoked.
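
As an illustrative sketch (assuming the default flavor is used via
`#include <urcu.h>`), initialization would typically happen at the
start of `main()`, before any threads are created:

```c
#include <urcu.h>    /* Default flavor; an assumption for this sketch */

int main(void)
{
    rcu_init();    /* Must precede all other RCU API calls */

    /* ... create threads and use the RCU API ... */

    return 0;
}
```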


```c
void rcu_read_lock(void);
```

Begin an RCU read-side critical section. These critical
sections may be nested.


```c
void rcu_read_unlock(void);
```

End an RCU read-side critical section.
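
For example, a reader might bracket its accesses to an RCU-protected
pointer as follows (a sketch; the `global_cfg` variable, its
publication by an updater, and the use of `rcu_dereference()` are
assumptions for illustration, and the calling thread is assumed to be
registered):

```c
#include <urcu.h>

struct cfg {
    int timeout;
};

extern struct cfg *global_cfg;    /* Published elsewhere by an updater */

static int read_timeout(void)
{
    int timeout;

    rcu_read_lock();              /* Begin read-side critical section */
    timeout = rcu_dereference(global_cfg)->timeout;
    rcu_read_unlock();            /* End read-side critical section */

    return timeout;
}
```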


```c
void rcu_register_thread(void);
```

Each thread must invoke this function before its first call to
`rcu_read_lock()`. Threads that never call `rcu_read_lock()` need
not invoke this function. In addition, `rcu-bp` ("bullet proof"
RCU) does not require any thread to invoke `rcu_register_thread()`.


```c
void rcu_unregister_thread(void);
```

Each thread that invokes `rcu_register_thread()` must invoke
`rcu_unregister_thread()` before invoking `pthread_exit()`
or before returning from its top-level function.
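
Putting the registration calls together, a reader thread using a
non-`rcu-bp` flavor might look like this (a minimal sketch assuming
`#include <urcu.h>`):

```c
#include <stddef.h>
#include <urcu.h>

static void *reader_thread(void *arg)
{
    rcu_register_thread();      /* Before the first rcu_read_lock() */

    rcu_read_lock();
    /* ... read RCU-protected data ... */
    rcu_read_unlock();

    rcu_unregister_thread();    /* Before returning or pthread_exit() */
    return NULL;
}
```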


```c
void synchronize_rcu(void);
```

Wait until every pre-existing RCU read-side critical section
has completed. Note that this primitive will not necessarily
wait for RCU read-side critical sections that have not yet
started: this is not a reader-writer lock. The duration
actually waited is called an RCU grace period.
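
A common updater pattern, sketched below, unpublishes the old version
of an object, waits for a grace period, then frees it; `global_cfg`,
its type, and the use of `rcu_assign_pointer()` are assumptions for
illustration (updaters are also assumed to be serialized externally):

```c
#include <stdlib.h>
#include <urcu.h>

struct cfg {
    int timeout;
};

struct cfg *global_cfg;    /* Illustrative RCU-protected pointer */

static void replace_cfg(struct cfg *new_cfg)
{
    struct cfg *old_cfg = global_cfg;

    rcu_assign_pointer(global_cfg, new_cfg);  /* Publish the new version */
    synchronize_rcu();    /* Wait for pre-existing readers to finish */
    free(old_cfg);        /* No reader can still hold a reference */
}
```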


```c
struct urcu_gp_poll_state start_poll_synchronize_rcu(void);
```

Returns a `struct urcu_gp_poll_state` handle that can later be
passed to `poll_state_synchronize_rcu` to check, by polling,
whether a new grace period has started and completed since the
handle was obtained.

`start_poll_synchronize_rcu` must only be called from
registered RCU read-side threads. For the QSBR flavor, the
caller must be online.


```c
bool poll_state_synchronize_rcu(struct urcu_gp_poll_state state);
```

Checks if the grace period associated with the
`struct urcu_gp_poll_state` handle has completed. If the grace
period has completed, the function returns true. Otherwise,
it returns false.
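
Together with `start_poll_synchronize_rcu`, this allows an updater to
overlap other work with a grace period instead of blocking in
`synchronize_rcu()`. A minimal polling sketch (assuming the calling
thread is registered and, for QSBR, online):

```c
#include <poll.h>
#include <stdlib.h>
#include <urcu.h>

static void reclaim_when_safe(void *old_obj)
{
    struct urcu_gp_poll_state gp = start_poll_synchronize_rcu();

    /* ... perform other useful work here ... */

    while (!poll_state_synchronize_rcu(gp))
        (void) poll(NULL, 0, 10);    /* Sleep 10 ms between checks */

    free(old_obj);    /* The associated grace period has completed */
}
```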


```c
void call_rcu(struct rcu_head *head,
              void (*func)(struct rcu_head *head));
```

Registers the callback indicated by `head`. This means
that `func` will be invoked after the end of a future
RCU grace period. The `rcu_head` structure referenced
by `head` will normally be a field in a larger RCU-protected
structure. A typical implementation of `func` is as
follows:

```c
void func(struct rcu_head *head)
{
    struct foo *p = container_of(head, struct foo, rcu);

    free(p);
}
```

This RCU callback function can be registered as follows
given a pointer `p` to the enclosing structure:

```c
call_rcu(&p->rcu, func);
```

`call_rcu` should be called from registered RCU read-side threads.
For the QSBR flavor, the caller should be online.


```c
void rcu_barrier(void);
```

Wait for all `call_rcu()` work initiated prior to `rcu_barrier()` by
_any_ thread on the system to have completed before `rcu_barrier()`
returns. `rcu_barrier()` should never be called from a `call_rcu()`
thread. This function can be used, for instance, to ensure that
all memory reclaim involving a shared object has completed
before allowing `dlclose()` of this shared object to complete.
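
For instance, a shared object's cleanup function might flush its
pending callbacks before the object is unloaded (a sketch;
`mylib_cleanup` is a hypothetical name):

```c
#include <urcu.h>

/* Hypothetical cleanup entry point of a shared object using call_rcu(). */
void mylib_cleanup(void)
{
    /* Assumes no further call_rcu() invocations occur past this point. */
    rcu_barrier();    /* Wait for all queued callbacks to complete */
    /* It is now safe for the caller to dlclose() this shared object. */
}
```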


```c
struct call_rcu_data *create_call_rcu_data(unsigned long flags,
                                           int cpu_affinity);
```

Returns a handle that can be passed to the following
primitives. The `flags` argument can be zero, or can be
`URCU_CALL_RCU_RT` if the worker threads associated with the
new helper thread are to get real-time response. The
`cpu_affinity` argument specifies a CPU to which the `call_rcu`
helper thread should be affined; it is ignored if negative.


```c
void call_rcu_data_free(struct call_rcu_data *crdp);
```

Terminates a `call_rcu()` helper thread and frees its associated
data. The caller must have ensured that this thread is no longer
in use, for example, by passing `NULL` to `set_thread_call_rcu_data()`
and `set_cpu_call_rcu_data()` as required.


```c
struct call_rcu_data *get_default_call_rcu_data(void);
```

Returns the handle for the default `call_rcu()` helper thread.
Creates it if necessary.
167 | ||
168 | ||
169 | ```c | |
170 | struct call_rcu_data *get_cpu_call_rcu_data(int cpu); | |
171 | ``` | |
172 | ||
Returns the handle for the specified CPU's `call_rcu()` helper
thread, or `NULL` if that CPU has no helper thread currently
assigned. The call to this function and the use of the returned
`call_rcu_data` should be protected by an RCU read-side
lock.


```c
struct call_rcu_data *get_thread_call_rcu_data(void);
```

Returns the handle for the current thread's hard-assigned
`call_rcu()` helper thread, or `NULL` if the current thread is
instead using a per-CPU or the default helper thread.


```c
struct call_rcu_data *get_call_rcu_data(void);
```

Returns the handle for the current thread's `call_rcu()` helper
thread: the per-thread hard-assigned helper thread if one is set,
otherwise the current CPU's helper thread if one exists, otherwise
the default helper thread. `get_call_rcu_data` should be called
from registered RCU read-side threads. For the QSBR flavor, the
caller should be online.


```c
pthread_t get_call_rcu_thread(struct call_rcu_data *crdp);
```

Returns the pthread identifier of the helper thread associated
with the `call_rcu` helper thread data `crdp`.
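
For example, the returned identifier can be used with standard
pthread calls; the sketch below fetches the identifier of the default
helper thread:

```c
#include <pthread.h>
#include <urcu.h>

static pthread_t default_call_rcu_thread(void)
{
    struct call_rcu_data *crdp = get_default_call_rcu_data();

    /* pthread identifier of the default call_rcu() helper thread */
    return get_call_rcu_thread(crdp);
}
```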


```c
void set_thread_call_rcu_data(struct call_rcu_data *crdp);
```

Sets the current thread's hard-assigned `call_rcu()` helper to the
handle specified by `crdp`. Note that `crdp` can be `NULL` to
disassociate this thread from its helper. Once a thread is
disassociated from its helper, further `call_rcu()` invocations
use the current CPU's helper if there is one and the default
helper otherwise.
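
For example, a latency-sensitive thread might install its own
real-time helper and detach it again when done (a sketch; the CPU
choice and the minimal error handling are assumptions):

```c
#include <stddef.h>
#include <urcu.h>

static void use_private_call_rcu_helper(void)
{
    struct call_rcu_data *crdp;

    /* Real-time helper, affined to CPU 0 (an illustrative choice). */
    crdp = create_call_rcu_data(URCU_CALL_RCU_RT, 0);
    if (!crdp)
        return;    /* Keep using the per-CPU or default helper */

    set_thread_call_rcu_data(crdp);

    /* ... this thread's call_rcu() invocations now use crdp ... */

    set_thread_call_rcu_data(NULL);    /* Detach before freeing */
    call_rcu_data_free(crdp);
}
```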


```c
int set_cpu_call_rcu_data(int cpu, struct call_rcu_data *crdp);
```

Sets the specified CPU's `call_rcu()` helper to the handle
specified by `crdp`. Again, `crdp` can be `NULL` to disassociate
this CPU from its helper thread. Once a CPU has been
disassociated from its helper, further `call_rcu()` invocations
that would otherwise have used this CPU's helper will instead
use the default helper.

The caller must wait for a grace period to elapse between returning
from `set_cpu_call_rcu_data()` and calling `call_rcu_data_free()`
with the previous `call_rcu` data as argument.
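
For example, retiring a previously installed per-CPU helper could
look like this (a sketch; `old_crdp` is assumed to be the handle
installed earlier for `cpu`, and a non-zero return is assumed to
indicate failure):

```c
#include <stddef.h>
#include <urcu.h>

static void retire_cpu_helper(int cpu, struct call_rcu_data *old_crdp)
{
    /* Route this CPU's future call_rcu() work to the default helper. */
    if (set_cpu_call_rcu_data(cpu, NULL) != 0)
        return;

    synchronize_rcu();    /* Grace period between detach and free */
    call_rcu_data_free(old_crdp);
}
```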


```c
int create_all_cpu_call_rcu_data(unsigned long flags);
```

Creates a separate `call_rcu()` helper thread for each CPU.
After this primitive is invoked, the global default `call_rcu()`
helper thread will not be called.

The `set_thread_call_rcu_data()`, `set_cpu_call_rcu_data()`, and
`create_all_cpu_call_rcu_data()` functions may be combined to set up
pretty much any desired association between worker and `call_rcu()`
helper threads. If a given executable calls only `call_rcu()`,
then that executable will have only the single global default
`call_rcu()` helper thread. This will suffice in most cases.
251 | ||
252 | ||
253 | ```c | |
254 | void free_all_cpu_call_rcu_data(void); | |
255 | ``` | |
256 | ||
Clean up all the per-CPU `call_rcu` threads. Should be paired with
`create_all_cpu_call_rcu_data()` to perform teardown. Note that
this function invokes `synchronize_rcu()` internally, so the
caller should be careful not to hold mutexes (or mutexes within a
dependency chain) that are also taken within an RCU read-side
critical section, or in a section where QSBR threads are online.
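
A typical pairing, sketched below, starts the per-CPU helpers during
application startup and tears them down at shutdown (the wrapper
names are hypothetical and the return value of
`create_all_cpu_call_rcu_data()` is deliberately left uninterpreted):

```c
#include <urcu.h>

static void start_per_cpu_helpers(void)
{
    /* One call_rcu() helper thread per CPU; 0 means no special flags. */
    (void) create_all_cpu_call_rcu_data(0);
}

static void stop_per_cpu_helpers(void)
{
    /* Invokes synchronize_rcu() internally; see the caveat above about
     * mutexes shared with RCU read-side critical sections. */
    free_all_cpu_call_rcu_data();
}
```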


```c
void call_rcu_before_fork_parent(void);
void call_rcu_after_fork_parent(void);
void call_rcu_after_fork_child(void);
```

Should be used as `pthread_atfork()` handlers for programs using
`call_rcu` and performing `fork()` or `clone()` without a following
`exec()`.
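
For example, such a program could register the handlers once, before
creating any threads (a sketch using the handler names listed above):

```c
#include <pthread.h>
#include <urcu.h>

static int install_call_rcu_fork_handlers(void)
{
    /* prepare / parent / child handlers, in pthread_atfork() order */
    return pthread_atfork(call_rcu_before_fork_parent,
                          call_rcu_after_fork_parent,
                          call_rcu_after_fork_child);
}
```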