Userspace RCU Atomic Operations API
===================================

by Mathieu Desnoyers and Paul E. McKenney

This document describes the `<urcu/uatomic.h>` API. Those are the atomic
operations provided by the Userspace RCU library. The general rule
regarding memory barriers is that only `uatomic_xchg()`,
`uatomic_cmpxchg()`, `uatomic_add_return()`, and `uatomic_sub_return()` imply
full memory barriers before and after the atomic operation. Other
primitives don't guarantee any memory barrier.

Only atomic operations performed on integers (`int` and `long`, signed
and unsigned) are supported on all architectures. Some architectures
also support 1-byte and 2-byte atomic operations. Those respectively
have `UATOMIC_HAS_ATOMIC_BYTE` and `UATOMIC_HAS_ATOMIC_SHORT` defined when
`uatomic.h` is included. Trying to perform an atomic write to a type size
not supported by the architecture will trigger an illegal instruction.
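
For instance, code that needs a 1-byte atomic counter can test for the
feature at compile time. The sketch below is illustrative only (the
variable and function names are hypothetical); it relies solely on the
macros named above:

```c
#include <stdint.h>
#include <urcu/uatomic.h>

#ifdef UATOMIC_HAS_ATOMIC_BYTE
static uint8_t ready_flag;		/* 1-byte atomics available on this arch */
#else
static unsigned long ready_flag;	/* fall back to a word-sized integer */
#endif

static void mark_ready(void)
{
	uatomic_set(&ready_flag, 1);
}
```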

In the description below, `type` is a type that can be atomically
written to by the architecture. It needs to be at most word-sized, and
its alignment needs to be greater than or equal to its size.


API
---

```c
void uatomic_set(type *addr, type v)
```

Atomically write `v` into `addr`. By "atomically", we mean that no
concurrent operation that reads from `addr` will see partial
effects of `uatomic_set()`.
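
A minimal usage sketch (the variable and function names are hypothetical,
not part of the API): a writer thread publishing a word-sized value. Note
that `uatomic_set()` by itself provides no ordering guarantee, only
tear-free storage.

```c
#include <urcu/uatomic.h>

static unsigned long max_requests;	/* shared between threads */

/* Writer side: store the new limit without tearing. */
static void update_limit(unsigned long new_limit)
{
	uatomic_set(&max_requests, new_limit);
}
```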


```c
type uatomic_read(type *addr)
```

Atomically read the content of `addr`. By "atomically", we mean that
`uatomic_read()` cannot see a partial effect of any concurrent
uatomic update.
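
A matching reader sketch for the hypothetical `max_requests` variable
above; the load is tear-free but, as stated, carries no memory-barrier
guarantee.

```c
#include <urcu/uatomic.h>

static unsigned long max_requests;	/* stored with uatomic_set() by writers */

/* Reader side: observe the current limit without seeing a partial store. */
static unsigned long current_limit(void)
{
	return uatomic_read(&max_requests);
}
```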


```c
type uatomic_cmpxchg(type *addr, type old, type new)
```

An atomic read-modify-write operation that performs this
sequence of operations atomically: check if `addr` contains `old`.
If true, then replace the content of `addr` by `new`. Return the
value previously contained by `addr`. This function implies a full
memory barrier before and after the atomic operation.
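
A common pattern built on `uatomic_cmpxchg()` is a compare-and-swap that
claims a one-shot flag. The sketch below is an illustration with
hypothetical names, not an excerpt from the library:

```c
#include <urcu/uatomic.h>

static int initialized;	/* 0: not done, 1: done */

/* Return nonzero if the caller won the race and should perform the init. */
static int try_claim_init(void)
{
	/* Succeeds only if `initialized` still contains 0. */
	return uatomic_cmpxchg(&initialized, 0, 1) == 0;
}
```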


```c
type uatomic_xchg(type *addr, type new)
```

An atomic read-modify-write operation that performs this sequence
of operations atomically: replace the content of `addr` by `new`,
and return the value previously contained by `addr`. This
function implies a full memory barrier before and after the atomic
operation.
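
For example (hypothetical names, assuming a word-sized integer as
described above), `uatomic_xchg()` can atomically consume a pending value
while resetting the slot, relying on the implied full barriers:

```c
#include <urcu/uatomic.h>

static unsigned long pending_token;	/* nonzero means work is pending */

/* Atomically take the pending token, leaving 0 behind. */
static unsigned long take_pending(void)
{
	return uatomic_xchg(&pending_token, 0);
}
```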


```c
type uatomic_add_return(type *addr, type v)
type uatomic_sub_return(type *addr, type v)
```

An atomic read-modify-write operation that performs this
sequence of operations atomically: increment/decrement the
content of `addr` by `v`, and return the resulting value. This
function implies a full memory barrier before and after the atomic
operation.
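
One illustrative use (with hypothetical types and names) is a reference
count whose final release is detected from the returned value:

```c
#include <urcu/uatomic.h>

struct resource {
	long refcount;
	/* ... payload ... */
};

/* Drop one reference; return nonzero when the caller held the last one. */
static int resource_put(struct resource *res)
{
	return uatomic_sub_return(&res->refcount, 1) == 0;
}
```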


```c
void uatomic_and(type *addr, type mask)
void uatomic_or(type *addr, type mask)
```

Atomically write the result of bitwise "and"/"or" between the
content of `addr` and `mask` into `addr`.

These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by explicitly using
`cmm_smp_mb__before_uatomic_and()`, `cmm_smp_mb__after_uatomic_and()`,
`cmm_smp_mb__before_uatomic_or()`, and `cmm_smp_mb__after_uatomic_or()`.
These explicit barriers are no-ops on architectures in which the underlying
atomic instructions implicitly supply the needed memory barriers.
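
As a sketch (the flag bit and variable names are hypothetical), setting a
bit in a shared flag word and pairing it with one of the explicit barrier
primitives might look like this:

```c
#include <urcu/uatomic.h>

#define FLAG_NEED_WAKEUP	(1UL << 0)

static unsigned long state_flags;

static void request_wakeup(void)
{
	uatomic_or(&state_flags, FLAG_NEED_WAKEUP);
	/* Order the flag update before any notification that follows. */
	cmm_smp_mb__after_uatomic_or();
}
```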


```c
void uatomic_add(type *addr, type v)
void uatomic_sub(type *addr, type v)
```

Atomically increment/decrement the content of `addr` by `v`.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_add()`,
`cmm_smp_mb__after_uatomic_add()`, `cmm_smp_mb__before_uatomic_sub()`, and
`cmm_smp_mb__after_uatomic_sub()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.
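
A minimal sketch of a statistics counter updated with `uatomic_add()`
(names are illustrative); no ordering is required here, so no explicit
barriers are used:

```c
#include <urcu/uatomic.h>

static unsigned long bytes_sent;	/* updated concurrently by many threads */

static void account_send(unsigned long len)
{
	/* Tear-free accumulation; callers need no ordering guarantee. */
	uatomic_add(&bytes_sent, len);
}
```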


```c
void uatomic_inc(type *addr)
void uatomic_dec(type *addr)
```

Atomically increment/decrement the content of `addr` by 1.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_inc()`,
`cmm_smp_mb__after_uatomic_inc()`, `cmm_smp_mb__before_uatomic_dec()`,
and `cmm_smp_mb__after_uatomic_dec()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.
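
Finally, a hypothetical sketch combining `uatomic_inc()` with its explicit
barrier helper, for the case where the increment must be ordered before a
subsequent store:

```c
#include <urcu/uatomic.h>

static unsigned long events_seen;
static unsigned long last_event_id;

static void record_event(unsigned long id)
{
	uatomic_inc(&events_seen);
	/* Order the increment before publishing the event id. */
	cmm_smp_mb__after_uatomic_inc();
	uatomic_set(&last_event_id, id);
}
```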