Performance: mark lib_ring_buffer_write always inline
author Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Sun, 25 Sep 2016 14:43:22 +0000 (10:43 -0400)
committer Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Sun, 25 Sep 2016 14:43:22 +0000 (10:43 -0400)
The underlying copy operation is more efficient when the size is a
compile-time constant, which only happens if this function is inlined
into its caller. Otherwise, we end up emitting a call to memcpy() for
each field.

Force inlining for performance reasons.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
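
For illustration, a minimal sketch of the effect described above. This
is not part of the patch; copy_field() and record_event() are
hypothetical names standing in for lib_ring_buffer_write() and its
callers:

#include <stdint.h>
#include <string.h>

/* Force-inlined helper: after inlining, len is a compile-time constant
 * at each call site, so the compiler can lower memcpy() to direct
 * stores instead of emitting an out-of-line call. */
static inline __attribute__((always_inline))
void copy_field(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

void record_event(char *buf, uint64_t timestamp, uint32_t id)
{
	copy_field(buf, &timestamp, sizeof(timestamp));        /* 8-byte store */
	copy_field(buf + sizeof(timestamp), &id, sizeof(id));  /* 4-byte store */
}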
lib/ringbuffer/backend.h

index 8f6d7d04d8a8ed76d86ed694c44e1864cc05d509..449d663555dc37ecf11e7165fa564a6a45f94f57 100644 (file)
@@ -83,7 +83,7 @@ lib_ring_buffer_read_offset_address(struct lib_ring_buffer_backend *bufb,
  * backend-specific memcpy() operation. Calls the slow path (_ring_buffer_write)
  * if copy is crossing a page boundary.
  */
-static inline
+static inline __attribute__((always_inline))
 void lib_ring_buffer_write(const struct lib_ring_buffer_config *config,
                           struct lib_ring_buffer_ctx *ctx,
                           const void *src, size_t len)
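
A note on the design choice (general GCC behavior, not specific to this
patch): __attribute__((always_inline)) forces inlining even when the
optimization level or the compiler's heuristics would otherwise decline,
and GCC diagnoses an error if a call cannot be inlined. It is
conventionally combined with the inline keyword, as in the hunk above.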