Rename C++ header files to .hpp Rename all C++ header files (include/**/*-internal.h, src/**/*.h except argpar and msgpack, some headers in tests) to have the .hpp extension. Doing so highlights that we include some C++ header files in some test files still compiled as C. This is ok for now, as the files they include don't actually contain C++ code incompatible with C yet, but they could eventually. This is something we can fix later. Change-Id: I8bf326b6b2946a3e26704f3ef3ac5831bbe9bc26 Signed-off-by: Simon Marchi <simon.marchi@efficios.com> Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Remove extern "C" from internal headers

All internal code is now compiled as C++, so we can remove all 'extern "C"' declarations from internal headers. This means files will see each other's declarations as C++, and we can now use C++ constructs in headers. Remove the min/min_t/max/max_t macros from macros.h as well.

Change-Id: I5a6b7ef60be5f46160c6d5ca39f082d2137d5a07
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Clean-up: consumer: move open packet to post_consume Move the "open packet" step of read_subbuffer to a post-consume callback as this only needs to be done for data streams; it does not belong in the core of the read_subbuffer template method. Change-Id: Ia4d3f8f833e213a8d0e39bcf5ec766c2c05bcf80 Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Fix: consumerd: user space metadata not regenerated

Observed issue
==============
The LTTng-IVC tests fail on the `regenerate metadata` tests which essentially:
- Set up a user space session
- Enable events
- Trace an application
- Stop tracing
- Validate the trace
- Truncate the metadata file (empty it)
- Start tracing
- Regenerate the metadata
- Stop the session
- Validate the trace

The last trace validation step fails on an empty file (locally) or a garbled file (remote).

The in-tree tests did not catch any of this since they essentially don't test much: they verify that the command works (returns 0) but do not validate any of its effects.

The issue was bisected down to:

commit 6f9449c22eef59294cf1e1dc3610a5cbf14baec0 (HEAD)
Author: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Date:   Sun May 10 18:00:26 2020 -0400

    consumerd: refactor: split read_subbuf into sub-operations
    [...]

Cause
=====
The commit that introduced the issue refactored the sub-buffer consumption loop to eliminate code duplication between the user space and kernel consumer daemons. In doing so, it eliminated a metadata version check from the consumption path.

The consumption of a metadata sub-buffer follows these relevant high-level steps:
- `get` the sub-buffer
  - /!\ user space specific /!\
    if the `get` fails, attempt to flush the metadata cache's contents to the ring-buffer
- populate `stream_subbuffer` properties (size, version, etc.)
- check the sub-buffer's version against the last known metadata version (pre-consume step)
  - if they don't match, a metadata regeneration occurred: reset the metadata consumed position
- consume (actual write/send)
- `put` sub-buffer
[...]

As shown above, the user space consumer must manage the flushing of the metadata cache explicitly, as opposed to the kernel domain for which the tracer performs the flushing implicitly through the `get` operation.
When the user space consumer encounters a `get` failure, it checks whether the whole metadata cache was flushed (consumed position == cache size) and, if not, flushes any remaining contents. However, the metadata version could have changed and yielded an identical cache size: a regeneration without any new metadata will yield the same cache size.

Since 6f9449c22, the metadata version check is only performed after a successful `get`. This means that after a regeneration, `get` never succeeds (there is seemingly nothing to consume), and the metadata version check is never performed. Therefore, the metadata stream is never put in the `reset` mode, effectively not regenerating the data.

Note that producing new metadata (e.g. a newly registering app announcing new events) would work around the problem here.

Solution
========
Add a metadata version check when failing to `get` a metadata sub-buffer. This is done in `commit_one_metadata_packet()` when the cache size is seen to be equal to the consumed position.

When this occurs, `consumer_stream_metadata_set_version()`, a new consumer stream method, is invoked which sets the new metadata version, sets the `reset` flag, and discards any previously bucketized metadata. The metadata cache's consumed position is also reset, allowing the cache flush to take place.

`metadata_stream_reset_cache()` is renamed to `metadata_stream_reset_cache_consumed_position()` since its former name was misleading and since it is used as part of the fix.

Known drawbacks
===============
None.

Change-Id: I3b933c8293f409f860074bd49bebd8d1248b6341
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Reported-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
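As a rough illustration of the fix, the version check on the `get`-failure path can be sketched as follows. The structures and field names below are simplified stand-ins, not the actual lttng-tools types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the consumer's metadata
 * stream and cache. */
struct metadata_cache {
	uint64_t version;
	size_t size; /* bytes of metadata produced so far */
};

struct metadata_stream {
	struct metadata_cache *cache;
	uint64_t version;    /* last metadata version seen by the stream */
	size_t consumed_pos; /* bytes already flushed to the ring buffer */
	bool reset_flag;
};

/* Reset the consumed position so the whole cache is flushed again. */
static void metadata_stream_reset_cache_consumed_position(
		struct metadata_stream *stream)
{
	stream->consumed_pos = 0;
}

/* Sketch of the fix: when the cache appears fully consumed, still
 * compare versions; a regeneration can leave the cache size unchanged.
 * Returns 1 when something was flushed, 0 when there was nothing to do. */
static int commit_one_metadata_packet(struct metadata_stream *stream)
{
	if (stream->cache->size == stream->consumed_pos) {
		if (stream->version != stream->cache->version) {
			/* consumer_stream_metadata_set_version() equivalent */
			stream->version = stream->cache->version;
			stream->reset_flag = true;
			metadata_stream_reset_cache_consumed_position(stream);
		} else {
			return 0; /* genuinely nothing to flush */
		}
	}
	/* ... flush cache contents from consumed_pos to the ring buffer ... */
	stream->consumed_pos = stream->cache->size;
	return 1; /* flushed something */
}
```

The key point is that the version comparison now happens even when the consumed position already equals the cache size, which is exactly the state a no-new-metadata regeneration leaves behind.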
Fix: consumerd: live client receives incomplete metadata

Observed issue
==============
Babeltrace 1.5.x and Babeltrace 2.x can both report errors (albeit differently) when using the "lttng-live" protocol that imply that the metadata they received is incomplete. For instance, babeltrace 1.5.3 reports the following error:

```
[error] Error creating AST
[error] [Context] Cannot open_mmap_trace of format ctf.
[error] Error adding trace
[warning] [Context] Cannot open_trace of format lttng-live at path net://localhost:xxxx/host/session/live_session.
[warning] [Context] cannot open trace "net://localhost:xxxx/host/session/live_session" for reading.
[error] opening trace "net://localhost:xxxx/host/session/live_session" for reading.
[error] none of the specified trace paths could be opened.
```

While debugging both viewers, I noticed that both were attempting to receive the available metadata before consuming the "data" streams' content. Typically, the following exchange between the relay daemon and the lttng-live client occurs when the problem is observed:

bt lttng-live: emits LTTNG_VIEWER_GET_METADATA command
relayd:        returns LTTNG_VIEWER_METADATA_OK, len = 4096 (default packet size)
bt lttng-live: consume 4096 bytes of metadata
               emits LTTNG_VIEWER_GET_METADATA command
relayd:        returns LTTNG_VIEWER_NO_NEW_METADATA

When the lttng-live client receives the LTTNG_VIEWER_NO_NEW_METADATA status code, it attempts to parse all the metadata it has received since the last LTTNG_VIEWER_NO_NEW_METADATA reply. In effect, it is expected that this forms a logical unit of metadata that is parseable on its own.

If this is the first time metadata is received for that trace, the metadata is expected to contain a trace declaration, packet header declaration, etc. If metadata was already received, it is expected that the newly parsed declarations can be "appended" to the existing trace schema.
It appears that the relay daemon sends the LTTNG_VIEWER_NO_NEW_METADATA while the metadata it has sent up to that point is not parseable on its own.

The live protocol description does not require or imply that a viewer should attempt to parse metadata packets until it hopefully succeeds at some point. Anyhow:
1) This would make it impossible for a live viewer to correctly handle a corrupted metadata stream beyond retrying forever,
2) This behaviour is not implemented by the two reference implementations of the protocol.

Cause
=====
The relay daemon provides a guarantee that it will send any available metadata before allowing a data stream packet to be served to the client.

In other words, a client requesting a data packet will receive the LTTNG_VIEWER_FLAG_NEW_METADATA status code (and no data) if it attempts to get a data stream packet while the relay daemon has metadata already available. This guarantee is properly enforced as far as I can tell.

However, looking at the consumer daemon implementation, it appears that metadata packets are sent as soon as they are available. A metadata packet is not guaranteed to be parseable on its own. For instance, it can end in the middle of an event declaration.

Hence, this hints at a race involving the tracer, the consumer daemon, the relay daemon, and the lttng-live client. Consider the following scenario:
- Metadata packets (sub-buffers) are configured to be 4kB in size,
- a large number of kernel events are enabled (e.g. --kernel --all),
- the network connection between the consumer and relay daemons is slow.

1) The kernel tracer will produce enough TSDL metadata to fill the first sub-buffer of the "metadata" ring-buffer and signal the consumer daemon that a buffer is ready. The tracer then starts writing the remaining data in the following available sub-buffers.

2) The consumer daemon metadata thread is woken up and consumes the first metadata sub-buffer and sends it to the relay daemon.
3) A live client establishes an lttng-live connection to the relay daemon and attempts to consume the available metadata. It receives the first packet and, since the relay daemon doesn't know about any follow-up metadata, receives LTTNG_VIEWER_NO_NEW_METADATA on the next attempt. 4) Having received LTTNG_VIEWER_NO_NEW_METADATA, the lttng-live client attempts to parse the metadata it has received and fails. This scenario is easy to reproduce by inserting a "sleep(1)" at src/bin/lttng-relayd/main.c:1978 (as of this revision). This simulates a relay daemon that would be slow to receive/process metadata packets from the consumer daemon. This problem similarly applies to the user space tracer. Solution ======== Having no knowledge of TSDL, the relay daemon can't "bundle" packets of metadata until they form a "parseable unit" to send to the consumer daemon. To provide the parseability guarantee expected by the viewers, and by the relay daemon implicitly, we need to ensure that the consumer daemons only send "parseable units" of metadata to the relay daemon. Unfortunately, the consumer daemons do not know how to parse TSDL either. In fact, only the metadata producers are able to provide the boundaries of the "parseable units" of metadata. The general idea of the fix is to accumulate metadata up to a point where a "parseable unit" boundary has been identified and send that content in one request to the relay daemon. Note that the solution described here only concerns the live mode. In other cases, the mechanisms described are simply bypassed. A "metadata bucket" is added to lttng_consumer_stream when it is created from a live channel. This bucket is filled until the consumption position reaches the "parseable unit" end position. 
A refresher about the handling of metadata in live mode
-------------------------------------------------------
Three "events" are of interest here and can cause metadata to be consumed more or less indirectly:
1) A metadata packet is closed, causing the metadata thread to wake up
2) The live timer expires
3) A data sub-buffer is closed, causing the data thread to wake up

1) The first case is simple and happens whether or not the tracing session is in live mode. Metadata is always consumed by the metadata thread in the same way. However, this scenario can be "caused" by (2) and (3). See [1]. A sub-buffer is "acquired" from the metadata ring-buffer and sent to the relay daemon as the payload of a "RELAYD_SEND_METADATA" command.

2) When the live timer expires [2], the 'check_stream' function is called on all data streams of the session. As its name clearly implies, this function is responsible for flushing all streams or sending a "live beacon" (called an "empty index" in the code) if there is no data to flush. Any flushed data will result in (3).

3) When a data sub-buffer is ready to be consumed, [1] is invoked by the data thread. This function acquires a sub-buffer and sends it to the relay daemon through the data connection. Then, an important synchronization step takes place: the index of the newly-sent packet will be sent through the control connection. The relay daemon waits for both the data packet and its matching index before making the new packet visible to live viewers.

Since a data packet could contain data that requires "newer" metadata to be decoded, the data thread flushes the metadata stream and enters a "waiting" phase to pause until all metadata present in the metadata ring buffer has been consumed [3]. At the end of this waiting phase, the data thread sends the data packet's index to the relay daemon, allowing the relayd to make it visible to its live clients.

How to identify a "parseable unit" boundary?
--------------------------------------------
In the case of the kernel domain, the kernel tracer produces the actual TSDL descriptions directly. The TSDL metadata is serialized to a metadata cache and is flushed "just in time" to the metadata ring-buffer when a "get next" operation is performed.

There is no way, from user space, to query whether or not the metadata cache of the kernel tracer is empty. Hence, a new RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK command was added to query whether or not the kernel tracer's metadata cache is empty when acquiring a sub-buffer. This allows the consumer daemon to identify a "coherent" position in the metadata stream that is safe to use as a "parseable unit" boundary.

As for the user space domain, since the session daemon is responsible for generating the TSDL representation of the metadata, there is no need to change the LTTng-ust APIs. The session daemon generates coherent units of metadata and adds them to its "registry" at once (protected by the registry's lock). It then flushes the contents to the consumer daemon and waits for that data to be consumed before proceeding further.

On the consumer daemon side, the metadata cache is filled with the newly-produced contents. This is done atomically with respect to accesses to the metadata cache, as all accesses happen through a dedicated metadata cache lock.

When the consumer's metadata polling thread is woken up, it will attempt to acquire (`get_next`) a sub-buffer from the metadata stream ring-buffer. If it fails, it will flush a sub-buffer's worth of metadata to the ring-buffer and attempt to acquire a sub-buffer again.

At this point, it is possible to determine if that sub-buffer is the last one of a parseable metadata unit: the cache must be empty and the ring-buffer must be empty following the consumption of this sub-buffer. When those conditions are met, the resulting metadata `stream_subbuffer` is tagged as being `coherent`.
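The two conditions above can be captured in a small, hypothetical helper; `stream_subbuffer` here is a simplified stand-in for the real structure:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the consumer's stream_subbuffer. */
struct stream_subbuffer {
	size_t size;
	bool coherent;
};

/* A sub-buffer closes a "parseable unit" only when both the tracer's
 * metadata cache and the ring-buffer are empty once it is consumed. */
static void finalize_metadata_subbuffer(struct stream_subbuffer *subbuf,
		bool cache_empty, bool ring_buffer_empty_after_consume)
{
	subbuf->coherent = cache_empty && ring_buffer_empty_after_consume;
}
```

Any sub-buffer acquired while either emptiness condition fails is buffered rather than sent, as described in the next section.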
Metadata bucket
---------------
A helper interface, metadata_bucket, is introduced as part of this fix. A metadata_bucket is `fill`ed with `stream_subbuffer`s, and is eventually `flushed` when it is filled by a `coherent` sub-buffer.

As older versions of LTTng-modules must remain supported, this new helper is not used when the RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK operation is not available. When the operation is available, the metadata stream's bucketization is enabled, causing a bucket to be created and the `consume` callback to be swapped.

The `consume` callback of the metadata streams is replaced by a new implementation when metadata bucketization is activated on the stream. This implementation returns the padded size of the consumed sub-buffers when they could be added to the bucket. When the bucket is flushed, the regular `mmap`-based consumption function is called with the bucket's contents.

Known drawbacks
===============
This implementation causes the consumer daemon to buffer the whole initial unit of metadata before sending it. In practice, this is not expected to be a problem since the largest metadata files we have seen in real use are a couple of megabytes in size.

Beyond the (temporary) memory use, this causes the metadata thread to block while this potentially large chunk of metadata is sent (rather than blocking while sending 4 kB at a time). The second point is just a consequence of existing shortcomings of the consumerd; slow IO should not affect other unrelated streams. The fundamental problem is that blocking IO is used, and we should switch to non-blocking communication if this becomes a problem (as is done in the relay daemon).

The first point is more problematic given the existing tracer APIs. If the tracer could provide the boundary of a "parseable unit" of metadata, we could send the header of the RELAYD_SEND_METADATA command with that size and send the various metadata packets as they are made available.
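A minimal sketch of the bucket's fill/flush behaviour, under the assumption of a fixed-size buffer and an injected flush callback (the real metadata_bucket differs in its details and grows dynamically):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h> /* ssize_t */

/* Hypothetical fixed-capacity bucket; the real helper is dynamic. */
struct metadata_bucket {
	char buf[65536];
	size_t used;
};

typedef ssize_t (*metadata_flush_cb)(const char *data, size_t len);

/* Buffer each sub-buffer; only flush the accumulated contents once a
 * coherent sub-buffer closes the parseable unit. Returns the consumed
 * sub-buffer's size on success, a negative value on error. */
static ssize_t metadata_bucket_fill(struct metadata_bucket *bucket,
		const char *data, size_t len, bool coherent,
		metadata_flush_cb flush)
{
	ssize_t ret;

	if (bucket->used + len > sizeof(bucket->buf)) {
		return -1; /* a real implementation would grow the buffer */
	}
	memcpy(bucket->buf + bucket->used, data, len);
	bucket->used += len;
	if (!coherent) {
		/* Buffered: report the sub-buffer as consumed. */
		return (ssize_t) len;
	}
	/* Coherent sub-buffer: flush the whole parseable unit at once. */
	ret = flush(bucket->buf, bucket->used);
	bucket->used = 0;
	return ret < 0 ? ret : (ssize_t) len;
}

/* Test helper: record how many bytes the last flush emitted. */
static size_t last_flush_len;
static ssize_t record_flush(const char *data, size_t len)
{
	(void) data;
	last_flush_len = len;
	return (ssize_t) len;
}
```

Note how the caller always sees the per-sub-buffer consumed size, while the relay daemon only ever receives whole parseable units through the flush callback.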
This would make no difference to the relay daemon as it is not blocking on that socket and will not make the metadata size change visible to the "live server" until it has all been received. This size can't be determined right now since it could exceed the total size of the "metadata" ring buffer. In other words, we can't wait for the production of metadata to complete before starting to consume. Finally, while implementing this fix, I also realized that the computation of the rotation position of the metadata streams is erroneous. The rotation code makes use of the ring-buffer's positions to determine the rotation position. However, since both user space and kernel domains make use of a "cache" behind the ring-buffer, that cached content must be taken into account when computing the metadata stream's rotation position. References ========== [1] https://github.com/lttng/lttng-tools/blob/d5ccf8fe0/src/common/consumer/consumer.c#L3433 [2] https://github.com/lttng/lttng-tools/blob/d5ccf8fe0/src/common/consumer/consumer-timer.c#L312 [3] https://github.com/lttng/lttng-tools/blob/d5ccf8fe0/src/common/consumer/consumer-stream.c#L492 Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com> Change-Id: I40ee07e5c344c72d9aae2b9b15dc36c00b21e5fa
consumerd: refactor: split read_subbuf into sub-operations The read_subbuf code paths intertwine domain-specific operations and metadata/data-specific logic which makes modifications error prone and introduces a fair amount of code duplication. lttng_consumer_read_subbuffer is effectively turned into a template method invoking overridable callbacks making most of the consumption logic domain and data/metadata agnostic. The goal is not to extensively clean-up that code path. A follow-up fix introduces metadata buffering logic which would not reasonably fit in the current scheme. This clean-up makes it easier to safely introduce those changes. No changes in behaviour are intended by this change. Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com> Change-Id: I9366f2e2a38018ca9b617b93ad9259340180c55d
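The template-method shape described above could look roughly like this; the operation names and structures are illustrative and do not match the lttng-tools callbacks exactly:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h> /* ssize_t */

struct stream;

struct subbuffer {
	size_t size;
};

/* Overridable sub-operations: a domain (kernel/ust) and stream type
 * (data/metadata) provide their own implementations. */
struct read_subbuffer_ops {
	int (*get_next_subbuffer)(struct stream *, struct subbuffer *);
	int (*pre_consume)(struct stream *, struct subbuffer *);        /* optional */
	ssize_t (*consume)(struct stream *, const struct subbuffer *);
	int (*post_consume)(struct stream *, const struct subbuffer *); /* optional */
	int (*put_next_subbuffer)(struct stream *, struct subbuffer *);
};

struct stream {
	struct read_subbuffer_ops ops;
	int consumed_count;
};

/* The "template method": the consumption skeleton is fixed, only the
 * individual steps vary per domain and stream type. */
static ssize_t read_subbuffer(struct stream *s)
{
	struct subbuffer subbuf = { 0 };
	ssize_t ret;

	if (s->ops.get_next_subbuffer(s, &subbuf)) {
		return -1;
	}
	if (s->ops.pre_consume && s->ops.pre_consume(s, &subbuf)) {
		goto error_put;
	}
	ret = s->ops.consume(s, &subbuf);
	if (ret < 0) {
		goto error_put;
	}
	if (s->ops.post_consume && s->ops.post_consume(s, &subbuf)) {
		goto error_put;
	}
	if (s->ops.put_next_subbuffer(s, &subbuf)) {
		return -1;
	}
	return ret;
error_put:
	s->ops.put_next_subbuffer(s, &subbuf);
	return -1;
}

/* Minimal fake implementations, for demonstration only. */
static int fake_get(struct stream *s, struct subbuffer *sb)
{
	(void) s;
	sb->size = 4096;
	return 0;
}

static ssize_t fake_consume(struct stream *s, const struct subbuffer *sb)
{
	s->consumed_count++;
	return (ssize_t) sb->size;
}

static int fake_put(struct stream *s, struct subbuffer *sb)
{
	(void) s;
	(void) sb;
	return 0;
}
```

With this shape, the follow-up metadata buffering fix only has to swap the `consume` callback of metadata streams instead of touching the skeleton.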
Move to kernel style SPDX license identifiers The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. See https://spdx.org/ids-how for details. Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Change-Id: I62e7038e191a061286abcef5550b58f5ee67149d Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
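For illustration only, assuming a GPL-2.0 licensed file, the change amounts to replacing the multi-line boilerplate at the top of a source file with a single identifier:

```c
/*
 * Before: a dozen or more lines of verbatim license text.
 * After, the kernel-style short form (illustrative example):
 */

/* SPDX-License-Identifier: GPL-2.0-only */

/* ... rest of the file ... */
```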
Fix: consumerd: NULL pointer dereference during metadata sync

The following crash was reported when short-lived applications are traced in a live session with per-pid buffering channels. From the original report:

```
Thread 1 (Thread 0x7f72b67fc700 (LWP 1912155)):
#0  0x00005650b3f6ccbd in commit_one_metadata_packet (stream=0x7f729c010bf0) at ust-consumer.c:2537
#1  0x00005650b3f6cf58 in lttng_ustconsumer_sync_metadata (ctx=0x5650b588ce60, metadata=0x7f729c010bf0) at ust-consumer.c:2608
#2  0x00005650b3f4dba3 in do_sync_metadata (metadata=0x7f729c010bf0, ctx=0x5650b588ce60) at consumer-stream.c:471
#3  0x00005650b3f4dd3c in consumer_stream_sync_metadata (ctx=0x5650b588ce60, session_id=0) at consumer-stream.c:548
#4  0x00005650b3f6de78 in lttng_ustconsumer_read_subbuffer (stream=0x7f729c0058e0, ctx=0x5650b588ce60) at ust-consumer.c:2917
#5  0x00005650b3f45196 in lttng_consumer_read_subbuffer (stream=0x7f729c0058e0, ctx=0x5650b588ce60) at consumer.c:3524
#6  0x00005650b3f42da7 in consumer_thread_data_poll (data=0x5650b588ce60) at consumer.c:2894
#7  0x00007f72bdc476db in start_thread (arg=0x7f72b67fc700) at pthread_create.c:463
#8  0x00007f72bd97088f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

The segfault happens on the access to 'stream->chan->metadata_cache->lock';
the chan value here is zero.
```

The problem is easily reproducible if a sleep(1) is added just after the call to lttng_ustconsumer_request_metadata(), before the metadata stream lock is re-acquired.

During the execution of the "request_metadata", an application can close. This will cause the session daemon to push any remaining metadata to the consumer daemon and to close the metadata channel. Closing the metadata channel closes the metadata stream's wait_fd, which is an internal pipe.

The closure of the metadata pipe is detected by the metadata_poll thread, which will ensure that all metadata has been consumed before issuing the deletion of the metadata stream and channel.
During the deletion, the channel's "stream" attribute and the stream's "chan" attribute are set to NULL as both are logically deleted and should no longer be used. Meanwhile, the thread executing commit_one_metadata_packet() re-acquires the metadata stream lock and trips on the now-NULL "chan" member.

The fix consists in checking if the metadata stream is logically deleted after its lock is re-acquired. It is correct for the sync_metadata operation to then complete successfully as the metadata is synced: the metadata poll thread guarantees this before deleting the stream/channel.

Since the metadata stream's lifetime is protected by its lock, there may be other sites that need such a check. The lock and deletion check could be combined into a single consumer_stream_lock() helper in follow-up fixes.

Reported-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
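The suggested combined helper could be sketched as follows; names are hypothetical, and the actual fix performs the lock acquisition and deletion check separately:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Simplified stand-in for the consumer stream. */
struct consumer_stream {
	pthread_mutex_t lock;
	bool logically_deleted; /* e.g. stream->chan was set to NULL */
};

/* Acquire the stream lock and validate the stream in one step.
 * Returns true if the stream was locked and is still valid; returns
 * false (with the lock released) if the stream was logically deleted
 * while the caller did not hold the lock. */
static bool consumer_stream_lock(struct consumer_stream *stream)
{
	pthread_mutex_lock(&stream->lock);
	if (stream->logically_deleted) {
		pthread_mutex_unlock(&stream->lock);
		return false;
	}
	return true;
}
```

A caller such as commit_one_metadata_packet() would then bail out cleanly when the helper returns false instead of dereferencing a NULL "chan" member.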
relayd: implement file and session rotation on top of trace chunks

Implement the file and session rotation functionality on top of the trace chunk API. This ensures that a relay_stream and lttng_index_file are always explicitly associated to a trace chunk and hold a reference to it as long as their underlying files are contained within a given trace chunk.

A number of relay_stream specific functions are moved to stream.c as "methods" of the relay_stream interface in order to make use of internal relay_stream helpers. As part of this clean-up/move of the relay_stream code, raw payload buffer handling has been converted to use the lttng_buffer_view interface, which provides implicit bounds checking of the payload buffers.

The stream rotation command has been modified to reference a "new chunk id", which is the ID of the trace chunk "into" which streams should rotate. The command has also been modified to apply to a set of streams; this is done in order to limit the number of commands on the control socket. Conversely, all path names have been removed from the command's payload.

The index file implementation now acquires a reference to the trace chunk from which it is created. This affects the consumer daemon as this code is shared with the relay daemon. This ensures that a chunk is not released (and its close command executed, if any) before all file descriptors related to it have been closed. Respecting this guarantee is very important as the upcoming fd-cache will remove the guarantee that an "fd" to a given file is always held open. Moreover, close commands can rename a trace chunk's folders, which would cause files to be created in the wrong folder if they were not properly created through the trace chunk.

Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Create stream files relative to a stream's current trace chunk

Create stream (and metadata) files relative to a session's current trace chunk using the lttng_trace_chunk_open/unlink[...] functions in the consumer daemons.

Four new commands are added to the sessiond <-> consumerd protocol:

- CREATE_TRACE_CHUNK
  Command parameters:
  - relayd_id: uint64_t
    Unique id of the session's associated relay daemon connection
  - override_name: optional char[]
    Overridden chunk name. This field is not used by the consumer daemon; it is forwarded to the relay daemon in order to set the name of a trace chunk's directory when it should not follow the `<ts begin>-<ts end>-<index>` form used by trace archives (i.e. as produced by session rotations). This is used to preserve the existing format of snapshot output directory names.
  - session_id: uint64_t
    Unique id of the session (from the sessiond's perspective) to which this trace chunk belongs.
  - chunk_id: uint64_t
    Unique id of the session's trace chunk.
  - credentials: pair of uint32_t uid/gid
    Credentials the consumer daemon should use in order to create files within the trace chunk.

The session daemon maintains the current lttng_trace_chunk of an ltt_session. When a session has an output (`output_traces` == 1), an lttng_trace_chunk is created. In local tracing modes, the current trace chunk of a session, on the session daemon's end, holds the ownership of the chunk's output directory.

The CREATE_TRACE_CHUNK command is used to replicate the session daemon's current trace chunk in the consumer daemon. This representation of the current trace chunk has a different role: it is created in "user" mode. Essentially, the trace chunk's location is opaque to the consumer daemon; it receives a directory file descriptor from which a number of stream files will be created. The trace chunk registry, as used by the consumer daemon, implicitly owns the trace chunks on behalf of the session daemon.
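As a hedged sketch only, the CREATE_TRACE_CHUNK parameters listed above could map to a wire structure along these lines; this is not the actual lttng-tools command layout, and the variable-length override name is represented here only by its length field:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical wire layout for the CREATE_TRACE_CHUNK payload, derived
 * from the parameter list in the commit message; field names and order
 * are illustrative. */
struct create_trace_chunk_cmd {
	uint64_t relayd_id;         /* id of the relay daemon connection */
	uint64_t session_id;        /* sessiond-side unique session id */
	uint64_t chunk_id;          /* unique id of the session's chunk */
	uint32_t credentials_uid;   /* uid used to create chunk files */
	uint32_t credentials_gid;   /* gid used to create chunk files */
	uint32_t override_name_len; /* 0 when no name override is present */
	/* followed by override_name_len bytes of overridden chunk name */
} __attribute__((packed));
```

Packing the structure keeps the on-wire size deterministic (36 bytes here) regardless of the host's alignment rules.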
This is only needed in the consumer since the consumer has no notion of a session beyond session IDs being used to identify other objects.

When a channel is created, its session_id and initial chunk_id are provided. This allows the consumer daemon to retrieve the session's current trace chunk and associate it with the newly-created channel. The channel holds a reference to its current trace chunk. Streams created from a channel also hold a reference to their current trace chunk, as retrieved from their "parent" channel.

The life time of trace chunks in the consumer daemon is cooperatively managed with the session daemon: the session daemon drives it through the LTTNG_CONSUMER_CREATE_TRACE_CHUNK and LTTNG_CONSUMER_CLOSE_TRACE_CHUNK commands.

- CLOSE_TRACE_CHUNK
  [... TODO ...]
  This command is used to release the global reference to a given trace chunk in the consumer daemon. Releasing the consumer daemon's global reference to the trace chunk leaves only the streams to hold references until the moment when they are either closed or they switch over to another chunk in the event of a session rotation.

- TRACE_CHUNK_EXISTS
  [... TODO ...]

- ADD_TRACE_CHUNK_CLOSE_COMMAND
  [... TODO ...]

This commit changes a lot of code since it essentially changes how files and directories are created. A number of commands no longer need to specify a `trace_archive_id` since the CREATE_TRACE_CHUNK and CLOSE_TRACE_CHUNK commands allow the consumer daemon to keep track of the current trace chunk of a channel at any given time.

Creation and ownership of channel sub-directories
---
The path expressed in consumer channel objects is now relative to the current trace chunk rather than being absolute. For example, the `pathname` of a consumer channel is now of the form `ust/1000/64-bit` rather than containing the full output path `/home/me/lttng-traces/session-[...]/ust/1000/64-bit/`. The subdirectory of a channel (relative to a trace chunk, e.g.
`ust/1000/64-bit`) is lazily created when a stream's output files are created. To do so, the `lttng_consumer_channel` now has a `most_recent_chunk_id` attribute. When a stream creates its output files (i.e. at the beginning of a session, or during a session rotation), the stream's current trace chunk `id` is compared to the channel's `most_recent_chunk_id`. If it is determined that the channel is entering a new trace chunk, its channel subdirectory is created relative to the stream's chunk.

Since this new state is within the `lttng_consumer_channel`, the channel lock must be held on code paths that may result in the creation of a new set of output files for a given stream.

Note that as of this commit, there is now a clear ownership boundary between directories, owned by the session daemon through its trace chunk, and files, owned by the consumer daemon.

Down-scoping of channel credentials
---
Since files are now created relative to their stream's current trace chunk (which has credentials set), fewer sites need access to the channel's credentials. The only reason credentials are kept as part of the consumer channel structure is the need to open and unlink UST shared memory mappings. Since the credentials must only be used for this purpose, they are now stored as an `LTTNG_OPTIONAL` field, buffer_credentials, that is only set for UST channels. Stream files should never need those credentials to be created.

The following sessiond <-> consumerd commands have been removed:
- LTTNG_CONSUMER_ROTATE_RENAME
- LTTNG_CONSUMER_CHECK_ROTATION_PENDING_LOCAL
- LTTNG_CONSUMER_CHECK_ROTATION_PENDING_RELAY
- LTTNG_CONSUMER_MKDIR

Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
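The chunk-transition check described above can be sketched as follows, using illustrative names and a counter standing in for the actual directory creation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for lttng_consumer_channel; `mkdir_count` replaces
 * the actual creation of the channel subdirectory (e.g. `ust/1000/64-bit`)
 * relative to the trace chunk. */
struct consumer_channel {
	uint64_t most_recent_chunk_id;
	bool has_chunk;  /* whether most_recent_chunk_id is meaningful yet */
	int mkdir_count;
};

/* Called with the channel lock held, when a stream creates its output
 * files: the subdirectory is created only on a chunk transition. */
static void channel_enter_chunk(struct consumer_channel *chan,
		uint64_t stream_chunk_id)
{
	if (!chan->has_chunk || chan->most_recent_chunk_id != stream_chunk_id) {
		chan->mkdir_count++; /* create subdir relative to the chunk */
		chan->most_recent_chunk_id = stream_chunk_id;
		chan->has_chunk = true;
	}
}
```

This is why the channel lock must protect the paths that create stream output files: two streams of the same channel racing through this check could otherwise both attempt the directory creation.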