Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004

This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework, then study its strengths and weaknesses.

Second, we will describe a framework where the event requests are handled by
the main window with fine granularity, and discuss its advantages and
drawbacks compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window. They
also specify a (not so strict) maximum number of events to be delivered. In
fact, the number of events to read only gives a stop point: from there, only
events with the same timestamp will still be delivered.

Viewers register hooks themselves in the traceset context. When read requests
are merged in the main window, all hooks registered by viewers will be called
for the union of all the read requests, because the main window has no control
over hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not read the same time interval twice:
that would be very hard for the viewers to filter out. To achieve this, time
requests are sorted by start time, and process_traceset is called for each
time request. We keep the last event time between reads: if the start time of
the next read is lower than the time already reached, we continue reading from
the current position.

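The merging of sorted time requests described above can be sketched as follows.
This is a minimal, self-contained illustration: TimeRequest, service_requests
and the double-valued timestamps are hypothetical stand-ins invented here, not
the LttTime and process_traceset types of the real API.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified time request: a time is a plain double and
 * "reading" just accumulates the span actually traversed. */
typedef struct { double start, end; } TimeRequest;

static int cmp_start(const void *a, const void *b)
{
  double sa = ((const TimeRequest *)a)->start;
  double sb = ((const TimeRequest *)b)->start;
  return (sa > sb) - (sa < sb);
}

/* Returns the total time span actually read, never reading the same
 * interval twice: requests are sorted by start time and reading resumes
 * from the last time reached when intervals overlap. */
double service_requests(TimeRequest *reqs, int n)
{
  double last_read = -1.0;  /* time reached by the previous read */
  double total = 0.0;
  qsort(reqs, n, sizeof(TimeRequest), cmp_start);
  for (int i = 0; i < n; i++) {
    double from = reqs[i].start > last_read ? reqs[i].start : last_read;
    if (reqs[i].end > from) {        /* something left to read */
      total += reqs[i].end - from;   /* stands in for process_traceset */
      last_read = reqs[i].end;
    }
  }
  return total;
}
```

Overlapping requests [0,10] and [5,15] thus traverse [0,15] exactly once.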
We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we cannot do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer a request
based on the number of events.

The viewers have to filter the events delivered by traceset reading, because
the read may have been requested by another viewer for a totally (or partially)
different time interval.


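The filtering burden this puts on each viewer can be sketched with a hedged,
self-contained example: EvTime, ViewerWindow and viewer_event_hook are
hypothetical stand-ins, not the real LttTime or hook signature, but the check
is the one every viewer's event hook must perform in this framework.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins: in the real API the hook receives a call_data
 * containing the current event and its LttTime timestamp. */
typedef struct { long sec, nsec; } EvTime;
typedef struct { EvTime start, end; } ViewerWindow;

static int time_cmp(EvTime a, EvTime b)
{
  if (a.sec != b.sec) return a.sec < b.sec ? -1 : 1;
  if (a.nsec != b.nsec) return a.nsec < b.nsec ? -1 : 1;
  return 0;
}

/* Returns true when the delivered event belongs to this viewer's own
 * time window; events delivered for another viewer's request are
 * simply dropped by the hook. */
bool viewer_event_hook(ViewerWindow *w, EvTime event_time)
{
  return time_cmp(event_time, w->start) >= 0 &&
         time_cmp(event_time, w->end) <= 0;
}
```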
Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct
call to traceset_middle for a specific number of events will deliver _at
least_ that quantity of events, plus the ones that have the same timestamp as
the last one.

- Border effects

Viewers' writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the
information they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer's writer will have to take into account all the border effects
caused by the interaction with other modules. This means that even if a viewer
works well alone or with another viewer, it is possible that new bugs arise
when a new viewer comes around. So, even if a perfect testbench works well for
a viewer, it does not confirm that no new bug will arise when another viewer
is loaded at the same moment asking for different time intervals.


- Duplication of work

Time-based filters and event counters will have to be implemented on the
viewer's side, which duplicates functionality that would normally be expected
from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to be as close as possible to the raw
data, making them interact with a lower-level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for different
time intervals, the detailed events list will have to filter out all the
events delivered originally for the control flow view. This case can occur
quite often.


Strengths

- Simple concatenation of time intervals at the main window level.

Having the opportunity of delivering more events than necessary to the viewers
means that we can concatenate time intervals and numbers of events requested
fairly easily, though it is hard to determine whether some specific cases will
behave incorrectly, in-depth testing being impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API for registering hooks,
removing a layer of encapsulation.




2. (Proposed) Strict boundaries event reading

The idea behind this method is to provide the viewers exactly the events they
requested, no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.

It also means that the lttvwindow API will have to deal with viewers' hooks.
Viewers will not be allowed to add them directly to the context. They will
give them to the lttvwindow API, along with the time interval or the position
and number of events. The lttvwindow API will take care of adding and removing
hooks for the different time intervals requested. That means that hook
insertion and removal will be done between each traceset processing, based on
the time intervals and event positions related to each hook. We must therefore
provide a simple interface for passing hooks between the viewers and the main
window, making them easier to manage from the main window. A modification to
the LttvHooks type solves this problem.


Architecture

Added to the lttvwindow API :


void lttvwindow_events_request
               ( MainWindow *main_win,
                 EventsRequest *events_request);


Internal functions :

- lttvwindow_process_pending_requests


Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate them. Then, the main window will add
them to and remove them from the context to deliver exactly the events
requested by each viewer through process traceset.

- lttvwindow_events_request

It adds an EventsRequest struct to the array of pending time requests and
registers a pending request for the next g_idle if none is registered. The
viewer can access this structure during the read as its hook_data. Only the
stop_flag can be changed by the viewer through the event hooks.

typedef struct _EventsRequest {
  gpointer viewer_data;
  LttTime start_time;         /* Unset : { 0, 0 } */
  LttvTracesetContextPosition start_position; /* Unset : num_traces = 0 */
  gboolean stop_flag;         /* Continue : FALSE, Stop : TRUE */
  LttTime end_time;           /* Unset : { 0, 0 } */
  guint num_events;           /* Unset : G_MAXUINT */
  LttvTracesetContextPosition end_position; /* Unset : num_traces = 0 */
  LttvHooks *before_traceset; /* Unset : NULL */
  LttvHooks *before_trace;    /* Unset : NULL */
  LttvHooks *before_tracefile;/* Unset : NULL */
  LttvHooks *event;           /* Unset : NULL */
  LttvHooksById *event_by_id; /* Unset : NULL */
  LttvHooks *after_tracefile; /* Unset : NULL */
  LttvHooks *after_trace;     /* Unset : NULL */
  LttvHooks *after_traceset;  /* Unset : NULL */
} EventsRequest;

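The "Unset" conventions in the struct comments can be illustrated with a
hedged sketch. MiniEventsRequest and events_request_init are hypothetical
stand-ins invented here: only the scalar fields are kept, with simplified
types instead of LttTime and G_MAXUINT, so the example stays self-contained.

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/* Hypothetical reduced EventsRequest: field names follow the struct
 * above, types are simplified for illustration. */
typedef struct {
  long start_sec, start_nsec;  /* start_time */
  long end_sec, end_nsec;      /* end_time   */
  int stop_flag;               /* stop servicing when non-zero */
  unsigned num_events;         /* UINT_MAX stands in for G_MAXUINT */
} MiniEventsRequest;

/* Mirrors the "Unset" conventions listed in the struct comments:
 * zeroed times, stop_flag FALSE, num_events at its maximum. */
void events_request_init(MiniEventsRequest *req)
{
  memset(req, 0, sizeof(*req));
  req->num_events = UINT_MAX;
}
```

A viewer asking for a time interval would then set only start_time, end_time
and its event hooks before handing the request to lttvwindow_events_request.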

- lttvwindow_process_pending_requests

This internal function gets called by g_idle and takes care of the pending
requests. It is responsible for the concatenation of time intervals and
position requests. It organizes the process traceset calls with the following
algorithm. Here is a detailed description of the way it works :


- Events Requests Servicing Algorithm

Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests


While list_in !empty and list_out !empty
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with the lowest start time in list_out
            (ltime)
      1.1.2 Find all position requests with the lowest position in list_out
            (lpos)
      1.1.3 If lpos.start time < ltime
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.start time >= ltime)
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If first request in list_in is a time request
        1.2.1.1 Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        1.2.2.1 Seek to that position
    1.3 Call begin for all list_in members
      (1.3.1 begin hooks called)
      (1.3.2 middle hooks added)
  2. Else, list_in is not empty, we continue a read
    2.1 For each req of list_out
      - if req.start time == current context time
        - Add to list_in, remove from list_out
        - Call begin
      - if req.start position == current position
        - Add to list_in, remove from list_out
        - Call begin

  3. Find end criteria
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out
      3.3.3 Use lowest of both as end position

  4. Call process traceset middle
    4.1 Call process traceset middle (Use end criterion found in 3)
      * note : end criterion can also be viewer's hook returning TRUE
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Call end for req
        - remove req from list_in
    5.1 For each req in list_in
      - req.num -= count
      - if req.num == 0
        - Call end for req
        - remove req from list_in
      - if current context time > req.end time
        - Call end for req
        - remove req from list_in
      - if req.end pos == current pos
        - Call end for req
        - remove req from list_in
      - if req.stop_flag == TRUE
        - Call end for req
        - remove req from list_in



Notes :
End criteria for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
Current time >= End time
Event count > Number of events
Current position >= End position
Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.
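The end criteria above can be captured in a small predicate. This is a
hedged, simplified sketch: MiddleState and middle_must_return are invented
names, and plain integers replace LttTime and LttvTracesetContextPosition;
only the comparison directions come from the notes above.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical snapshot of the state process traceset middle checks
 * between events. */
typedef struct {
  long cur_time, end_time;
  unsigned long count, num_events;
  long cur_pos, end_pos;
  bool hook_returned_true;
} MiddleState;

/* Returns true when process traceset middle must return: note the >= on
 * time and position but the strict > on the event count, matching the
 * criteria listed in the notes. */
bool middle_must_return(const MiddleState *s)
{
  return s->cur_time >= s->end_time
      || s->count > s->num_events
      || s->cur_pos >= s->end_pos
      || s->hook_returned_true;
}
```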



Weaknesses

- None (nearly?) :)


Strengths

- Removes the need for filtering of information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
reading.




- Revised Events Requests Servicing Algorithm (v2)

typedef guint LttvEventsRequestPrio;

typedef struct _EventsRequest {
  gpointer viewer_data;
  gboolean servicing;         /* service in progress: TRUE */
  LttvEventsRequestPrio prio; /* Ev. Req. priority */
  LttTime start_time;         /* Unset : { 0, 0 } */
  LttvTracesetContextPosition *start_position; /* Unset : num_traces = 0 */
  gboolean stop_flag;         /* Continue : FALSE, Stop : TRUE */
  LttTime end_time;           /* Unset : { 0, 0 } */
  guint num_events;           /* Unset : G_MAXUINT */
  LttvTracesetContextPosition *end_position; /* Unset : num_traces = 0 */
  LttvHooks *before_traceset; /* Unset : NULL */
  LttvHooks *before_trace;    /* Unset : NULL */
  LttvHooks *before_tracefile;/* Unset : NULL */
  LttvHooks *event;           /* Unset : NULL */
  LttvHooksById *event_by_id; /* Unset : NULL */
  LttvHooks *after_tracefile; /* Unset : NULL */
  LttvHooks *after_trace;     /* Unset : NULL */
  LttvHooks *after_traceset;  /* Unset : NULL */
  LttvHooks *before_chunk;    /* Unset : NULL */
  LttvHooks *after_chunk;     /* Unset : NULL */
} EventsRequest;

The reads are split into chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add events requests to or
remove them from the events requests list. If that happens, we want to start
the algorithm over from the beginning.

Two levels of priority exist : high priority and low priority. High prio
requests are serviced first, even if lower priority requests have a lower
start time or position.

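The priority rule can be sketched as a selection function over list_out.
This is a hypothetical, simplified illustration: Req and pick_next are
invented names, a double stands in for LttTime, and a larger prio value
means higher priority, matching the comparisons in steps 1.1.3 and 1.1.4.

```c
#include <assert.h>

/* Hypothetical reduced request: only the fields used by the selection
 * in step 1.1 (priority, start time). */
typedef struct { unsigned prio; double start; } Req;

/* Picks the request to be serviced next from list_out: highest priority
 * first, lowest start time used only as a tie-breaker, as in steps
 * 1.1.1 to 1.1.4. Returns its index, or -1 if the list is empty. */
int pick_next(const Req *list_out, int n)
{
  int best = -1;
  for (int i = 0; i < n; i++) {
    if (best < 0
        || list_out[i].prio > list_out[best].prio
        || (list_out[i].prio == list_out[best].prio
            && list_out[i].start < list_out[best].start))
      best = i;
  }
  return best;
}
```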

Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests

A. While list_in !empty and list_out !empty and !GTK Event pending
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with the highest priority and lowest start
            time in list_out (ltime)
      1.1.2 Find all position requests with the highest priority and lowest
            position in list_out (lpos)
      1.1.3 If lpos.prio > ltime.prio
            || (lpos.prio == ltime.prio && lpos.start time < ltime)
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.prio < ltime.prio
            || (lpos.prio == ltime.prio && lpos.start time >= ltime))
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If first request in list_in is a time request
        - If first req in list_in start time != current time
          - Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        - If first req in list_in pos != current pos
          - If the position is the same as the saved state, restore state
          - Else, seek to that position
    1.3 Add hooks and call begin for all list_in members
      1.3.1 If !servicing
        - begin hooks called
        - servicing = TRUE
      1.3.2 call before_chunk
      1.3.3 events hooks added
  2. Else, list_in is not empty, we continue a read
    2.1 For each req of list_out
      - if req.start time == current context time
        - Add to list_in, remove from list_out
        - If !servicing
          - Call begin
          - servicing = TRUE
        - Call before_chunk
        - events hooks added
      - if req.start position == current position
        - Add to list_in, remove from list_out
        - If !servicing
          - Call begin
          - servicing = TRUE
        - Call before_chunk
        - events hooks added

  3. Find end criteria
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out (>= current time*)
            * To eliminate lower prio requests
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
      3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out (>= current position)
      3.3.3 Use lowest of both as end position

  4. Call process traceset middle
    4.1 Call process traceset middle (Use end criterion found in 3)
      * note : end criterion can also be viewer's hook returning TRUE
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
    5.1 For each req in list_in
      - req.num -= count
      - if req.num == 0
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if current context time > req.end time
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if req.end pos == current pos
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if req.stop_flag == TRUE
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if there exists an events request in list_out that has
        higher priority and time != current time
        - Use current position as start position for req
        - Remove start time from req
        - Call after_chunk for req
        - Remove event hooks for req
        - Put req back in list_out, remove from list_in
        - Save current state into saved_state.

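The chunked outer loop of part A can be sketched as follows. This is a
hedged skeleton: service_loop and the stubs are invented names, and the
injected event_pending callback stands in for the real gtk_events_pending()
check so the sketch stays self-contained and testable.

```c
#include <assert.h>
#include <stdbool.h>

/* Test stubs standing in for GTK and for one pass of seek + hooks +
 * process traceset middle. g_pending_after < 0 means "never pending". */
static int g_pending_after = -1;
static int g_calls = 0;
static bool stub_pending(void)
{
  return g_pending_after >= 0 && g_calls >= g_pending_after;
}
static void stub_chunk(void) { g_calls++; }

/* Skeleton of the chunked servicing loop: each iteration services one
 * chunk, and the loop yields as soon as a GTK event is pending. Returns
 * the number of chunks serviced before yielding or finishing; the real
 * caller would re-register a g_idle if work remains. */
int service_loop(int chunks_remaining,
                 bool (*event_pending)(void),
                 void (*service_one_chunk)(void))
{
  int done = 0;
  while (chunks_remaining > 0 && !event_pending()) {
    service_one_chunk();
    chunks_remaining--;
    done++;
  }
  return done;
}
```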
B. When interrupted
  1. For each request in list_in
    1.1 Use current position as start position
    1.2 Remove start time
    1.3 Call after_chunk
    1.4 Remove event hooks
    1.5 Put it back in list_out
  2. Save current state into saved_state.
    2.1 Free old saved state.
    2.2 Save current state.



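Steps 1.1 and 1.2 of part B, converting interrupted time requests to
position requests, can be sketched like this. PausedReq and
interrupt_requests are hypothetical stand-ins for the real EventsRequest
and LttvTracesetContextPosition.

```c
#include <assert.h>

/* Hypothetical reduced in-flight request. */
typedef struct {
  int has_start_time;   /* non-zero while the request is time-based */
  long start_pos;       /* position to resume from */
} PausedReq;

/* Converts the n in-flight requests so the next servicing pass resumes
 * them from cur_pos: the start time is dropped and the current position
 * becomes the start position. */
void interrupt_requests(PausedReq *list_in, int n, long cur_pos)
{
  for (int i = 0; i < n; i++) {
    list_in[i].has_start_time = 0;   /* 1.2 Remove start time */
    list_in[i].start_pos = cur_pos;  /* 1.1 Use current position */
  }
}
```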
Notes :
End criteria for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
Current time >= End time
Event count > Number of events
Current position >= End position
Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.

We only keep one saved state in memory. If, for example, a low priority
servicing is interrupted and a high priority one is serviced, the low priority
one will then use the saved state to resume where it was instead of seeking to
the time. In the very specific case where a low priority servicing is
interrupted, and then a high priority servicing on top of it is also
interrupted, the low priority one will lose its state and will have to seek
back. This should not occur often. The solution would be to save one state per
priority.

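The per-priority fix suggested above could look like the following hedged
sketch: SavedState, save_state and restore_state are invented names, and a
long position stands in for the real saved context state.

```c
#include <assert.h>

#define N_PRIO 2              /* the document uses two levels of priority */

typedef struct { long pos; int valid; } SavedState;
static SavedState saved_states[N_PRIO];

/* Records the interrupted position for one priority level, overwriting
 * only that level's slot. */
void save_state(unsigned prio, long pos)
{
  saved_states[prio].pos = pos;
  saved_states[prio].valid = 1;
}

/* Returns 1 and fills *pos if a state was saved for this priority; the
 * state is consumed so a stale one is never reused. */
int restore_state(unsigned prio, long *pos)
{
  if (!saved_states[prio].valid) return 0;
  *pos = saved_states[prio].pos;
  saved_states[prio].valid = 0;
  return 1;
}
```

With one slot per priority, a high priority interruption no longer destroys
the low priority saved state.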