Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004


This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.

Next, we will describe a framework where the events requests are handled by the
main window with fine granularity. We will then discuss its advantages and
disadvantages compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window. They
also specify a (soft) maximum number of events to be delivered. In fact, the
number of events to read only gives a stop point, from which only events with
the same timestamp will be delivered.

Viewers register hooks themselves in the traceset context. When merging read
requests in the main window, all hooks registered by viewers will be called for
the union of all the read requests, because the main window has no control over
hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not duplicate a read of the same time
interval : that would be very hard for the viewers to filter. So, in order to
achieve this, time requests are sorted by start time, and process_traceset is
called for each time request. We keep the last event time between each read :
if the start time of the next read is lower than the time reached, we continue
the reading from the current position.
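
As an illustration, the merging described above boils down to something like the
following sketch. All helper names (sort_requests_by_start_time, seek_to_time,
read_until) are hypothetical, and ltt_time_compare / ltt_time_zero are assumed
to be the usual LttTime helpers; only the control flow is taken from the
description above.

/* Hypothetical sketch of the current merging strategy : requests sorted by
 * start time, served in one sweep, seeking only when a gap is reached. */

typedef struct _TimeRequest {
  LttTime start_time;
  LttTime end_time;
} TimeRequest;

static void service_time_requests(LttvTracesetContext *tsc,
                                  TimeRequest *reqs, guint nb_reqs)
{
  guint i;
  LttTime time_reached = ltt_time_zero;

  sort_requests_by_start_time(reqs, nb_reqs);          /* hypothetical */

  for(i = 0; i < nb_reqs; i++) {
    /* Seek only if the next request starts after the point already
     * reached; otherwise continue reading from the current position. */
    if(ltt_time_compare(reqs[i].start_time, time_reached) > 0)
      seek_to_time(tsc, reqs[i].start_time);           /* hypothetical */

    time_reached = read_until(tsc, reqs[i].end_time);  /* hypothetical */
  }
}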

We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we can't do it efficiently without interacting
very closely with process_traceset, we always read the specified number of
events starting from the current position when we answer a request based on the
number of events.

The viewers have to filter the events delivered by traceset reading, because the
reading may have been requested by another viewer for a totally (or partially)
different time interval.
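
Concretely, each viewer currently ends up with an event hook shaped like the
sketch below, dropping events outside its own interval. The timestamp accessor
is hypothetical; the hook signature and the ltt_time_compare helper are assumed
from the rest of lttv.

/* Sketch of the per-event filtering every viewer has to do today.
 * get_event_time() is a hypothetical stand-in for however the viewer
 * extracts the timestamp from call_data. */

typedef struct _ViewerData {
  LttTime interval_start;
  LttTime interval_end;
} ViewerData;

gboolean viewer_event_hook(void *hook_data, void *call_data)
{
  ViewerData *vd = (ViewerData *)hook_data;
  LttTime t = get_event_time(call_data);               /* hypothetical */

  /* Skip events that were requested by some other viewer. */
  if(ltt_time_compare(t, vd->interval_start) < 0
     || ltt_time_compare(t, vd->interval_end) > 0)
    return FALSE;

  /* ... process the event for this viewer ... */
  return FALSE;
}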


Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct call
to traceset_middle for a specific number of events will deliver _at least_ that
quantity of events, plus the ones that have the same timestamp as the last one.

- Border effects

Viewer writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the information
they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer's writer will have to take into account all the border effects caused
by the interaction with other modules. This means that even if a viewer works
well alone or with another viewer, it is possible that new bugs arise when a new
viewer comes around. So, even if a perfect testbench works well for a viewer, it
does not confirm that no new bug will arise when another viewer is loaded at the
same moment asking for different time intervals.


- Duplication of work

Time based filters and counters of events will have to be implemented on the
viewer's side, which is a duplication of the functionality that would normally
be expected from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to stay as close as possible to the raw
data, making them interact with a lower level library that gives them a data
input that they can only control by further filtering is not appropriate. We
should expect some reluctance from them about using this API because of this
lack of control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, asking both for different
time intervals, the detailed events list will have to filter all the events
delivered originally to the control flow view. This case can occur quite often.


Strengths

- Simple concatenation of time intervals at the main window level.

Having the opportunity of delivering more events than necessary to the viewers
means that we can concatenate time intervals and numbers of events requested
fairly easily, although it is hard to determine whether some specific cases will
be handled incorrectly, since in depth testing is impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API for registering hooks, removing
a layer of encapsulation.




2. (Proposed) Strict boundaries event reading

The idea behind this method is to provide to the viewers exactly the events they
requested, no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.

It also means that the lttvwindow API will have to deal with the viewers' hooks.
Viewers will not be allowed to add hooks directly to the context. They will give
them to the lttvwindow API, along with the time interval or the position and
number of events. The lttvwindow API will have to take care of adding and
removing hooks for the different time intervals requested. That means that hook
insertion and removal will be done between each traceset processing, based on
the time intervals and event positions related to each hook. We must therefore
provide a simple interface for passing hooks between the viewers and the main
window, making them easier to manage from the main window. A modification to the
LttvHooks type solves this problem.


Architecture

Added to the lttvwindow API :


void lttvwindow_events_request
                     ( MainWindow *main_win,
                       EventsRequest *events_request);


Internal functions :

- lttvwindow_process_pending_requests



Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate them. Then, the main window will add
them to and remove them from the context to deliver exactly the events requested
by each viewer through process traceset.
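
As a small illustration, a viewer could build the hook list it hands to the main
window roughly as follows. This sketch assumes the prioritized lttv_hooks_add()
variant and the LTTV_PRIO_DEFAULT value described in hook_prio.txt; my_event_hook
and viewer_private_data are placeholders for the viewer's own callback and data.

/* Sketch : build a hook list to hand to the main window instead of
 * registering the hook directly in the traceset context. */

LttvHooks *viewer_build_event_hooks(gpointer viewer_private_data)
{
  LttvHooks *event_hooks = lttv_hooks_new();

  /* my_event_hook : the viewer's per-event callback, with the usual
   * LttvHook signature gboolean hook(void *hook_data, void *call_data). */
  lttv_hooks_add(event_hooks, my_event_hook, viewer_private_data,
                 LTTV_PRIO_DEFAULT);

  return event_hooks;
}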


- lttvwindow_events_request

It adds an EventsRequest struct to the array of pending time requests and
registers a pending request for the next g_idle if none is registered. The
viewer can access this structure during the read as its hook_data. Only the
stop_flag can be changed by the viewer through the event hooks.

typedef struct _EventsRequest {
  gpointer viewer_data;
  LttTime start_time;                          /* Unset : { 0, 0 } */
  LttvTracesetContextPosition start_position;  /* Unset : num_traces = 0 */
  gboolean stop_flag;                          /* Set to TRUE by the viewer to stop */
  LttTime end_time;                            /* Unset : { 0, 0 } */
  guint num_events;                            /* Unset : G_MAXUINT */
  LttvTracesetContextPosition end_position;    /* Unset : num_traces = 0 */
  LttvHooks *before_traceset;                  /* Unset : NULL */
  LttvHooks *before_trace;                     /* Unset : NULL */
  LttvHooks *before_tracefile;                 /* Unset : NULL */
  LttvHooks *event;                            /* Unset : NULL */
  LttvHooksById *event_by_id;                  /* Unset : NULL */
  LttvHooks *after_tracefile;                  /* Unset : NULL */
  LttvHooks *after_trace;                      /* Unset : NULL */
  LttvHooks *after_traceset;                   /* Unset : NULL */
} EventsRequest;

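For illustration, a viewer asking for the first 1000 events after a given start
time could fill the structure along these lines, leaving the other boundaries at
their unset values. This is only a sketch : my_event_hook is a placeholder, the
LTTV_PRIO_DEFAULT priority is an assumption, and the hook_data is set to the
request itself, as the description above suggests.

/* Sketch : request roughly 1000 events starting at start_time.  The other
 * boundaries keep the unset values documented above (g_new0 zeroes them;
 * only num_events, whose unset value is G_MAXUINT, is overridden here). */

void viewer_request_events(MainWindow *main_win, LttTime start_time)
{
  EventsRequest *req = g_new0(EventsRequest, 1);

  req->viewer_data = NULL;              /* or the viewer's private data */
  req->start_time  = start_time;
  req->stop_flag   = FALSE;
  req->num_events  = 1000;              /* soft limit, see section 1 */
  req->event       = lttv_hooks_new();

  /* The request itself is passed as hook_data, so the event hook can read
   * and, if needed, set req->stop_flag during the read. */
  lttv_hooks_add(req->event, my_event_hook, req, LTTV_PRIO_DEFAULT);

  lttvwindow_events_request(main_win, req);
}
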
- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time interval and position
requests. It organizes the process traceset calls with the following algorithm.
Here is a detailed description of the way it works :


- Events Requests Servicing Algorithm

Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests


While list_in !empty or list_out !empty
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with the lowest start time in list_out
            (ltime)
      1.1.2 Find all position requests with the lowest position in list_out
            (lpos)
      1.1.3 If lpos.start time < ltime
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.start time >= ltime)
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If first request in list_in is a time request
        1.2.1.1 Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        1.2.2.1 Seek to that position
    1.3 Call begin for all list_in members
      (1.3.1 begin hooks called)
      (1.3.2 middle hooks added)
  2. Else, list_in is not empty, we continue a read
    2.1 For each req of list_out
      - if req.start time == current context time
        - Add to list_in, remove from list_out
        - Call begin
      - if req.start position == current position
        - Add to list_in, remove from list_out
        - Call begin


  3. Find end criterions
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out
      3.3.3 Use lowest of both as end position

  4. Call process traceset middle
    4.1 Call process traceset middle (Use end criterion found in 3)
        * note : end criterion can also be viewer's hook returning TRUE
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Call end for req
        - remove req from list_in
    5.1 For each req in list_in
      - req.num -= count
      - if req.num == 0
        - Call end for req
        - remove req from list_in
      - if current context time > req.end time
        - Call end for req
        - remove req from list_in
      - if req.end pos == current pos
        - Call end for req
        - remove req from list_in
      - if req.stop_flag == TRUE
        - Call end for req
        - remove req from list_in



Notes :
End criterions for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
  Current time >= End time
  Event count > Number of events
  Current position >= End position
  Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.
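
A compact sketch of this boundary check, as it could appear inside the
strict-boundaries process traceset middle loop, is shown below. The accessors
current_time(), current_position() and position_compare() are hypothetical
placeholders; only the four criteria listed above come from this document.

/* Sketch of the end-criterion test from the notes above. */

gboolean out_of_boundaries(LttvTracesetContext *tsc,
                           LttTime end_time,
                           guint count, guint nb_events,
                           LttvTracesetContextPosition *end_position,
                           gboolean last_hook_ret)
{
  if(ltt_time_compare(current_time(tsc), end_time) >= 0)          /* hypothetical */
    return TRUE;
  if(count > nb_events)
    return TRUE;
  if(position_compare(current_position(tsc), end_position) >= 0)  /* hypothetical */
    return TRUE;
  if(last_hook_ret == TRUE)
    return TRUE;
  return FALSE;
}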



Weaknesses

- None (nearly?) :)


Strengths

- Removes the need for filtering of information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
  reading.




- Revised Events Requests Servicing Algorithm (v2)

typedef guint LttvEventsRequestPrio;

typedef struct _EventsRequest {
  gpointer viewer_data;
  gboolean servicing;          /* service in progress : TRUE */
  LttvEventsRequestPrio prio;  /* Ev. Req. priority */
  LttTime start_time;          /* Unset : { 0, 0 } */
  LttvTracesetContextPosition *start_position;  /* Unset : num_traces = 0 */
  gboolean stop_flag;          /* Set to TRUE by the viewer to stop */
  LttTime end_time;            /* Unset : { 0, 0 } */
  guint num_events;            /* Unset : G_MAXUINT */
  LttvTracesetContextPosition *end_position;    /* Unset : num_traces = 0 */
  LttvHooks *before_traceset;  /* Unset : NULL */
  LttvHooks *before_trace;     /* Unset : NULL */
  LttvHooks *before_tracefile; /* Unset : NULL */
  LttvHooks *event;            /* Unset : NULL */
  LttvHooksById *event_by_id;  /* Unset : NULL */
  LttvHooks *after_tracefile;  /* Unset : NULL */
  LttvHooks *after_trace;      /* Unset : NULL */
  LttvHooks *after_traceset;   /* Unset : NULL */
  LttvHooks *before_chunk;     /* Unset : NULL */
  LttvHooks *after_chunk;      /* Unset : NULL */
} EventsRequest;


The reads are split into chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add or remove events
requests from the events requests list. If that happens, we want to start the
algorithm over from the beginning.
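
A minimal sketch of this chunked servicing, driven from a g_idle handler, could
look as follows. g_idle_add() and gtk_events_pending() are the standard
GLib/GTK calls; requests_pending() and service_one_chunk() are placeholders
standing in for the servicing algorithm described below.

/* Sketch of the idle handler driving the chunked servicing.  Returning
 * TRUE keeps the idle source installed, FALSE removes it (standard GLib
 * semantics). */

static gboolean lttvwindow_process_pending_requests(gpointer data)
{
  MainWindow *main_win = (MainWindow *)data;

  /* Service chunks until a GTK event is pending or nothing is left. */
  while(requests_pending(main_win) && !gtk_events_pending())
    service_one_chunk(main_win);                 /* hypothetical */

  if(!requests_pending(main_win))
    return FALSE;  /* all requests serviced : remove the idle source */

  /* A GTK event is pending : let the main loop dispatch it.  It may add
   * or remove events requests; we start over on the next idle call. */
  return TRUE;
}

/* Registration, typically done by lttvwindow_events_request when no idle
 * handler is already registered :
 *   g_idle_add(lttvwindow_process_pending_requests, main_win);        */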

Two levels of priority exist : high priority and low priority. High prio
requests are serviced first, even if lower priority requests have a lower start
time or position.
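
For example, choosing which requests to move into list_in in step 1.1 below
amounts to a comparison such as the following sketch. The function is purely
illustrative; the prio comparison mirrors steps 1.1.3 / 1.1.4 literally, and
which numeric value counts as "high" priority is left to the implementation.

/* Sketch : request a is picked before request b if it has higher priority,
 * or the same priority and an earlier start time. */

static gboolean serviced_before(const EventsRequest *a,
                                const EventsRequest *b)
{
  if(a->prio != b->prio)
    return a->prio > b->prio;                              /* higher priority first */
  return ltt_time_compare(a->start_time, b->start_time) < 0; /* earlier first */
}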


Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests


A. While list_in !empty or list_out !empty, and !GTK Event pending
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with the highest priority and lowest start
            time in list_out (ltime)
      1.1.2 Find all position requests with the highest priority and lowest
            position in list_out (lpos)
      1.1.3 If lpos.prio > ltime.prio
            || (lpos.prio == ltime.prio && lpos.start time < ltime)
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.prio < ltime.prio
            || (lpos.prio == ltime.prio && lpos.start time >= ltime))
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If first request in list_in is a time request
        - If first req in list_in start time != current time
          - Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        - If first req in list_in pos != current pos
          - If the position is the same as the saved state, restore state
          - Else, seek to that position
    1.3 Add hooks and call begin for all list_in members
      1.3.1 If !servicing
        - begin hooks called
        - servicing = TRUE
      1.3.2 Call before_chunk
      1.3.3 events hooks added
  2. Else, list_in is not empty, we continue a read
    2.1 For each req of list_out
      - if req.start time == current context time
        - Add to list_in, remove from list_out
        - If !servicing
          - Call begin
          - servicing = TRUE
        - Call before_chunk
        - events hooks added
      - if req.start position == current position
        - Add to list_in, remove from list_out
        - If !servicing
          - Call begin
          - servicing = TRUE
        - Call before_chunk
        - events hooks added

  3. Find end criterions
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out (>= than current time*)
            * To eliminate lower prio requests
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
      3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out (>= than current
            position)
      3.3.3 Use lowest of both as end position

  4. Call process traceset middle
    4.1 Call process traceset middle (Use end criterion found in 3)
        * note : end criterion can also be viewer's hook returning TRUE
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
    5.1 For each req in list_in
      - req.num -= count
      - if req.num == 0
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if current context time > req.end time
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if req.end pos == current pos
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if req.stop_flag == TRUE
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if there exists an events request in list_out that has a
        higher priority and a start time != current time
        - Use current position as start position for req
        - Remove start time from req
        - Call after_chunk for req
        - Remove event hooks for req
        - Put req back in list_out, remove from list_in
        - Save current state into saved_state.

B. When interrupted
  1. For each request in list_in
    1.1. Use current position as start position
    1.2. Remove start time
    1.3. Call after_chunk
    1.4. Remove event hooks
    1.5. Put it back in list_out
  2. Save current state into saved_state.
    2.1 Free old saved state.
    2.2 Save current state.




Notes :
End criterions for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
  Current time >= End time
  Event count > Number of events
  Current position >= End position
  Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.

We only keep one saved state in memory. If, for example, a low priority
servicing is interrupted, a high priority one is serviced, then the low priority
one will use the saved state to resume where it was instead of seeking back to
the time. In the very specific case where a low priority servicing is
interrupted, and then a high priority servicing on top of it is also
interrupted, the low priority one will lose its state and will have to seek
back. It should not occur often. The solution to this would be to save one state
per priority.
