Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004


This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.

Second, we will describe a framework where the event requests are handled by
the main window with fine granularity. We will then discuss its advantages and
drawbacks compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window. They
also specify a (soft) maximum number of events to be delivered. In fact, the
number of events to read only gives a stop point : from there, only events with
the same timestamp as the last one will still be delivered.

Viewers register hooks themselves in the traceset context. When read requests
are merged in the main window, all hooks registered by viewers will be called
for the union of all the read requests, because the main window has no control
over hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not read the same time interval twice :
it would be very hard for the viewers to filter out the duplicates. So, in
order to achieve this, time requests are sorted by start time, and
process_traceset is called for each time request. We keep the last event time
between each read : if the start time of the next read is lower than the time
already reached, we continue reading from the current position.
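
For illustration only, here is a minimal sketch of this concatenation logic,
using plain scalar timestamps instead of LttTime ; TimeRequest, read_interval
and service_time_requests are made-up names for this example, not part of the
lttv API :

/* Simplified illustration of servicing sorted time requests. */
typedef struct {
  double start_time;  /* requested interval start */
  double end_time;    /* requested interval end   */
} TimeRequest;

/* Assumed stand-in for process_traceset over one interval ;
   returns the time reached at the end of the read. */
extern double read_interval(double start, double end);

/* requests[] is sorted by start_time. */
void service_time_requests(TimeRequest *requests, int count)
{
  double reached = 0.0;  /* last event time delivered so far */
  int i;
  for (i = 0; i < count; i++) {
    /* If this request starts before the time already reached,
       continue from the current position rather than seeking
       back, so no time interval is ever read twice. */
    double start = requests[i].start_time > reached ?
                   requests[i].start_time : reached;
    if (requests[i].end_time > reached)
      reached = read_interval(start, requests[i].end_time);
  }
}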

We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we cannot do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer a request
based on the number of events.

The viewers have to filter the events delivered by traceset reading, because
the read may have been requested by another viewer for a totally (or partially)
different time interval.
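
Concretely, every viewer event hook then has to do a boundary check along the
lines of the following sketch (hypothetical names and scalar timestamps again,
not the actual lttv types) :

/* Hypothetical viewer-side filtering of delivered events. */
typedef struct {
  double interval_start;  /* interval this viewer asked for */
  double interval_end;
} ViewerRequest;

/* Assumed accessor for the current event's timestamp. */
extern double get_event_time(void *call_data);

int viewer_event_hook(void *hook_data, void *call_data)
{
  ViewerRequest *req = (ViewerRequest *)hook_data;
  double t = get_event_time(call_data);

  /* Discard events that were read for another viewer's interval. */
  if (t < req->interval_start || t > req->interval_end)
    return 0;  /* ignore ; let the reading continue */

  /* ... process the event for this viewer ... */
  return 0;
}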


Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct
call to traceset_middle for a specific number of events will deliver _at least_
that quantity of events, plus the ones that have the same timestamp as the last
one.

- Border effects

Viewers' writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the information
they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer's writer will have to take into account all the border effects
caused by the interaction with other modules. This means that even if a viewer
works well alone or with another viewer, it is possible that new bugs arise
when a new viewer comes around. So, even if a perfect testbench works well for
a viewer, it does not confirm that no new bug will arise when another viewer is
loaded at the same moment, asking for different time intervals.


- Duplication of the work

Time-based filters and event counters will have to be implemented on the
viewer's side, which duplicates functionality that would normally be expected
from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to stay as close as possible to the raw
data, making them interact with a lower level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for a
different time interval, the detailed events list will have to filter out all
the events delivered originally for the control flow view. This case can occur
quite often.



Strengths

- Simple concatenation of time intervals at the main window level.

Having the opportunity to deliver more events than necessary to the viewers
means that we can concatenate time intervals and numbers of events requested
fairly easily. On the other hand, it is hard to determine whether some specific
cases will behave incorrectly, since in-depth testing is impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API to register hooks, which
removes a layer of encapsulation.


2. (Proposed) Strict boundaries event reading

The idea behind this method is to provide to the viewers exactly the events
they requested, no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.

It also means that the lttvwindow API will have to deal with the viewers'
hooks. Viewers will not be allowed to add them directly to the context. They
will give them to the lttvwindow API, along with the time interval or the
position and number of events. The lttvwindow API will take care of adding and
removing hooks for the different time intervals requested. That means that hook
insertion and removal will be done between each traceset processing, based on
the time intervals and event positions related to each hook. We must therefore
provide a simple interface for passing hooks between the viewers and the main
window, making them easier to manage from the main window. A modification to
the LttvHooks type solves this problem.


Architecture

Added to the lttvwindow API :


void lttvwindow_events_request
( MainWindow *main_win,
  EventsRequest *events_request);

void lttvwindow_events_request_remove_all
( MainWindow *main_win,
  gpointer viewer);


Internal functions :

- lttvwindow_process_pending_requests


Events Requests Removal

A new API function will be necessary to let viewers remove all the events
requests they have previously made. With it, no more out-of-bound requests will
be serviced : a viewer that sees its time interval changed before the first
servicing is completed can clear its previous events requests and make a new
one for the new interval needed, considering the finished chunks as completed
areas.

It is also very useful for dealing with the viewer destruction case : the
viewer just has to remove its events requests from the main window before it
gets destroyed.
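
For instance, a viewer destroy callback could look like this minimal sketch,
where ViewerData and its main_win field are illustrative assumptions :

/* ViewerData is an illustrative per-viewer structure. */
typedef struct {
  MainWindow *main_win;
  /* ... viewer specific state ... */
} ViewerData;

static void viewer_destroy_cb(GtkWidget *widget, gpointer data)
{
  ViewerData *viewer = (ViewerData *)data;

  /* Remove pending events requests before the viewer goes away,
     so no hook gets called on freed data. */
  lttvwindow_events_request_remove_all(viewer->main_win, viewer);
  g_free(viewer);
}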


Permitted GTK Events Between Chunks

All GTK events will be enabled between chunks. This is due to the fact that
background processing and a high priority request are seen as the same case.
While a background processing is in progress, the whole graphical interface
must stay enabled.

We needed to deal with the coherence of background processing and diverse GTK
events anyway. This algorithm provides a generalized way to deal with any type
of request and any GTK event.


Background Computation Request

The types of background computation that can be requested by a viewer are :
state computation (main window scope) or viewer-specific background
computation.

A background computation request is made via lttvwindow_events_request, with
the priority field set to a low priority.

If a lttvwindow_events_request_remove_all is done on the viewer pointer, it
will not affect the state computation, as no viewer pointer will have been
passed in the initial request. This is the expected result. Background
processings that call a viewer's hooks, however, will be removed.
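
As a hedged sketch, using the EventsRequest structure detailed in the
Implementation section below, a state computation request could be issued this
way ; the priority encoding and the state_hooks list are assumptions :

/* Hedged sketch of a low priority background computation request. */
void request_state_computation(MainWindow *main_win, LttvHooks *state_hooks)
{
  EventsRequest *req = g_new0(EventsRequest, 1);
  LttTime time_zero = { 0, 0 };

  req->viewer_data = NULL;      /* no viewer : survives remove_all    */
  req->prio = 1;                /* assumed encoding of a low priority */
  req->start_time = time_zero;  /* start from the trace beginning     */
  req->num_events = G_MAXUINT;  /* unset : no event count limit       */
  req->event = state_hooks;     /* hooks doing the state computation  */
  lttvwindow_events_request(main_win, req);
}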


A New "Redraw" Button

It will be used to redraw the viewers entirely. It is useful to restart the
servicing after a "stop" action.

A New "Continue" Button

It will tell the viewers to send requests for damaged areas. It is useful to
complete the servicing after a "stop" action.


Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate them. Then, the main window will add
them to and remove them from the context to deliver exactly the events
requested by each viewer through process traceset.
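
For illustration, a viewer could build its event hook list as in the following
sketch, assuming the lttv_hooks_new/lttv_hooks_add interface of hook.h with the
per-hook priority argument proposed in hook_prio.txt ; LTTV_PRIO_DEFAULT is an
assumed constant :

extern gboolean viewer_event_hook(void *hook_data, void *call_data);

LttvHooks *build_event_hooks(gpointer viewer)
{
  LttvHooks *event_hooks = lttv_hooks_new();

  /* The main window, not the viewer, will add this list to and
     remove it from the traceset context between chunks. */
  lttv_hooks_add(event_hooks,
                 viewer_event_hook,  /* called once per event */
                 viewer,             /* hook_data             */
                 LTTV_PRIO_DEFAULT); /* assumed priority      */
  return event_hooks;
}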


- lttvwindow_events_request

It adds an EventsRequest struct to the array of pending requests and registers
a pending request for the next g_idle if none is registered yet. The viewer can
access this structure during the read as its hook_data. Only the stop_flag can
be changed by the viewer through the event hooks.

typedef guint LttvEventsRequestPrio;

typedef struct _EventsRequest {
  gpointer viewer_data;
  gboolean servicing;         /* service in progress : TRUE              */
  LttvEventsRequestPrio prio; /* Ev. Req. priority                       */
  LttTime start_time;         /* Unset : { 0, 0 }                        */
  LttvTracesetContextPosition *start_position; /* Unset : num_traces = 0 */
  gboolean stop_flag;         /* Continue : FALSE, Stop : TRUE           */
  LttTime end_time;           /* Unset : { 0, 0 }                        */
  guint num_events;           /* Unset : G_MAXUINT                       */
  LttvTracesetContextPosition *end_position;   /* Unset : num_traces = 0 */
  LttvHooks *before_traceset; /* Unset : NULL                            */
  LttvHooks *before_trace;    /* Unset : NULL                            */
  LttvHooks *before_tracefile;/* Unset : NULL                            */
  LttvHooks *event;           /* Unset : NULL                            */
  LttvHooksById *event_by_id; /* Unset : NULL                            */
  LttvHooks *after_tracefile; /* Unset : NULL                            */
  LttvHooks *after_trace;     /* Unset : NULL                            */
  LttvHooks *after_traceset;  /* Unset : NULL                            */
  LttvHooks *before_chunk;    /* Unset : NULL                            */
  LttvHooks *after_chunk;     /* Unset : NULL                            */
} EventsRequest;
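
Here is a hedged usage sketch of this structure for a viewer requesting one
time interval. Fields left untouched keep their unset values through g_new0
(NULL / { 0, 0 }), except num_events, which must be set to G_MAXUINT
explicitly ; the priority encoding is an assumption :

void request_interval(MainWindow *main_win, gpointer viewer,
                      LttTime start, LttTime end,
                      LttvHooks *event_hooks)
{
  EventsRequest *req = g_new0(EventsRequest, 1);

  req->viewer_data = viewer;    /* matched by remove_all            */
  req->servicing = FALSE;       /* no servicing in progress yet     */
  req->prio = 0;                /* assumed encoding of a high prio  */
  req->start_time = start;      /* interval requested by the viewer */
  req->end_time = end;
  req->num_events = G_MAXUINT;  /* unset : no event count limit     */
  req->event = event_hooks;     /* per-event hooks of this viewer   */
  lttvwindow_events_request(main_win, req);
}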



- lttvwindow_events_request_remove_all

It removes from the pool all the events requests that have their "viewer_data"
field matching the viewer pointer given in argument.

It calls the traceset/trace/tracefile end hooks for each request removed.


- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time interval and position
requests. It does so with the following algorithm organizing the process
traceset calls. Here is a detailed description of the way it works :


- Revised Events Requests Servicing Algorithm (v2)

The reads are split into chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add or remove events
requests from the events requests list. If this happens, we want to start the
algorithm over from the beginning. (A code skeleton of this servicing loop is
sketched right after the algorithm below.)

Two levels of priority exist : high priority and low priority. High prio
requests are serviced first, even if lower priority requests have a lower start
time or position.


Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests

A. While (list_in !empty or list_out !empty) and !GTK Event pending
 1. If list_in is empty (need a seek)
  1.1 Add requests to list_in
   1.1.1 Find all time requests with the highest priority and lowest start
         time in list_out (ltime)
   1.1.2 Find all position requests with the highest priority and lowest
         position in list_out (lpos)
   1.1.3 If lpos.prio > ltime.prio
         || (lpos.prio == ltime.prio && lpos.start time < ltime.start time)
    - Add lpos to list_in, remove them from list_out
   1.1.4 Else (lpos.prio < ltime.prio
         || (lpos.prio == ltime.prio && lpos.start time >= ltime.start time))
    - Add ltime to list_in, remove them from list_out
  1.2 Seek
   1.2.1 If the first request in list_in is a time request
    - If first req in list_in start time != current time
     - Seek to that time
   1.2.2 Else, the first request in list_in is a position request
    - If first req in list_in pos != current pos
     - If the position is the same as the saved state, restore state
     - Else, seek to that position
  1.3 Add hooks and call begin for all list_in members
   1.3.1 If !servicing
    - begin hooks called
    - servicing = TRUE
   1.3.2 Call before_chunk
   1.3.3 events hooks added
 2. Else, list_in is not empty, we continue a read
  2.1 For each req of list_out
   - if req.start time == current context time
    - Add to list_in, remove from list_out
    - If !servicing
     - Call begin
     - servicing = TRUE
    - Call before_chunk
    - events hooks added
   - if req.start position == current position
    - Add to list_in, remove from list_out
    - If !servicing
     - Call begin
     - servicing = TRUE
    - Call before_chunk
    - events hooks added

 3. Find end criteria
  3.1 End time
   3.1.1 Find the lowest end time in list_in
   3.1.2 Find the lowest start time in list_out (>= current time*)
         * to eliminate lower prio requests
   3.1.3 Use the lowest of both as end time
  3.2 Number of events
   3.2.1 Find the lowest number of events in list_in
   3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
  3.3 End position
   3.3.1 Find the lowest end position in list_in
   3.3.2 Find the lowest start position in list_out (>= current position)
   3.3.3 Use the lowest of both as end position

 4. Call process traceset middle
  4.1 Call process traceset middle (use the end criteria found in 3)
      * note : an end criterion can also be a viewer's hook returning TRUE
 5. After process traceset middle
  - if current context time > traceset.end time
   - For each req in list_in
    - Call end for req
    - Remove events hooks for req
    - remove req from list_in
  5.1 For each req in list_in
   - req.num -= count
   - if req.num == 0
    - Call end for req
    - Remove events hooks for req
    - remove req from list_in
   - if current context time > req.end time
    - Call end for req
    - Remove events hooks for req
    - remove req from list_in
   - if req.end pos == current pos
    - Call end for req
    - Remove events hooks for req
    - remove req from list_in
   - if req.stop_flag == TRUE
    - Call end for req
    - Remove events hooks for req
    - remove req from list_in
   - if there exists an events request in list_out that has a
     higher priority and a time != current time
    - Use current position as start position for req
    - Remove start time from req
    - Call after_chunk for req
    - Remove events hooks for req
    - Put req back in list_out, remove from list_in
    - Save current state into saved_state

B. When interrupted
 1. For each request in list_in
  1.1 Use current position as start position
  1.2 Remove start time
  1.3 Call after_chunk
  1.4 Remove events hooks
  1.5 Put it back in list_out
 2. Save current state into saved_state
  2.1 Free the old saved state
  2.2 Save the current state
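
The following is the servicing loop skeleton announced above, as it could be
hooked into g_idle ; requests_pending, process_one_chunk and
interrupt_servicing are illustrative helpers standing for the A and B steps,
not actual lttv functions :

extern gboolean requests_pending(MainWindow *main_win);
extern void process_one_chunk(MainWindow *main_win);   /* steps 1 to 5  */
extern void interrupt_servicing(MainWindow *main_win); /* B steps       */

gboolean lttvwindow_process_pending_requests(gpointer data)
{
  MainWindow *main_win = (MainWindow *)data;

  /* A : service chunks while requests remain and no GTK event
     is pending. */
  while (requests_pending(main_win) && !gtk_events_pending())
    process_one_chunk(main_win);

  if (requests_pending(main_win)) {
    /* B : interrupted between chunks ; put list_in requests back
       in list_out and save the current state. Returning TRUE
       keeps this function registered, so it runs again on the
       next idle cycle, after the GTK events are serviced. */
    interrupt_servicing(main_win);
    return TRUE;
  }
  return FALSE;  /* all requests serviced : unregister from g_idle */
}

/* Registration, done by lttvwindow_events_request when no request
   was pending yet :
   g_idle_add(lttvwindow_process_pending_requests, main_win); */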



Notes :
End criteria for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
Current time >= End time
Event count > Number of events
Current position >= End position
Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.
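
As an illustration of these criteria, the per-event test inside process
traceset middle could look like this sketch ; all helper names here, including
the time and position comparisons, are assumptions for illustration :

/* Illustrative accessors and comparators, not the actual API : */
extern LttTime current_time(LttvTracesetContext *ctx);
extern LttvTracesetContextPosition *current_position(LttvTracesetContext *ctx);
extern int time_compare(LttTime a, LttTime b);          /* -1, 0 or 1 */
extern int position_compare(LttvTracesetContextPosition *a,
                            LttvTracesetContextPosition *b);

/* Returns TRUE when the current event is out of boundaries. */
gboolean chunk_is_over(LttvTracesetContext *ctx, guint count,
                       LttTime end_time, guint num_events,
                       LttvTracesetContextPosition *end_position,
                       gboolean last_hook_returned_true)
{
  /* The >= on position (not >) keeps time requests and position
     requests consistent when they start at the very same point. */
  return time_compare(current_time(ctx), end_time) >= 0
      || count > num_events
      || position_compare(current_position(ctx), end_position) >= 0
      || last_hook_returned_true;
}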

We only keep one saved state in memory. If, for example, a low priority
servicing is interrupted and a high priority request is serviced, the low
priority servicing will then use the saved state to start back where it was,
instead of seeking to the time. In the very specific case where a low priority
servicing is interrupted, and a high priority servicing on top of it is also
interrupted, the low priority one will lose its state and will have to seek
back. It should not occur often. The solution to this would be to save one
state per priority.



Weaknesses

- There is a possibility that we must use seek if more than one interruption
  occurs, i.e. a low priority servicing interrupted by the addition of a high
  priority request, with the high priority servicing then interrupted in turn.
  The seek will be necessary for the low priority servicing. Keeping one
  saved_state per priority could be a good way to fix this.


Strengths

- Removes the need for filtering the information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
reading.

- Background processing is available.