Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004


This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.

Second, a framework where the event requests are handled by the main window
with fine granularity will be described. We will then discuss its advantages
and drawbacks compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window. They
also specify a maximum number of events to be delivered, although it is not a
true maximum : the number of events to read only gives a stop point, from which
only events with the same timestamp will still be delivered.

Viewers register hooks themselves in the traceset context. When read requests
are merged in the main window, all hooks registered by viewers will be called
for the union of all the read requests, because the main window has no control
over hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not duplicate a read of the same time
interval : that would be very hard for the viewers to filter out. So, to
achieve this, time requests are sorted by start time, and process_traceset is
called for each time request. We keep the time of the last event between
reads : if the start time of the next read is lower than the time already
reached, we continue reading from the current position.

We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we cannot do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer a request
based on the number of events.

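As an illustration only, here is a minimal sketch of the "continue or seek"
decision described above; the helper name and its caller are hypothetical,
only ltt_time_compare is assumed to be the existing LttTime comparison
function.

/* Hypothetical helper illustrating the decision made between two requests
 * sorted by start time : seek only when the next request starts after the
 * point already read, otherwise continue from the current position. */
static gboolean need_seek(LttTime next_start_time, LttTime time_reached)
{
  return ltt_time_compare(next_start_time, time_reached) > 0;
}
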
The viewers have to filter the events delivered by traceset reading, because
those events may have been requested by another viewer for a totally (or
partially) different time interval.


Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct
call to traceset_middle for a specific number of events will deliver _at
least_ that quantity of events, plus the ones that have the same timestamp as
the last one.

- Border effects

Viewer writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the
information they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer's writer will have to take into account all the border effects
caused by the interaction with other modules. This means that even if a viewer
works well alone or with another viewer, it is possible that new bugs arise
when a new viewer comes around. So, even if a perfect testbench works well for
a viewer, it does not confirm that no new bug will arise when another viewer
loaded at the same time asks for different time intervals.


- Duplication of work

Time based filters and event counters will have to be implemented on the
viewer's side, which is a duplication of the functionality that would normally
be expected from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to be as close as possible to the raw
data, making them interact with a lower level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for a
different time interval, the detailed events list will have to filter out all
the events delivered originally for the control flow view. This case can occur
quite often.



Strengths

- Simple concatenation of time intervals at the main window level.

Being allowed to deliver more events than necessary to the viewers means that
we can concatenate time intervals and numbers of events requested fairly
easily. On the other hand, it is hard to determine whether some specific cases
will go wrong, since in-depth testing is impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API for registering hooks,
removing a layer of encapsulation.




2. (Proposed) Strict boundaries event reading

The idea behind this method is to provide to the viewers exactly the events
they requested, no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.

It also means that the lttvwindow API will have to deal with the viewers'
hooks. The viewers will no longer be allowed to add them directly in the
context. Instead, they will give them to the lttvwindow API, along with the
time interval or the position and number of events. The lttvwindow API will
have to take care of adding and removing hooks for the different time
intervals requested. That means that hook insertion and removal will be done
between each traceset processing, based on the time intervals and event
positions related to each hook. We must therefore provide a simple interface
for passing hooks between the viewers and the main window, making them easier
to manage from the main window. A modification to the LttvHooks type solves
this problem.


Architecture

Added to the lttvwindow API :


void lttvwindow_events_request
( Tab                 *tab,
  const EventsRequest *events_request);

void lttvwindow_events_request_remove_all
( Tab                 *tab,
  gconstpointer        viewer);


Internal functions :

- lttvwindow_process_pending_requests


Events Requests Removal

A new API function will be necessary to let viewers remove all the event
requests they have made previously. By allowing this, no more out of bound
requests will be serviced : a viewer that sees its time interval changed
before the first servicing is completed can clear its previous events requests
and make a new one for the new interval needed, considering the already
finished chunks as completed areas.

It is also very useful for dealing with the viewer destruction case : the
viewer just has to remove its events requests from the main window before it
gets destroyed.

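As a hedged sketch (the callback names and the way the new interval is
obtained are hypothetical; only the lttvwindow_events_request_remove_all call
comes from this document), a viewer could use the removal function like this :

/* Called when the viewer's time interval changes before the previous
 * servicing is completed (hypothetical callback). */
static void viewer_set_time_window(Tab *tab, gpointer viewer,
                                   LttTime new_start, LttTime new_end)
{
  /* Drop the requests made for the old interval... */
  lttvwindow_events_request_remove_all(tab, viewer);
  /* ...then issue a new EventsRequest for [new_start, new_end]
   * (see the EventsRequest example in the Implementation section). */
}

/* Called just before the viewer widget is destroyed (hypothetical callback). */
static void viewer_destroy(Tab *tab, gpointer viewer)
{
  /* Make sure no pending or serviced request still references this viewer. */
  lttvwindow_events_request_remove_all(tab, viewer);
}
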

Permitted GTK Events Between Chunks

All GTK events will be enabled between chunks. This is due to the fact that
background processing and a high priority request are handled as the same
case. While background processing is in progress, the whole graphical
interface must remain enabled.

We needed to deal with the coherence of background processing and diverse GTK
events anyway. This algorithm provides a generalized way to deal with any type
of request and any GTK event.


Background Computation Request

Two types of background computation can be requested by a viewer : state
computation (main window scope) or viewer specific background computation.

A background computation request is made via lttvwindow_events_request, with
the priority field set to a low priority.

In the case of a background computation with the viewer pointer field set to
NULL, a lttvwindow_events_request_remove_all done on a viewer pointer will not
affect the state computation, as no viewer pointer will have been passed in
the initial request. This is the expected result. Background processings that
call a viewer's hooks, however, will be removed.

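For illustration, a main window scope background computation request could
look like the following sketch. The priority value, the hook list variable and
the use of g_new0 are assumptions; only the fields and the
lttvwindow_events_request call come from this document (the "Unset" field
conventions are listed with the EventsRequest structure in the Implementation
section below).

/* Hypothetical sketch of a background state computation request. Fields left
 * zeroed by g_new0 keep their "Unset" values. */
static void request_background_computation(Tab *tab,
                                           LttvHooks *computation_hooks)
{
  EventsRequest *bg_req = g_new0(EventsRequest, 1);

  bg_req->viewer_data = NULL;        /* main window scope : no viewer pointer,
                                        so remove_all never matches it       */
  bg_req->prio        = 100;         /* assumed low priority value           */
  bg_req->num_events  = G_MAXUINT;   /* unbounded : read the whole traceset  */
  bg_req->event       = computation_hooks;  /* hooks doing the computation   */

  lttvwindow_events_request(tab, bg_req);
}
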

A New "Redraw" Button

It will be used to redraw the viewers entirely. It is useful to restart the
servicing after a "stop" action.

A New "Continue" Button

It will tell the viewers to send requests for damaged areas. It is useful to
complete the servicing after a "stop" action.



Tab change

If a tab change occurs, we still want to do background processing.
Events requests must be stored in a list located in the same scope as the
traceset context. Right now, this is tab scope. All functions called from the
request servicing function must _not_ use the current_tab concept, as it may
change. The idle function must therefore take a tab, and not the main window,
as parameter.

If a tab is removed, its associated idle events requests servicing function
must also be removed.

It now looks a lot more useful to give a Tab* to the viewer instead of a
MainWindow*, as all the information needed by the viewer is located at the tab
level. It will diminish the dependency upon the current tab concept.



Idle function (lttvwindow_process_pending_requests)

The idle function must return FALSE to be removed from the idle functions when
no more events requests are pending. Otherwise, it returns TRUE. It will
service requests until there is no request left.

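A minimal sketch of this contract; the two helpers are placeholders, only the
Tab parameter and the return value convention come from this document.

/* Skeleton of the idle events requests servicing function. */
static gboolean lttvwindow_process_pending_requests(gpointer data)
{
  Tab *tab = (Tab *)data;

  if(tab_has_no_pending_request(tab))   /* hypothetical helper */
    return FALSE;     /* removed from the idle functions */

  service_one_chunk(tab);               /* hypothetical helper */
  return TRUE;        /* called again at the next idle cycle */
}

/* lttvwindow_events_request registers it for the next idle cycle, roughly :
 *   g_idle_add(lttvwindow_process_pending_requests, tab);                   */
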


Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate them. Then, the main window will add
them to and remove them from the context to deliver exactly the events
requested by each viewer through process traceset.

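As a sketch, a viewer could build its hook list as follows. The hook function
and the events_request parameter are hypothetical, and the exact
lttv_hooks_add signature (in particular the priority argument proposed in
hook_prio.txt) may differ.

/* The viewer's event hook; hook_data is its EventsRequest, call_data the
 * event context. Returning TRUE would count as an end criterion and stop
 * the chunk. */
static gboolean viewer_event_hook(void *hook_data, void *call_data)
{
  return FALSE;
}

/* Build the hook list with the hook.h interface; the viewer never touches
 * the traceset context directly. */
static LttvHooks *viewer_build_event_hooks(EventsRequest *events_request)
{
  LttvHooks *event_hooks = lttv_hooks_new();
  lttv_hooks_add(event_hooks, viewer_event_hook, events_request);
  return event_hooks;
}
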
- lttvwindow_events_request

It adds an EventsRequest struct to the list of pending events requests and
registers a pending request for the next g_idle if none is registered. The
viewer can access this structure during the read as its hook_data. Only the
stop_flag can be changed by the viewer through the event hooks.

typedef guint LttvEventsRequestPrio;

typedef struct _EventsRequest {
  gpointer                     viewer_data;
  gboolean                     servicing;        /* service in progress: TRUE */
  LttvEventsRequestPrio        prio;             /* Ev. Req. priority         */
  LttTime                      start_time;       /* Unset : { 0, 0 }          */
  LttvTracesetContextPosition *start_position;   /* Unset : num_traces = 0    */
  gboolean                     stop_flag;        /* Continue:FALSE Stop:TRUE  */
  LttTime                      end_time;         /* Unset : { 0, 0 }          */
  guint                        num_events;       /* Unset : G_MAXUINT         */
  LttvTracesetContextPosition *end_position;     /* Unset : num_traces = 0    */
  LttvHooks                   *before_traceset;  /* Unset : NULL              */
  LttvHooks                   *before_trace;     /* Unset : NULL              */
  LttvHooks                   *before_tracefile; /* Unset : NULL              */
  LttvHooks                   *event;            /* Unset : NULL              */
  LttvHooksById               *event_by_id;      /* Unset : NULL              */
  LttvHooks                   *after_tracefile;  /* Unset : NULL              */
  LttvHooks                   *after_trace;      /* Unset : NULL              */
  LttvHooks                   *after_traceset;   /* Unset : NULL              */
  LttvHooks                   *before_request;   /* Unset : NULL              */
  LttvHooks                   *after_request;    /* Unset : NULL              */
} EventsRequest;
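
A hypothetical example of a viewer requesting all the events of a time
interval at high priority; the priority value and the parameter names are
assumptions, the field conventions come from the structure above.

/* Fields left zeroed by g_new0 keep their "Unset" values. */
static void viewer_request_interval(Tab *tab, gpointer viewer,
                                    LttTime start, LttTime end,
                                    LttvHooks *event_hooks)
{
  EventsRequest *req = g_new0(EventsRequest, 1);

  req->viewer_data = viewer;       /* matched by remove_all later            */
  req->prio        = 0;            /* assumed high priority value            */
  req->start_time  = start;        /* interval requested by the viewer       */
  req->end_time    = end;
  req->num_events  = G_MAXUINT;    /* no event count limit                   */
  req->event       = event_hooks;  /* built with the hook.h interface, above */

  lttvwindow_events_request(tab, req);
}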


- lttvwindow_events_request_remove_all

It removes all the events requests from the pool that have their "viewer"
field matching the viewer pointer given in argument.

It calls the traceset/trace/tracefile end hooks for each removed request that
is currently being serviced.


- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time interval and
position requests. It does this with the following algorithm organizing the
process traceset calls. Here is a detailed description of the way it works :


- Revised Events Requests Servicing Algorithm (v2)

The reads are split in chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add or remove events
requests from the events requests list. If this happens, we want to start the
algorithm over from the beginning. The after traceset/trace/tracefile hooks
are called after each interrupted chunk, and the before traceset/trace/
tracefile hooks are called when the request processing resumes. Before and
after request hooks are called respectively before and after the request
processing.

Two levels of priority exist : high priority and low priority. High priority
requests are serviced first, even if lower priority requests have a lower
start time or position.


Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests


A. While (list_in !empty or list_out !empty) and !GTK Event pending
    1. If list_in is empty (need a seek)
      1.1 Add requests to list_in
        1.1.1 Find all time requests with the highest priority and lowest
              start time in list_out (ltime)
        1.1.2 Find all position requests with the highest priority and lowest
              position in list_out (lpos)
        1.1.3 If lpos.prio > ltime.prio
                 || (lpos.prio == ltime.prio && lpos.start time < ltime)
              - Add lpos to list_in, remove them from list_out
        1.1.4 Else, (lpos.prio < ltime.prio
                 || (lpos.prio == ltime.prio && lpos.start time >= ltime))
              - Add ltime to list_in, remove them from list_out
      1.2 Seek
        1.2.1 If the first request in list_in is a time request
              - If first req in list_in start time != current time
                - Seek to that time
        1.2.2 Else, the first request in list_in is a position request
              - If first req in list_in pos != current pos
                - If the position is the same as the saved state, restore state
                - Else, seek to that position
      1.3 Add hooks and call before request for all list_in members
        1.3.1 If !servicing
              - begin request hooks called
              - servicing = TRUE
        1.3.2 Call before_traceset
        1.3.3 Events hooks added
    2. Else, list_in is not empty, we continue a read
      2.1 For each req of list_out
          - if req.start time == current context time
            - Add to list_in, remove from list_out
            - If !servicing
              - Call begin request
              - servicing = TRUE
            - Call before_traceset
            - Events hooks added
          - if req.start position == current position
            - Add to list_in, remove from list_out
            - If !servicing
              - Call begin request
              - servicing = TRUE
            - Call before_traceset
            - Events hooks added

    3. Find end criteria
      3.1 End time
        3.1.1 Find the lowest end time in list_in
        3.1.2 Find the lowest start time in list_out (>= current time*)
              * To eliminate lower priority requests
        3.1.3 Use the lowest of both as end time
      3.2 Number of events
        3.2.1 Find the lowest number of events in list_in
        3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as
              num_events
      3.3 End position
        3.3.1 Find the lowest end position in list_in
        3.3.2 Find the lowest start position in list_out (>= current position)
        3.3.3 Use the lowest of both as end position

    4. Call process traceset middle
      4.1 Call process traceset middle (use the end criteria found in 3)
          * note : an end criterion can also be a viewer's hook returning TRUE
    5. After process traceset middle
       - if current context time > traceset.end time
         - For each req in list_in
           - Remove events hooks for req
           - Call end traceset for req
           - Call end request for req
           - remove req from list_in
      5.1 For each req in list_in
          - req.num -= count
          - if req.num == 0
            - Remove events hooks for req
            - Call end traceset for req
            - Call end request for req
            - remove req from list_in
          - if current context time > req.end time
            - Remove events hooks for req
            - Call end traceset for req
            - Call end request for req
            - remove req from list_in
          - if req.end pos == current pos
            - Remove events hooks for req
            - Call end traceset for req
            - Call end request for req
            - remove req from list_in
          - if req.stop_flag == TRUE
            - Remove events hooks for req
            - Call end traceset for req
            - Call end request for req
            - remove req from list_in
          - if there exists an events request in list_out that has a higher
            priority and a time != current time
            - Use current position as start position for req
            - Remove start time from req
            - Call after_traceset for req
            - Remove event hooks for req
            - Put req back in list_out, remove from list_in
            - Save current state into saved_state.

B. When interrupted
    1. For each request in list_in
      1.1 Use current position as start position
      1.2 Remove start time
      1.3 Call after_traceset
      1.4 Remove event hooks
      1.5 Put it back in list_out
    2. Save current state into saved_state.
      2.1 Free the old saved state.
      2.2 Save the current state.


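The control flow of parts A and B can be summarized by the following hedged C
sketch. Every helper, the ChunkEnd type and the events_requests field are
invented for illustration; only gtk_events_pending and the overall structure
come from the algorithm above.

/* Hypothetical bound on one chunk, combining the criteria of step 3. */
typedef struct {
  LttTime                      time;
  guint                        num_events;
  LttvTracesetContextPosition *position;
} ChunkEnd;

/* Condensed sketch of one invocation of the servicing algorithm. */
static void service_events_requests(Tab *tab)
{
  GSList *list_in  = NULL;                  /* requests added to context     */
  GSList *list_out = tab->events_requests;  /* requests not added to context */

  /* A. */
  while((list_in != NULL || list_out != NULL) && !gtk_events_pending()) {
    if(list_in == NULL)
      /* A.1 : pick the highest priority / lowest start requests, seek,
       *       call before request / before_traceset, add event hooks */
      seek_and_start_requests(&list_in, &list_out, tab);
    else
      /* A.2 : pull in the list_out requests starting exactly at the
       *       current time or position */
      start_matching_requests(&list_in, &list_out, tab);

    /* A.3 : end time, number of events and end position bounding the chunk */
    ChunkEnd end = find_end_criteria(list_in, list_out);

    /* A.4 : read one chunk */
    guint count = call_process_traceset_middle(tab, end);

    /* A.5 : complete the requests whose end criterion was reached, push back
     *       the ones preempted by a higher priority request */
    finish_or_requeue_requests(&list_in, &list_out, tab, count);
  }

  /* B. When interrupted by a pending GTK event */
  if(list_in != NULL) {
    requeue_interrupted_requests(&list_in, &list_out, tab);
    save_current_state(tab);
  }
}
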
Notes :
End criteria for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
Current time >= End time
Event count > Number of events
Current position >= End position
Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.

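For illustration, these criteria amount to a test of the following form inside
process traceset middle; the helper and parameter names are hypothetical.

/* Returns TRUE when the chunk is over and process traceset middle must
 * return, following the four end criteria above. */
static gboolean chunk_is_over(LttTime current_time, LttTime end_time,
                              guint count, guint num_events,
                              gboolean end_position_reached,
                              gboolean last_hook_list_returned_true)
{
  return ltt_time_compare(current_time, end_time) >= 0  /* time reached      */
      || count > num_events                             /* count exceeded    */
      || end_position_reached                           /* position >= end   */
      || last_hook_list_returned_true;                  /* hook said to stop */
}
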
We only keep one saved state in memory. If, for example, a low priority
servicing is interrupted and a high priority request is serviced, then the low
priority servicing will use the saved state to start back where it was instead
of seeking to the time. In the very specific case where a low priority
servicing is interrupted, and then the high priority servicing on top of it is
also interrupted, the low priority servicing will lose its state and will have
to seek back. This should not occur often. The solution to it would be to keep
one saved state per priority.




Weaknesses

- There is a possibility that we must use a seek if more than one interruption
  occurs, i.e. a low priority servicing interrupted by the addition of a high
  priority request, and then the high priority servicing itself interrupted.
  The seek will be necessary for the low priority servicing. It could be a
  good idea to keep one saved_state per priority.


Strengths

- Removes the need for filtering of the information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
  reading.

- Background processing is available.