Linux Trace Toolkit

Requests Servicing Schedulers


Mathieu Desnoyers, 07/06/2004

In the LTT graphical interface, two main types of events requests may occur :

- events requests made by a viewer concerning a traceset, for an ad hoc
  computation.
- events requests made by a viewer concerning a trace, for a precomputation.

Ad Hoc Computation

Ad hoc computations must be serviced immediately : they directly respond to
events requests that must be serviced to complete the graphical widgets' data.
This kind of computation may yield incomplete results as long as the
precomputations are not finished. Once precomputation is over, the widgets are
redrawn if they need such information. An ad hoc computation is done on a
traceset : the workspace of a tab.

Precomputation

Traces are global objects. Only one instance of a trace is opened for the
whole program. A precomputation appends data to the trace's attributes
(states, statistics). It must inform the widgets that asked for such states or
statistics of their availability. Only one precomputation must be launched for
each trace, and no precomputation must be duplicated.


Schedulers

There is one traceset context per traceset. Each reference to a trace by a
traceset also has its own trace context. Each trace, by itself, has its own
trace context.

Let's define a scheduler as a g_idle events request servicing function.

There is one scheduler per traceset context (registered when there are requests
to answer). There is also one scheduler per autonomous trace context (not
related to any traceset context).

A scheduler processes requests for a specific traceset or trace by combining
the time intervals of the requests. It is interruptible by any GTK event. A
precomputation scheduler has a lower priority than an ad hoc computation
scheduler : no precomputation will be performed until there is no more ad hoc
computation pending. When a scheduler is interrupted, it makes no assumption
about the presence or absence of the current requests in its pool when it
resumes.

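Combining the time intervals of pending requests can be sketched as follows.
This is only an illustration : the Interval type and merge_intervals function
are hypothetical stand-ins (LTTV itself uses LttTime timestamps), showing how
overlapping requests collapse into a minimal set of reads.

```c
#include <stdlib.h>

/* Hypothetical time interval; a plain integer timestamp keeps the
 * sketch self-contained. */
typedef struct { long start; long end; } Interval;

static int cmp_start(const void *a, const void *b)
{
    const Interval *x = a, *y = b;
    return (x->start > y->start) - (x->start < y->start);
}

/* Merge overlapping request intervals in place; returns the number of
 * merged intervals.  A scheduler would then read the trace once per
 * merged interval instead of once per request. */
size_t merge_intervals(Interval *iv, size_t n)
{
    if (n == 0) return 0;
    qsort(iv, n, sizeof *iv, cmp_start);
    size_t m = 0;                       /* index of last merged interval */
    for (size_t i = 1; i < n; i++) {
        if (iv[i].start <= iv[m].end) { /* overlaps : extend if needed */
            if (iv[i].end > iv[m].end)
                iv[m].end = iv[i].end;
        } else {
            iv[++m] = iv[i];            /* disjoint : keep separately */
        }
    }
    return m + 1;
}
```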
Foreground Scheduler

There can be one foreground scheduler per traceset (one traceset per tab). It
simply calls the hooks given by the events requests of the viewers for the
specified time intervals.

Background Scheduler

For now, to simplify the problem of the background scheduler, we assume that
the module that loads the extended statistics hooks has been loaded before the
data is requested, and that it is not unloaded until the program stops. We
will eventually have to deal with request removal based on module load/unload,
but it complicates the problem quite a bit.

A background scheduler adds hooks located under a global attributes path
(specified by the viewer who makes the request) to the trace's traceset
context (the trace is specified by the viewer). Then, it processes the whole
trace with this context (and hooks).

Typically, a module that extends statistics will register hooks in the global
attributes tree under /computation/modulename/hook_name . A viewer
that needs these statistics for a set of traces makes a background computation
request through a call to the main window API function. It must specify all
the types of hooks that must be called for the specified trace.

The background computation requests for a trace are queued. When the idle
function kicks in to answer these requests, it adds the hooks of all the
requests together in the context and starts the read. It also keeps a list of
the background requests currently being serviced.

The read is done from the start to the end of the trace, calling all the hooks
present in the context. Only when the read is over are the after_request hooks
of the currently serviced requests called, and the requests destroyed.

If there are requests in the waiting queue, they are all added to the current
pool and processed. It is important to understand that, while processing is
being done, no requests are added to the pool : they wait for their turn in
the queue.

Every hook that is added to the context by the scheduler comes from global
attributes, i.e.
/traces/#
in LttvTrace attributes : modulename/hook_name

They come with a flag telling either in_progress or ready. If the ready flag
is set, a viewer knows that the data it needs is already available and it
doesn't have to make a request.

If the in_progress flag is set, the data it needs is currently being serviced,
and it must wait for the current servicing to be finished. It tells the
lttvwindow API to call a hook when the actual servicing is over (there is a
special function for this, as it requires modifying the pool of requests
currently being serviced : we must make sure that no new reading hooks are
added!).



New Global Attributes

/traces/#
in LttvTrace attributes :

When a processing is fired, a variable
computation/modulename/in_progress is set.

When a processing finishes, the variable
computation/modulename/in_progress is unset and
computation/modulename/ready is set.



Typical Use For a Viewer

When a viewer wants extended information, it must first check whether it is
ready. If it is not, before making a request, it must check the in_progress
status of the hooks.

If in_progress is unset, it makes the request.

If in_progress is set, it makes a special request to be informed of the end of
the request.


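The decision rule above can be sketched as a small C helper; the names
(ComputationStatus, viewer_decide, the ViewerAction enum) are hypothetical,
modelling only the in_progress / ready flags described in this document.

```c
/* Hypothetical status of one trace's background computation, mirroring
 * the in_progress / ready flags stored under the trace's
 * computation/modulename/ attributes. */
typedef struct {
    int in_progress;
    int ready;
} ComputationStatus;

typedef enum {
    USE_DATA,       /* ready : read the precomputed data directly     */
    MAKE_REQUEST,   /* nothing started : queue a background request   */
    REQUEST_NOTIFY  /* in progress : ask to be notified when it ends  */
} ViewerAction;

/* Decide what a viewer should do, following the rules above. */
ViewerAction viewer_decide(const ComputationStatus *s)
{
    if (s->ready)
        return USE_DATA;
    if (s->in_progress)
        return REQUEST_NOTIFY;
    return MAKE_REQUEST;
}
```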
Hooks Lists

In order to answer the problems of background processing, we need to add a
reference counter to each hook of a hook list. If the same hook is added
twice, it will be called only once, but it will need two "remove" operations
to be really removed from the list. Two hooks are identical if they have the
same function pointer and hook_data.




Implementation

Ad Hoc Computation

See lttvwindow_events_delivery.txt.


Hooks Lists

A new ref_count field is needed with each hook.
lttv_hook_add and lttv_hook_add_list must compare the addition with the hooks
already present and increment the ref counter if already present.

lttv_hook_remove and remove_with_data must decrement ref_count if it is > 1,
or remove the element otherwise (== 1).

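The ref-counting semantics above can be sketched as follows. This is a
self-contained illustration, not the actual lttv_hook API : the Hook type and
hook_list_* names are hypothetical.

```c
#include <stdlib.h>

typedef int (*HookFunc)(void *call_data, void *hook_data);

typedef struct Hook {
    HookFunc func;
    void *hook_data;
    int ref_count;
    struct Hook *next;
} Hook;

/* Add a hook : if an identical (func, hook_data) pair is already
 * present, just increment its reference count. */
void hook_list_add(Hook **list, HookFunc func, void *hook_data)
{
    for (Hook *h = *list; h; h = h->next) {
        if (h->func == func && h->hook_data == hook_data) {
            h->ref_count++;
            return;
        }
    }
    Hook *h = malloc(sizeof *h);
    h->func = func;
    h->hook_data = hook_data;
    h->ref_count = 1;
    h->next = *list;
    *list = h;
}

/* Remove a hook : decrement ref_count if > 1, unlink otherwise. */
void hook_list_remove(Hook **list, HookFunc func, void *hook_data)
{
    for (Hook **p = list; *p; p = &(*p)->next) {
        if ((*p)->func == func && (*p)->hook_data == hook_data) {
            if ((*p)->ref_count > 1) {
                (*p)->ref_count--;
            } else {
                Hook *dead = *p;
                *p = dead->next;
                free(dead);
            }
            return;
        }
    }
}

/* Call each hook exactly once, regardless of its reference count. */
void hook_list_call(Hook *list, void *call_data)
{
    for (Hook *h = list; h; h = h->next)
        h->func(call_data, h->hook_data);
}
```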
Background Scheduler

Global traces

Two global attributes per trace :
traces/#
  It is a pointer to the LttvTrace structure.
  In the LttvTrace attributes :
    state/
      saved_states/
    statistics/
      modes/
      cpu/
      processes/
      modulename1/
      modulename2/
      ...
    computation/  /* Trace specific background computation hooks status */
      state/
        in_progress
        ready
      stats/
        in_progress
        ready
      modulename1/
        in_progress
        ready
    requests_queue/    /* Background computation requests */
    requests_current/  /* Type : BackgroundRequest */
    notify_queue/
    notify_current/
    computation_traceset/
    computation_traceset_context/


computation/  /* Global background computation hooks */
  state/
    before_chunk_traceset
    before_chunk_trace
    before_chunk_tracefile
    after_...
    before_request
    after_request
    event_hook
    event_hook_by_id
    hook_adder
    hook_remover
  stats/
    ...
  modulename1/
    ...

Hook Adder and Hook Remover

These are hook functions that take a trace context as call data. They simply
add / remove the computation-related hooks from the trace context.


Modify Traceset

Points to the global traces. The main window must open a new one only when no
instance of the pathname exists.

Modify trace opening / closing to make them create and destroy the
LttvBackgroundComputation (and call end request hooks for servicing requests)
and the global trace information when the reference count of the trace
reaches zero.


EventsRequest Structure

This structure is the element of the events requests pools. The owner field
is used as an ownership identifier. The viewer field is a pointer to the data
structure upon which the action applies. Typically, both will be pointers to
the viewer's data structure.

In an ad hoc events request, a pointer to the EventsRequest structure is used
as hook_data in the hook lists : it must have been added by the viewers.

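A sketch of such a pool element is shown below. Only the owner and viewer
fields come from the description above; the interval and servicing fields are
assumptions added for illustration (the real structure lives in lttvwindow).

```c
/* Hypothetical sketch of an events request pool element. */
typedef struct EventsRequest {
    void *owner;        /* ownership identifier (usually the viewer)   */
    void *viewer_data;  /* data structure the action applies to        */
    long  start_time;   /* assumed : requested interval start          */
    long  end_time;     /* assumed : requested interval end            */
    int   servicing;    /* assumed : TRUE while in the serviced pool   */
} EventsRequest;
```

A viewer would fill owner and viewer_data with pointers to its own data
structure, then pass the EventsRequest as hook_data when registering its
hooks.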
Modify module load/unload

A module that registers global computation hooks in the global attributes upon
load should unregister them when unloaded. Also, it must remove every
background computation request, for each trace, that has its own module_name
as GQuark.

Give an API for calculation modules

There must be an API for modules which register calculation hooks.
Unregistration must also remove all requests made for these hooks.

Background Requests Servicing Algorithm (v1)


list_in : currently serviced requests
list_out : queue of requests waiting for processing

notification lists :
notify_in : currently checked notifications
notify_out : queue of notifications that come along with the next processing.


0.1 Lock traces
0.2 Sync tracefiles

1. Before processing
  - if list_in is empty
    - Add all requests in list_out to list_in, empty list_out
    - for each request in list_in
      - set hooks' in_progress flag to TRUE
      - call before request hook
    - seek trace to start
    - Move all notifications from notify_out to notify_in.
  - for each request in list_in
    - Call before chunk hooks for list_in
    - add hooks to context  *note : only one hook of each type is added

2. Call process traceset middle for a chunk
  (assert list_in is not empty! : it should not even be called in that case)

3. After the chunk
  3.1 Call after_chunk hooks for list_in
    - for each request in list_in
      - Call after chunk hooks for list_in
      - remove hooks from context  *note : only one hook of each type
  3.2 for each notification in notify_in
    - if current time >= notify time, call notify and remove from notify_in
    - if current position >= notify position, call notify and remove from
      notify_in
  3.3 if end of trace reached
    - for each request in list_in
      - set hooks' in_progress flag to FALSE
      - set hooks' ready flag to TRUE
      - call after request hook
      - remove request
    - for each notification in notify_in
      - call notify and remove from notify_in
    - reset the context
    - if list_out is empty
      - return FALSE (scheduler stopped)
    - else
      - return TRUE (scheduler still registered)
  3.4 else
    - return TRUE (scheduler still registered)

4. Unlock traces
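
The list_in / list_out mechanics of steps 1-3 can be sketched as a
g_idle-style function (returning 0 to stop, 1 to stay registered). This is a
simplified model, not the real implementation : the trace read is reduced to
a chunk counter, locking and notifications are omitted, and all names are
hypothetical.

```c
/* Hypothetical background request and its status flags. */
typedef struct {
    int in_progress;
    int ready;
} Request;

typedef struct {
    Request *list_in[8];  int n_in;   /* currently serviced requests  */
    Request *list_out[8]; int n_out;  /* waiting queue                */
    int chunks_done, chunks_total;    /* stand-in for the trace read  */
} BackgroundPool;

int background_scheduler_idle(BackgroundPool *p)
{
    /* 1. Before processing : refill the pool only when it is empty,
     *    so requests arriving mid-processing wait in list_out. */
    if (p->n_in == 0) {
        for (int i = 0; i < p->n_out; i++) {
            p->list_in[p->n_in++] = p->list_out[i];
            p->list_out[i]->in_progress = 1;  /* before request hook */
        }
        p->n_out = 0;
        p->chunks_done = 0;                   /* seek trace to start */
    }

    /* 2. Process one chunk (interruptible between chunks). */
    p->chunks_done++;

    /* 3.3 End of trace : finish every currently serviced request. */
    if (p->chunks_done >= p->chunks_total) {
        for (int i = 0; i < p->n_in; i++) {
            p->list_in[i]->in_progress = 0;
            p->list_in[i]->ready = 1;         /* after request hook  */
        }
        p->n_in = 0;
        return p->n_out > 0;  /* keep the scheduler iff work remains */
    }
    return 1;                 /* 3.4 : still registered */
}
```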