Redesign of the GUI detailed event list

Mathieu Desnoyers 08/2005

The basic problem with this list is that it uses the number of events, not the
time, as the vertical axis (for the Y axis scrollbar).

Seeking in the traces is done by time. We have no way of knowing the number of
events between two times without preparsing.

If we want to fully reuse textDump, it is better to depend upon state
computation. It would be good, though, to make the viewer work when this
information is missing.

textDump's print_field should be put in a lttv/lttv core file, so we can use it
as is in the detailed event list module without depending upon batchAnalysis.


* With preparsing only :

The problem then becomes simpler :

We can precompute the event number while doing state computation and save it
periodically with saved states. We can then use the event number in the trace
as the scrollbar value, which, when scrolled, would result in a search of the
saved states by event number.

How much time would it take to seek back to the wanted position from the last
saved state ?

compudj@dijkstra:~/local/bin$ ./lttv -m batchtest -1 -2 -t
/home/compudj/traces/200MB
** Message: Processing trace while counting events (12447572 events in 14.0173
seconds)
** Message: Processing trace while updating state (9.46535 seconds)

9.46535 s / 12447572 events * 50000 events = 0.038 s

A 38 ms latency shouldn't be too noticeable to a user when scrolling.

(note : when counting events, batchtest also verifies time flow integrity and
gets the position of each event (not optimal), which is why it takes 14 s)
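
To make the idea concrete, here is a rough sketch, in plain C, of how the
scrollbar value (an event number) could be mapped back to a position : the
saved_state layout and the restore_state()/process_one_event() helpers are
hypothetical stand-ins, not the real LTTV state saving API.

  /* Sketch : map a scrollbar value (an event number) back to a trace
   * position, using the event number recorded with each periodic saved
   * state.  Types and helpers are illustrative only. */
  typedef struct saved_state {
      unsigned long event_nr;     /* event count recorded with this state */
      /* ... snapshot of the state computation ... */
  } saved_state;

  /* Hypothetical helpers (not the real LTTV API). */
  void restore_state(saved_state *s);
  void process_one_event(void);

  /* Binary search : last saved state with event_nr <= target. */
  static saved_state *find_state_before(saved_state *states, int n_states,
                                        unsigned long target)
  {
      int lo = 0, hi = n_states - 1, best = 0;

      while (lo <= hi) {
          int mid = (lo + hi) / 2;
          if (states[mid].event_nr <= target) {
              best = mid;
              lo = mid + 1;
          } else {
              hi = mid - 1;
          }
      }
      return &states[best];
  }

  /* On a scrollbar move : restore the closest earlier state, then replay
   * forward event by event.  At most one saved-state interval is replayed
   * (50000 events, roughly 38 ms with the figures measured above). */
  void scrollbar_seek(saved_state *states, int n_states, unsigned long target)
  {
      saved_state *s = find_state_before(states, n_states, target);

      restore_state(s);                      /* hypothetical helper */
      for (unsigned long i = s->event_nr; i < target; i++)
          process_one_event();               /* hypothetical helper */
  }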

As an optimisation, we could use a backing text buffer (an array of strings),
where we would save the 50000 computed events between two consecutive saved
states.

Memory required : 50000 * 20 bytes/event = 1MB

This seems ok, but costly. It would be better, in fact, not to depend on the
saved states interval for such a value : we could keep a 1000-event array, for
instance (for a 20KB cost, which is much better).

The backing text buffer would, by itself, make sure it holds a sufficient
number of events so that a scroll up/down of one page can be answered
immediately. That implies that a scroll up/down would first update the shown
fields, and only afterwards make the backing buffer resync its events in the
background. In the case where the events were not directly available, it would
have to update the buffer in the foreground and only then show the requested
events.
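
A minimal sketch of such a backing buffer, assuming a fixed window of 1000
pre-formatted event strings kept around the shown events (the structure and
names are illustrative, not an existing LTTV type) :

  #include <glib.h>

  #define BACKING_SIZE 1000    /* ~20KB at ~20 bytes per formatted event */

  /* Sketch : a window of pre-formatted event strings around the events
   * currently shown. */
  typedef struct backing_buffer {
      unsigned long first_event;           /* event number of slot 0 */
      GString      *text[BACKING_SIZE];    /* formatted events, in order */
  } backing_buffer;

  /* Return the cached text for event_nr, or NULL on a miss.  On a hit the
   * viewer shows the text at once and resyncs the window in the background;
   * on a miss it must refill the window in the foreground first. */
  const char *backing_lookup(backing_buffer *b, unsigned long event_nr)
  {
      if (event_nr < b->first_event ||
          event_nr >= b->first_event + BACKING_SIZE)
          return NULL;

      GString *s = b->text[event_nr - b->first_event];
      return s ? s->str : NULL;
  }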


Important note : this design doesn't support filtering of events, which is an
important downside.



* If we want the viewer to be able to show information without preparsing :

This is the hardest form of the problem. We have to seek by time (even the
scrollbar must seek by time), but increment/decrement event by event when
using the scrollbar's up/down and page up/page down. Let's call them "far
scroll" and "near scroll", respectively.

A far scroll must resync the trace to the time requested by the scrollbar value.

A near scroll must sync the trace to a time that is prior to the requested
event, show the requested events, and then sync the scrollbar value (without
event updating) to the shown event time.
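
Dispatching between the two kinds of scroll could look roughly like the
following sketch ; the scroll classification and the seek/show helpers are
assumptions named only for illustration :

  /* Sketch : dispatch between "far" scrolls (scrollbar dragged, seek by
   * time) and "near" scrolls (up/down, page up/down, move by events). */
  typedef enum { SCROLL_FAR, SCROLL_NEAR_UP, SCROLL_NEAR_DOWN } scroll_kind;

  /* Hypothetical helpers, named for illustration only. */
  void seek_time_and_show(double t);
  void seek_n_events_backward_and_show(int n);
  void seek_n_events_forward_and_show(int n);
  void sync_scrollbar_to_shown_time(void);

  void on_scroll(scroll_kind kind, double scrollbar_time, int n_events)
  {
      switch (kind) {
      case SCROLL_FAR:
          /* Resync the trace to the time requested by the scrollbar. */
          seek_time_and_show(scrollbar_time);
          break;
      case SCROLL_NEAR_UP:
          /* Move back n events, show them, then move the scrollbar to the
           * shown event time without triggering another seek. */
          seek_n_events_backward_and_show(n_events);
          sync_scrollbar_to_shown_time();
          break;
      case SCROLL_NEAR_DOWN:
          seek_n_events_forward_and_show(n_events);
          sync_scrollbar_to_shown_time();
          break;
      }
  }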

* seek n events backward

We have no information about how far back we must request events in the trace :

The algorithm would look like this (a C sketch follows below) :

seek_n_events_backward(current time, current position, time_offset, filter)
Returns : a TracesetPosition
 - If the current time < beginning of trace, it means we cannot get any more
   events ; inform the requester that a list of fewer than n events is ready.
 - Else, request a read starting time_offset backward, calling the per-event
   hook for each event and the after_traceset hook when finished. The end
   position would be the position of the current first event.
per_event_hook
 - if filter returns true
   - Append the traceset position to a list of maximum size n, removing the
     first entries when the list overflows.

after_traceset_hook
 - if the list has fewer than n entries, invoke a subsequent
   seek_n_events_backward iteration to complete the list. The new time_offset
   is the last time_offset used multiplied by 2. (This can be done by tail
   recursion, if we want to split this operation in multiple segments, or by
   an iterative algorithm : seek_n_events_backward would be a while() calling
   its own process_traceset_middle().)
 - if the list has a size of n, it's complete : call the viewer's
   get_print_events hook.
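
Below is a compact C sketch of the backward seek. The position type, filter
callback and read_events_between() request are placeholders for the real
tracecontext types and calls ; for simplicity, the sketch re-reads the whole
widened window on each retry, whereas the description above only re-reads the
newly uncovered part (ending at the previous first event).

  #include <glib.h>
  #include <stdbool.h>

  typedef struct { double time; /* ... */ } position_t;    /* placeholder */
  typedef bool (*filter_fn)(const position_t *pos);

  typedef struct {
      GQueue   *positions;    /* last n matching positions, oldest first */
      guint     n;
      filter_fn filter;
  } backward_ctx;

  /* Hypothetical tracecontext-style read request (not the real API). */
  void read_events_between(double start, double end, backward_ctx *ctx,
                           void (*hook)(backward_ctx *, const position_t *));

  /* Per-event hook : keep a sliding window of the last n matching positions. */
  static void per_event_hook(backward_ctx *ctx, const position_t *pos)
  {
      if (!ctx->filter(pos))
          return;

      position_t *copy = g_new(position_t, 1);
      *copy = *pos;
      g_queue_push_tail(ctx->positions, copy);
      if (g_queue_get_length(ctx->positions) > ctx->n)
          g_free(g_queue_pop_head(ctx->positions));
  }

  /* Iterative form : widen the backward window (time_offset * 2) until n
   * matching events are collected or the beginning of the trace is reached. */
  GQueue *seek_n_events_backward(double current_time, double trace_start,
                                 double time_offset, guint n, filter_fn filter)
  {
      backward_ctx ctx = { g_queue_new(), n, filter };

      for (;;) {
          double start = current_time - time_offset;
          if (start < trace_start)
              start = trace_start;

          /* Start over with the widened window. */
          while (!g_queue_is_empty(ctx.positions))
              g_free(g_queue_pop_head(ctx.positions));

          /* Calls per_event_hook for every event in [start, current_time). */
          read_events_between(start, current_time, &ctx, per_event_hook);

          if (g_queue_get_length(ctx.positions) >= n || start == trace_start)
              return ctx.positions;  /* may hold fewer than n at trace start */

          time_offset *= 2;
      }
  }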


* seek n events forward

seek_n_events_forward(current position, filter)
 - Simple : seek to the current position and request a read of the trace,
   calling an event counting hook (the count starts at 0).

event_counting_hook
 - if filter returns true
   - increment the event count.
 - if event count > requested count, inform that the current position is the
   wanted position. Return TRUE, so the read will stop.
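
The forward case is simpler. A sketch, continuing with the placeholder
position_t and filter_fn types from the backward sketch above
(read_events_from() is again a hypothetical read request) :

  /* Sketch of seek_n_events_forward : count matching events from the
   * current position and stop the read once n of them have been skipped. */
  typedef struct {
      guint      count;    /* matching events seen so far */
      guint      n;        /* requested number of events */
      position_t found;    /* position reached after n matching events */
      filter_fn  filter;
  } forward_ctx;

  /* Hypothetical read request, forward from 'start'. */
  void read_events_from(position_t start, forward_ctx *ctx,
                        bool (*hook)(forward_ctx *, const position_t *));

  /* Per-event counting hook : returning true stops the read. */
  static bool event_counting_hook(forward_ctx *ctx, const position_t *pos)
  {
      if (!ctx->filter(pos))
          return false;

      ctx->count++;
      if (ctx->count > ctx->n) {
          ctx->found = *pos;    /* this is the wanted position */
          return true;          /* stop the read here */
      }
      return false;
  }

  position_t seek_n_events_forward(position_t current, guint n,
                                   filter_fn filter)
  {
      forward_ctx ctx = { 0, n, current, filter };

      /* Reads forward from 'current', calling the hook until it returns
       * true. */
      read_events_from(current, &ctx, event_counting_hook);
      return ctx.found;
  }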


* Printing events

get_print_events
 - seek to the position at the beginning of the list. The end position is the
   current one (not in the list : the one currently shown). Call an events
   request between these positions, printing the fields to the strings shown
   in the viewer.
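
A sketch of the printing pass, walking from the first listed position to the
one currently shown and formatting each event into a string for the viewer.
It reuses the placeholder position_t type ; format_event_fields() stands in
for the shared print_field helper mentioned at the top of this document, and
the positional read request is hypothetical :

  #include <glib.h>

  /* Hypothetical helpers : field formatting and positional read request. */
  void format_event_fields(const position_t *pos, GString *out);
  void read_events_between_positions(const position_t *first,
                                     const position_t *shown, GPtrArray *rows,
                                     void (*hook)(GPtrArray *,
                                                  const position_t *));

  /* Per-event hook : format one event and append it to the viewer's rows. */
  static void print_event_hook(GPtrArray *rows, const position_t *pos)
  {
      GString *line = g_string_new(NULL);

      format_event_fields(pos, line);   /* would use the shared print_field */
      g_ptr_array_add(rows, line);
  }

  /* Request a read between the first position in the list and the position
   * currently shown, collecting one formatted string per event. */
  GPtrArray *get_print_events(const position_t *first, const position_t *shown)
  {
      GPtrArray *rows = g_ptr_array_new();

      read_events_between_positions(first, shown, rows, print_event_hook);
      return rows;
  }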



seek_n_events_backward and seek_n_events_forward seem to be interesting
algorithms that should be implemented in the tracecontext library. With those
helpers, it would become simpler to implement a detailed event list that does
not depend on state computation.
