Some parts of the training introduce a lot of tools, some with shared features, some with very specific features. It would be nice to have in the training slides one big picture that we can come back to at any time to clearly distinguish all the tools.
Moreover, such a graph would help to understand the relationships between tools (for example, perf relying on tracepoints, kprobes, uprobes, ftrace...).
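To give an idea of the kind of relationship such a graph could show, here is a minimal sketch (not taken from the training slides) of how a perf-like tool sits on top of the kernel tracepoint/ftrace infrastructure: it reads a tracepoint id exported through tracefs and attaches a counter to it with the `perf_event_open()` syscall. The tracefs mount point and the `sched:sched_switch` tracepoint used below are assumptions, and running it typically requires root or a permissive `perf_event_paranoid` setting.

```c
/*
 * Sketch: attach a counter to an existing kernel tracepoint the way perf
 * does, via perf_event_open(). Assumes tracefs is mounted at
 * /sys/kernel/tracing and that the sched:sched_switch tracepoint exists.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    /* Read the tracepoint id that ftrace exports through tracefs */
    FILE *f = fopen("/sys/kernel/tracing/events/sched/sched_switch/id", "r");
    if (!f) {
        perror("open tracepoint id");
        return 1;
    }
    long long id;
    if (fscanf(f, "%lld", &id) != 1) {
        fclose(f);
        fprintf(stderr, "cannot read tracepoint id\n");
        return 1;
    }
    fclose(f);

    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_TRACEPOINT; /* count tracepoint hits */
    attr.config = id;                 /* which tracepoint to count */
    attr.disabled = 1;

    /* Count sched_switch events for the current task, on any CPU */
    int fd = perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    sleep(1); /* let some context switches happen */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    long long count;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("sched_switch events in 1s: %lld\n", count);

    close(fd);
    return 0;
}
```

A graph in the slides could summarize exactly this layering: ftrace/tracefs exposing tracepoints, kprobes and uprobes, with perf (and other tools) consuming them through the perf_event interface.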
Trainees keep asking during each session for a general graph hinting at which tool to use for which kind of problem, so the priority of this is pretty high.
A starting point could be to insert a very simple table in the "Choosing the right tool" part at the end of the system-wide profiling/tracing section, sorting solutions by the following criteria (maybe with a level, either green, yellow or red):
- short-term (i.e. dynamic) vs long-term (static, in code) tools
- constant issues vs sporadic issues
- is the overhead noticeable/big
- tailored for embedded use cases (e.g. remote setup with some data pushed to the host, available in build systems, etc.)
- simple or difficult to learn/use (too subjective?)
ia added a commit to ia/bootlin-training-materials that referenced this issue on Jan 17, 2024
Hello. I recently subscribed to notifications from some Bootlin repositories. When I saw the comment above, I decided to make a suggestion, because to me the answer to "how to list monitoring/perf tools" seemed obvious. :)
Hence, I've made this quick little PR. I'm very curious to get any feedback.