I briefly mentioned that I had some input for your DebugVisualizer; I've tried to put it in writing below. However, after reading the requirements you've put together on GitHub, I realize that some of it is already covered there. So feel free to pick and choose as you like.
Anyway, I've been working on a couple of fairly large projects involving distributed control systems relying on wired (in one case) or wireless (in the other) network technologies. The systems involve (or involved) between 20 and 30 communicating controllers. In each case, I've had access to a mixture of:

- log files from each controller (except when reports came from end-users/customers)
- wireless sniffer logs (ZigBee sniffer logs)
- sniffer logs from (some of) the network wires (Wireshark/tshark). Note: in the current project, the resolution of some of the events is in the range of 10 ns due to the use of the Precision Time Protocol (IEEE 1588)
- measurements from the control objects and sensors (output and input signals from/to the controllers)
In the one case, the log files for investigation came in from several hundred installations, and the challenge was to determine whether the reported problems were already known or something new to be investigated further. In the other (current) case, the logs covered several days of testing from a single installation, and the task was to check that the latest bug fixes actually solved the identified problems and to highlight new problems for investigation. In such cases, it is essential that as much of the log-file analysis as possible is done automatically. In both projects, I've therefore created my own analysis scripts that traverse the logs looking for patterns that identify already-known problems. This has worked reasonably well, although I probably still have some way to go before I've found a really good approach to this kind of big-data/pattern-analysis problem.
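The pattern-scanning approach described above can be sketched roughly as follows. The problem IDs and regexes here are invented for illustration; a real catalogue would grow as new problems are understood:

```python
import re

# Hypothetical signatures of already-known problems: each maps a problem ID
# to a regex that identifies the problem in a controller log line.
KNOWN_PROBLEMS = {
    "ACCEL-17": re.compile(r"acceleration out of range: (-?\d+\.\d+)"),
    "LINK-42": re.compile(r"link lost to controller (\d+)"),
}

def scan_log(lines):
    """Yield (line_no, problem_id, detail) for every known-problem hit."""
    for line_no, line in enumerate(lines, start=1):
        for problem_id, pattern in KNOWN_PROBLEMS.items():
            m = pattern.search(line)
            if m:
                yield line_no, problem_id, m.group(1)

log = [
    "2024-03-01 10:00:01 acceleration out of range: 12.5",
    "2024-03-01 10:00:02 all nominal",
    "2024-03-01 10:00:03 link lost to controller 7",
]
hits = list(scan_log(log))
# hits == [(1, "ACCEL-17", "12.5"), (3, "LINK-42", "7")]
```

Anything the scan does not classify is a candidate for the manual analysis discussed below.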
In my current project, I've used Python. The analysis script invokes, for example, tshark to run various Lua script plugins inside tshark so that I can extract the information I'm interested in from the Wireshark logs. The script also uses matplotlib to automatically create plots of some of the data from the identified problems. This eases manual verification of the analysis results, should it be needed.
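A minimal sketch of that tshark-driven extraction, kept separate from the plotting so the parsing can be tested on its own. The pcap filename and the choice of fields are placeholders; `-r`, `-T fields`, `-e`, and `-Y` are standard tshark options:

```python
import subprocess

def tshark_field_cmd(pcap, fields, display_filter=None):
    """Build a tshark command that dumps selected fields, one packet per
    line, tab-separated (tshark must be on PATH)."""
    cmd = ["tshark", "-r", pcap, "-T", "fields"]
    for f in fields:
        cmd += ["-e", f]
    if display_filter:
        cmd += ["-Y", display_filter]
    return cmd

def parse_fields_output(text):
    """Parse tshark -T fields output into a list of field lists."""
    return [line.split("\t") for line in text.splitlines() if line.strip()]

# Example invocation (not run here; requires tshark and a capture file):
# out = subprocess.run(tshark_field_cmd("capture.pcap",
#                                       ["frame.time_epoch", "data.len"]),
#                      capture_output=True, text=True, check=True).stdout
# rows = parse_fields_output(out)  # feed these to matplotlib

sample = "1.000\t42\n1.010\t43\n"
rows = parse_fields_output(sample)
# rows == [["1.000", "42"], ["1.010", "43"]]
```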
The issue, however, is that my scripts are tailored to identify already-known problems. Before reaching that state, I need to understand the problems, and for each new problem discovered there is a fair amount of manual analysis work. This is where I'd ideally like a more visual tool to assist me, which is why I found your graphical mock-up of the DebugVisualizer very interesting.
I've been thinking about adding something to the UI of sigrok.org's PulseView, as its UI seems to offer some of what I was looking for, and I want to be able to illustrate the events from my log files alongside the input/output to/from the controllers anyway. I haven't done much to investigate this option yet, but it looks promising.
I've also been thinking about setting up a JupyterLab notebook with access to my logs and some generic Python modules for analysis and printing of the log file contents. This would make it easy to share the workload among colleagues, as the JupyterLab notebook can run on a central server and users only need a browser on their local machines. But this seems a little tedious to work with, as such a notebook is still basically just an interactive programming interface to Python (or Julia, R, C++/cling, etc.).
Some of the features I'd request of a debug visualization tool:
- ability to be invoked from a script (i.e. run as a Python module or via the command line) to generate graphs in the same layout as in the GUI
- support for extension with new protocol decoders (e.g. invoking tshark from the command line to decode pcap files, or other tools to decode e.g. proprietary ZigBee sniffer logs)
- ability to handle multiple log files from different sources, with different time offsets, time resolutions, and encodings, all at once (present data from all logs next to each other; it should be possible to manually adjust the time offsets between the logs)
- both vertical and horizontal time axes
- ability to annotate graphs with notes and save these notes in a file next to the log files (the annotation file should probably contain links/references to the annotated log files)
- ability to save presentation/analysis settings (these could perhaps go in a "session" or "report" file containing both annotations and other settings; if saved as, say, XML or JSON, my analysis scripts would be able to generate the same "session" files containing the results of the analysis, and the DebugVisualizer could then be used to visualize those results)
- ability to create plugins that receive all data from all decoders/plugins (this should allow me to create plugins that run some of the analysis already done by my existing analysis scripts, e.g. showing an acceleration profile from a tachometer based on the tshark sniffer logs and automatically inserting markers in the graph where my analysis plugins find that controller logs indicate miscomputation of the acceleration)
- plugins should ideally be written in a scripting language like Python (or JIT-compiled C++ similar to what cling provides), as this would make it easy to extend the tool as needed
- by default, events should be displayed as graphical icons or markers on the graph, but it should also be possible to make the tool show the full contents of selected events (e.g. the full decoded Wireshark packet)
- support for https://lttng.org/ (while browsing for LTTng links, I just discovered Eclipse Trace Compass, which I think I will have a closer look at, too)
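The "session"/"report" file idea from the list above could be sketched as JSON. To be clear, nothing here is an existing format; every key name below is hypothetical, just illustrating what an analysis script could emit and the visualizer could load (note the per-log time offsets and the log-file references in the annotations):

```python
import json

# Hypothetical session-file layout for the proposed DebugVisualizer.
session = {
    "logs": [
        {"path": "controller_03.log", "time_offset_s": 0.0},
        {"path": "sniffer.pcap", "time_offset_s": -1.273},
    ],
    "annotations": [
        {"log": "controller_03.log", "time_s": 182.4,
         "note": "suspected miscomputed acceleration"},
    ],
    "view": {"time_axis": "horizontal"},
}

text = json.dumps(session, indent=2)   # what a script would write to disk
loaded = json.loads(text)              # what the visualizer would read back
# loaded == session (JSON round-trips this structure losslessly)
```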
(emailed to me)
Thank you for a very interesting talk at Embo++.