Profiling and Benchmarking for Codebase Optimization #1673
I will be taking on this task and welcome any suggestions from contributors, as their input is highly valued. I am using a GUI tool, hotspot (https://github.com/KDAB/hotspot), to analyze performance sampling data. After experimenting with the profiler over a 30-second interval, I observed the following results. Based on this data, the functions that appear to take noticeably more time than the others, and which I assume need optimizing, are:

- Avogadro::QtOpenGL::GLWidget::qt_metacast (6.20%)
- Avogadro::QtOpenGL::GLWidget::paintGL (3.30%)
- Avogadro::Rendering::GLRenderer::render (3.22%)
- Avogadro::Rendering::GroupNode::accept (2.93%)
- Avogadro::Rendering::GeometryNode::accept (2.83%)

Calling @cryos since he also did much of the rendering and may have some ideas. Thank you :)
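For anyone who wants to reproduce this kind of sampling locally, a minimal data-collection workflow on Linux might look like the sketch below. It assumes perf is installed and that the Avogadro binary is invoked as ./avogadro2 (adjust the executable name and sampling options to your setup); hotspot can then open the resulting perf.data file.

```sh
# Record call-graph samples while interacting with the application for ~30 s.
# DWARF unwinding gives usable stacks even without frame pointers.
perf record --call-graph dwarf -o perf.data ./avogadro2

# Inspect the recording in hotspot, or fall back to the built-in text report.
hotspot perf.data
perf report -i perf.data
```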
What is happening in the 30 s interval you are profiling? Are you just interacting with the interface in a way similar to normal usage? Would it be possible to write a standardised procedure that, for example, opens a large file and simulates a user manipulating the view (pan, zoom, rotate etc.), so that you have a consistent, reproducible process to profile? Improvements or regressions in speed are then easy for contributors to check, and it will help you know when you're making steps in the right direction with your graphics work. :)

Since you opened this as a very general-sounding issue, I assume we don't already have benchmark procedures like this as part of the test suite? I have never really looked at what's in there.

In general it would be really nice to have a few such benchmarks for different aspects and functionalities - off the top of my head, one for program startup, one for opening or saving files, one for the navigation tool, and one for the draw tool would be great. (I'd personally be particularly interested in a startup benchmark, since my main coding contributions would probably only ever be in the form of Python extensions.)
Also, we're only really interested in release mode, as we know debug is slow due to extra checking, and there's no point trying to optimize a debug build. Agreed on wanting a decently big molecule opened to see how things stack up.
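To make the benchmark suggestion concrete, here is a minimal, hypothetical sketch of what a standalone timing harness could look like, built in Release mode as noted above. It uses only the standard library; loadLargeMolecule() and simulateViewManipulation() are placeholders for whatever reproducible workload (opening a big file, scripted pan/zoom/rotate) the benchmark ends up exercising, and nothing here reflects the actual Avogadro API.

```cpp
#include <chrono>
#include <cstdio>
#include <functional>
#include <string>

// Placeholder workloads -- hypothetical stand-ins, not Avogadro API calls.
void loadLargeMolecule() { /* e.g. parse a large PDB file */ }
void simulateViewManipulation() { /* e.g. scripted pan/zoom/rotate */ }

// Run a named workload several times and report the best wall-clock time,
// which tends to be the least noisy statistic for a quick benchmark.
void benchmark(const std::string& name, const std::function<void()>& work,
               int repeats = 5)
{
  double best = 1e300;
  for (int i = 0; i < repeats; ++i) {
    auto start = std::chrono::steady_clock::now();
    work();
    auto end = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    if (ms < best)
      best = ms;
  }
  std::printf("%-24s best of %d: %.2f ms\n", name.c_str(), repeats, best);
}

int main()
{
  benchmark("open large file", loadLargeMolecule);
  benchmark("view manipulation", simulateViewManipulation);
  return 0;
}
```

A harness along these lines could sit next to the existing unit tests and be run manually before and after a change, so any contributor can verify whether a patch helps or hurts.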
Thank you so much, @matterhorn103 and @cryos, for your thoughtful contributions to this discussion. Your insights mean a lot to me, and I genuinely look forward to continuing this journey together, exploring more on this important issue.
Actually your point makes a lot of sense.
Since you and @cryos are interested in benchmarking the startup time for rendering large molecules, I tried the following: I opened Avogadro, went to File, selected Import, chose Fetch from PDB, and entered "6vxx" (a large molecule, the COVID spike protein), then observed the program's performance for 60 seconds. Rendering this specific molecule has consistently been challenging because of its slow processing speed. The results are in the screenshots below. What are your thoughts on the functions shown there? Do you think they are significant concerns?
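As a side note on reproducibility, the same structure can be downloaded once and reopened from disk, so every profiling run starts from an identical input with no network variability. The snippet below assumes RCSB's public download endpoint.

```sh
# Fetch the SARS-CoV-2 spike structure (PDB ID 6VXX) once and reuse the
# local file for each profiling run instead of fetching it over the network.
curl -L -o 6vxx.pdb https://files.rcsb.org/download/6VXX.pdb
```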
I'm afraid I know almost nothing about graphics/rendering and don't know that part of the codebase at all, so I can't be of much help. My point was more a general one about having an automated, standardized, reproducible benchmark so that you can assess the effect of your changes accurately.
Describe the bug
We need to perform profiling and benchmarking on our codebase. @ghutchis identified and resolved an issue in the rendering code where shaders were being compiled repeatedly, causing frame drops and suboptimal performance. Now we need to profile other parts of the codebase to find further opportunities for optimization.
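For context, the general shape of the fix mentioned above is to compile each shader once and reuse the compiled object on later frames instead of recompiling it every time. The sketch below is an illustrative caching pattern using raw OpenGL calls, not the actual Avogadro rendering code, and error checking is omitted for brevity.

```cpp
#include <string>
#include <unordered_map>
#include <GL/glew.h> // assumes GLEW (or another loader) provides GL 2.0+ entry points

// Cache compiled shader objects keyed by their source text so that a shader
// is compiled only the first time it is requested, not on every frame.
class ShaderCache
{
public:
  GLuint get(GLenum type, const std::string& source)
  {
    auto it = m_cache.find(source);
    if (it != m_cache.end())
      return it->second; // already compiled -- reuse it

    GLuint shader = glCreateShader(type);
    const char* src = source.c_str();
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader); // the expensive step we want to do only once

    m_cache.emplace(source, shader);
    return shader;
  }

private:
  std::unordered_map<std::string, GLuint> m_cache;
};
```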