High overhead of power measurements for CPU inference measurements #236
During the MLPerf Inference v1.0 round, I noticed that the power workflow, when used with CPU inference, occasionally seemed to incur a rather high overhead (~10%). For example:

Xavier with power measurements:

Xavier without power measurements (compliance):

Here, ArmNN is faster than TFLite but takes a big hit under the power workflow; TFLite, however, is not affected. The observed behaviour is consistent across several runs.
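For concreteness, the ~10% figure is just the relative throughput drop between the two runs. A minimal sketch of the arithmetic (the function name and the numbers are placeholders for illustration, not the actual Xavier results):

```python
def power_overhead(qps_without_power: float, qps_with_power: float) -> float:
    """Relative slowdown attributable to running under the power workflow."""
    return (qps_without_power - qps_with_power) / qps_without_power

# Hypothetical numbers illustrating a ~10% hit, as reported for ArmNN above.
print(f"{power_overhead(1000.0, 900.0):.1%}")  # -> 10.0%
```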
Comments

Hi Anton - On Jetson AGX, we can check our runs with and without power to see whether the power-related software overhead of communicating with the director PC has an impact. I will comment back on this issue.
@s-idgunji When running inference on the GPU, the CPU is typically not fully utilised, so you might not notice any difference. When running inference on the CPU at full blast, the effect may be more pronounced.
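A minimal stand-in for this contention effect (illustrative only; this is not the MLPerf power daemon, and every name below is invented for the sketch). A background "sampler" thread plays the role of the power workflow's periodic sampling and reporting loop, while the main thread plays the role of CPU-bound inference:

```python
import threading
import time

def cpu_inference_stand_in(duration_s: float) -> int:
    """Busy-loop standing in for CPU-bound inference; returns iterations done."""
    iterations = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        sum(i * i for i in range(1000))  # arbitrary CPU-bound work
        iterations += 1
    return iterations

def sampler(stop: threading.Event, period_s: float = 0.1) -> None:
    """Wake periodically, as a power-sampling/reporting loop would."""
    while not stop.is_set():
        sum(i * i for i in range(20000))  # stand-in for handling/sending samples
        stop.wait(period_s)

for with_sampler in (False, True):
    stop = threading.Event()
    if with_sampler:
        threading.Thread(target=sampler, args=(stop,), daemon=True).start()
    done = cpu_inference_stand_in(2.0)
    stop.set()
    print(f"sampler={with_sampler}: {done} iterations")
```

With inference on the GPU, the host CPU is mostly idle, so the sampler's cycles come for free; with CPU-bound inference they are taken directly out of the inference budget. (In CPython the GIL makes the two threads contend even on a multi-core machine, which is convenient for the demonstration.)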
@psyhtest - Thanks. We were not able to notice a difference when the workload was running on the GPU in the Xavier config that we submitted, but I agree that when you are using the CPU only, the effect may show up. We can independently try to replicate the behaviour you observed, but we'll need to close on this soon and propose a fix that can go in, so that we can validate it before the freeze date.
Anton - The RPi4 is a pretty low-end CPU; I don't see any way around some sort of overhead. Is the overhead consistent, or are you saying it's unpredictable?
The overhead seems to be consistent. What's worrying is that it also shows up with ArmNN on Xavier, which is roughly 8x more powerful than the RPi4.
Anton - Can you try on another system, perhaps Intel Celeron or Core based? Also, what can we suggest as a fix? If we want to reproduce on our system at NVIDIA, which specific config, benchmark, or scenario would likely give the highest difference? We could test it out to see whether it reproduces.
Power WG thoughts: For v1.1, try to solve it opportunistically. It is not a gating item, mainly because of the resources available to solve this problem. The main issue is that there is no minimum bar for the software flow.
MLPerf Power WG: This will be punted to after v1.1. The issue will remain open.
@psyhtest - Perhaps you want to close this if the issue is no longer valid, or we need to revisit it in the WG. Quite an old issue.
Is this issue still happening?