
Patches missing for IntelTensorFlow_PerformanceAnalysis #96

Open
kaze31 opened this issue Feb 8, 2022 · 2 comments

Comments


kaze31 commented Feb 8, 2022

Summary

/opt/intel/oneapi/modelzoo/latest/models/docs/notebooks/perf_analysis/profiling/patches does not contain the patches needed to run the Jupyter notebooks benchmark_perf_comparison.ipynb and benchmark_perf_timeline_analysis.ipynb under /opt/intel/oneapi/modelzoo/latest/models/docs/notebooks/perf_analysis/

For the two Jupyter notebooks under oneAPI-samples/AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_PerformanceAnalysis (benchmark_perf_comparison and benchmark_perf_timeline_analysis) to work, the models must be patched so that they correctly produce timeline .json files. However, many of the models available in benchmark_perf_comparison do not have corresponding patches in /opt/intel/oneapi/modelzoo/latest/models/docs/notebooks/perf_analysis/profiling/patches.

URL

IntelTensorFlow_PerformanceAnalysis: https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_PerformanceAnalysis
benchmark_perf_comparison: https://github.com/IntelAI/models/blob/master/docs/notebooks/perf_analysis/benchmark_perf_comparison.ipynb

Steps to reproduce

I followed the "Running the Sample" section of the IntelTensorFlow_PerformanceAnalysis instructions, running $ cp -rf /opt/intel/oneapi/modelzoo/latest/models ~/ to get the models, and followed all other instructions to prepare both environments and run the code in the Jupyter notebooks. During execution, I chose topology 0: resnet50 infer fp32 and topology 1: resnet50v1_5 infer fp32.

Observed behavior

After the benchmark_perf_comparison notebook runs, a .json file containing the TensorFlow timelines is expected to be produced. However, no .json file is found. The problem is that the models have to be patched to produce the .json file, but neither topology 0: resnet50 infer fp32 nor topology 1: resnet50v1_5 infer fp32 has a corresponding patch in models/docs/notebooks/perf_analysis/profiling/patches. These are not the only cases; in fact, most of the 12 supported topologies lack corresponding patches.
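A quick way to confirm the gap before running the notebook is to check, for each chosen topology, whether any patch file mentioning it exists. The helper below is a minimal POSIX-shell sketch (not part of the sample), and it assumes patch filenames contain the topology name; check the actual filenames in the patches directory before relying on it:

```shell
#!/bin/sh
# Hypothetical helper: report, for each topology given, whether any file in
# the patch directory has the topology string in its name. The filename
# convention is an assumption; verify it against the real contents of
# models/docs/notebooks/perf_analysis/profiling/patches.
check_patches() {
  dir=$1
  shift
  for topo in "$@"; do
    if ls "$dir"/*"$topo"* >/dev/null 2>&1; then
      echo "patch found for $topo"
    else
      echo "patch missing for $topo"
    fi
  done
}

# Example: the two topologies chosen above, against the models copy made
# during setup (override PATCH_DIR if your copy lives elsewhere).
check_patches "${PATCH_DIR:-$HOME/models/docs/notebooks/perf_analysis/profiling/patches}" \
  resnet50 resnet50v1_5
```

On my setup both topologies report "patch missing", matching the observed behavior.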

Expected behavior

There should be corresponding patches in the models/docs/notebooks/perf_analysis/profiling/patches folder, so that a .json file containing the timeline can be produced and used in the subsequent steps. Either the list of supported topologies should be trimmed to match the existing patches, or additional patches should be added to cover all supported topologies.

ashahba pushed a commit that referenced this issue Apr 1, 2022
…yTorch SPR) (#96)

* Add specs, docs, and dockerfiles for PyTorch SPR SSD-ResNet34 inference and training

* Update spec for train scripts

* Fix PRETRAINED_MODEL volume in run.sh

* Update training partial

* Copy in requirements

* remove numpy since we already have it, and add shm-size

* Addin missing new line

* Updates based on review feedback

aice-support commented Jul 27, 2022

@kaze31
Thanks for reporting the issue.
This patch issue should be fixed in commit 16d1e95.
Please let us know if you still face any issues.

@sramakintel
Contributor

@kaze31: did the workaround resolve your issues?

Labels: None yet
Projects: None yet
Development: No branches or pull requests

3 participants