@kaze31
Thanks for reporting the issue.
This patch issue should be fixed in the commit below: 16d1e95
Please let us know if you still face any issues.
Summary
/opt/intel/oneapi/modelzoo/latest/models/docs/notebooks/perf_analysis/profiling/patches does not contain the patches needed to run the Jupyter notebooks /opt/intel/oneapi/modelzoo/latest/models/docs/notebooks/perf_analysis/benchmark_perf_comparison.ipynb and benchmark_perf_timeline_analysis.ipynb.
For the two Jupyter notebooks under oneAPI-samples/AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_PerformanceAnalysis (benchmark_perf_comparison and benchmark_perf_timeline_analysis) to work, the models must be patched so that they correctly produce timeline .json files. However, many of the models available in benchmark_perf_comparison have no corresponding patches in /opt/intel/oneapi/modelzoo/latest/models/docs/notebooks/perf_analysis/profiling/patches.
URL
IntelTensorFlow_PerformanceAnalysis: https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_PerformanceAnalysis
benchmark_perf_comparison: https://github.com/IntelAI/models/blob/master/docs/notebooks/perf_analysis/benchmark_perf_comparison.ipynb
Steps to reproduce
I followed the instructions in the IntelTensorFlow_PerformanceAnalysis "Running the Sample" section and ran $cp -rf /opt/intel/oneapi/modelzoo/latest/models ~/ to get the models. I also followed all the other instructions to prepare both environments and run the code in the Jupyter notebooks. During execution, I chose topology 0: resnet50 infer fp32 and topology 1: resnet50v1_5 infer fp32.
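The setup step above can be sketched in Python, assuming the default Model Zoo install path from the sample docs; the `copy_models_and_list_patches` helper name is illustrative, not part of the sample:

```python
import shutil
from pathlib import Path

def copy_models_and_list_patches(src: Path, home: Path) -> list[str]:
    """Mirror `cp -rf <src> ~/` and return the patch files that shipped with it."""
    dst = home / src.name
    shutil.copytree(src, dst, dirs_exist_ok=True)
    patch_dir = dst / "docs" / "notebooks" / "perf_analysis" / "profiling" / "patches"
    if not patch_dir.exists():
        return []
    return sorted(p.name for p in patch_dir.iterdir())

if __name__ == "__main__":
    # Path from the sample's "Running the Sample" instructions
    src = Path("/opt/intel/oneapi/modelzoo/latest/models")
    if src.exists():
        print(copy_models_and_list_patches(src, Path.home()))
```

Listing the patch directory before picking a topology in the notebook shows immediately which models are covered.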
Observed behavior
After the benchmark_perf_comparison notebook runs, a .json file containing the TensorFlow timelines is expected to be produced. However, no .json file is found. The models must be patched for the .json file to be generated, but neither topology 0: resnet50 infer fp32 nor topology 1: resnet50v1_5 infer fp32 has a corresponding patch in models/docs/notebooks/perf_analysis/profiling/patches. These are not isolated cases: most of the 12 supported topologies lack corresponding patches.
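The mismatch can be checked mechanically by comparing the chosen topologies against the shipped patch files. This sketch assumes a hypothetical one-patch-per-model naming scheme (`<model>.patch`); the Model Zoo's real file names may differ:

```python
from pathlib import Path

def missing_patches(topologies: list[str], patch_dir: Path) -> list[str]:
    """Return the topologies that have no '<model>.patch' file in patch_dir.

    Assumes one patch per model, named after the first token of the
    topology string -- the Model Zoo's actual naming scheme may differ.
    """
    return [t for t in topologies
            if not (patch_dir / (t.split()[0] + ".patch")).exists()]
```

Running this over all 12 supported topologies would list exactly which ones cannot produce a timeline .json with the current contents of the patches folder.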
Expected behavior
There should be corresponding patches in the models/docs/notebooks/perf_analysis/profiling/patches folder, so that a .json file containing the timeline can be produced and used in the subsequent analysis. Either the list of supported topologies should be changed to match the existing patches, or additional patches should be added to cover all supported topologies.
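For context, the timeline .json these patches are supposed to enable is a Chrome trace file (TensorFlow 1.x benchmark scripts typically emit it from collected `RunMetadata` via `tensorflow.python.client.timeline`). A minimal sketch of the file format, with made-up op names and durations, assuming only the standard Chrome trace schema:

```python
import json
from pathlib import Path

def write_minimal_timeline(path: Path) -> None:
    """Write a minimal Chrome-trace-format timeline file.

    'traceEvents' holds complete ('X') events with microsecond timestamps;
    this is the general shape chrome://tracing consumes. The op names and
    durations below are invented for illustration.
    """
    trace = {
        "traceEvents": [
            {"name": "Conv2D", "ph": "X", "ts": 0,    "dur": 1500, "pid": 0, "tid": 0},
            {"name": "MatMul", "ph": "X", "ts": 1500, "dur": 800,  "pid": 0, "tid": 0},
        ]
    }
    path.write_text(json.dumps(trace, indent=2))
```

Without the per-model patches, the benchmark runs never emit a file of this shape, which is why the timeline-analysis notebook has nothing to load.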