diff --git a/docs/articles_en/about-openvino/additional-resources/glossary.rst b/docs/articles_en/about-openvino/additional-resources/glossary.rst index 9aba2b395525c2..6120b0c9018a54 100644 --- a/docs/articles_en/about-openvino/additional-resources/glossary.rst +++ b/docs/articles_en/about-openvino/additional-resources/glossary.rst @@ -38,7 +38,6 @@ Acronyms and Abbreviations LRN Local Response Normalization mAP Mean Average Precision Intel® OneDNN Intel® OneAPI Deep Neural Network Library - `mo` Command-line tool for model conversion, CLI for ``tools.mo.convert_model`` (legacy) MVN Mean Variance Normalization NCDHW Number of images, Channels, Depth, Height, Width NCHW Number of images, Channels, Height, Width diff --git a/docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst b/docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst index c80dc388568004..c20e66e80ff2cd 100644 --- a/docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst +++ b/docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst @@ -31,11 +31,6 @@ OpenVINO offers the option of running automated inference with the following inf | :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`: | automatically groups inference requests to improve device utilization. -| :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>`: -| executes inference on multiple devices. Currently, this mode is considered a legacy - solution. Using Automatic Device Selection instead is advised. - - Feature Support and API Coverage ################################# @@ -52,7 +47,6 @@ Feature Support and API Coverage :doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` Yes Yes No :doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>` Yes Yes Yes :doc:`Extensibility <../../documentation/openvino-extensibility>` Yes Yes No - :doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>` Yes Yes Partial ======================================================================================================================================== ======= ========== =========== diff --git a/docs/articles_en/about-openvino/performance-benchmarks/getting-performance-numbers.rst b/docs/articles_en/about-openvino/performance-benchmarks/getting-performance-numbers.rst index 936f1145a6b3b0..9ba82690b00395 100644 --- a/docs/articles_en/about-openvino/performance-benchmarks/getting-performance-numbers.rst +++ b/docs/articles_en/about-openvino/performance-benchmarks/getting-performance-numbers.rst @@ -103,7 +103,7 @@ General considerations Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to - :doc:`Embedding Pre-processing <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation>` + :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. and :doc:`General Runtime Optimizations <../../openvino-workflow/running-inference/optimize-inference/general-optimizations>`. @@ -192,7 +192,7 @@ execution breakdown. 
For example, the table below is part of performance counters for :doc:`CPU inference <../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device>`. -of a `TensorFlow implementation of ResNet-50 `__ +of a TensorFlow implementation of ResNet-50. Keep in mind that since the device is CPU, the ``realTime`` wall clock and the ``cpu`` time layers are the same. Information about layer precision is also stored in the performance counters. diff --git a/docs/articles_en/about-openvino/performance-benchmarks/performance-benchmarks-faq.rst b/docs/articles_en/about-openvino/performance-benchmarks/performance-benchmarks-faq.rst index 0f70c93e9c8b96..5495711bc0054a 100644 --- a/docs/articles_en/about-openvino/performance-benchmarks/performance-benchmarks-faq.rst +++ b/docs/articles_en/about-openvino/performance-benchmarks/performance-benchmarks-faq.rst @@ -15,13 +15,7 @@ Performance Information F.A.Q. .. dropdown:: Where can I find the models used in the performance benchmarks? - All models used are included in the GitHub repository of - :doc:`Open Model Zoo <../../documentation/legacy-features/model-zoo>`. - - .. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. + All models used are published on `Hugging Face `__. .. dropdown:: Will there be any new models added to the list used for benchmarking? @@ -35,7 +29,7 @@ Performance Information F.A.Q. open-source tool within the Intel® Distribution of OpenVINO™ toolkit called :doc:`benchmark_app <../../learn-openvino/openvino-samples/benchmark-tool>`. - For diffusers (Stable-Diffusion) and foundational models (aka LLMs) please use the OpenVINO GenAI + For diffusers (Stable-Diffusion) and foundational models (aka LLMs) please use the OpenVINO GenAI opensource repo `OpenVINO GenAI tools/llm_bench `__ For a simple instruction on testing performance, see the :doc:`Getting Performance Numbers Guide `. @@ -93,30 +87,6 @@ Performance Information F.A.Q. - BERT - question / answer - 128 - * - `efficientdet-d0 `__ - - Efficientdet - - classification - - 512x512 - * - `mask_rcnn_resnet50_atrous_coco `__ - - Mask R-CNN ResNet 50 Atrous - - object instance segmentation - - 800x1365 - * - `mobilenet-v2 `__ - - Mobilenet V2 PyTorch - - classification - - 224x224 - * - `resnet-50 `__ - - ResNet-50_v1_ILSVRC-2012 - - classification - - 224x224 - * - `ssd-mobilenet-v1-coco `__ - - ssd-mobilenet-V1-coco onnx model - - object detection - - 300x300 - * - `ssd-resnet34-1200-onnx `__ - - ssd-resnet34 onnx model - - object detection - - 1200x1200 * - `yolov8n `__ - Yolov8nano - object detection diff --git a/docs/articles_en/about-openvino/release-notes-openvino.rst b/docs/articles_en/about-openvino/release-notes-openvino.rst index 343c9e780f05dc..847520c567e0b3 100644 --- a/docs/articles_en/about-openvino/release-notes-openvino.rst +++ b/docs/articles_en/about-openvino/release-notes-openvino.rst @@ -1620,7 +1620,7 @@ Deprecation And Support Using deprecated features and components is not advised. They are available to enable a smooth transition to new solutions and will be discontinued in the future. To keep using discontinued features, you will have to revert to the last LTS OpenVINO version supporting them. -For more details, refer to the :doc:`OpenVINO Legacy Features and Components <../documentation/legacy-features>` +For more details, refer to the `OpenVINO Legacy Features and Components __` page. 
Discontinued in 2024 @@ -1678,7 +1678,7 @@ Deprecated and to be removed in the future * Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the :doc:`new conversion methods <../openvino-workflow/model-preparation/convert-model-to-ir>` instead. For more details, see the - :doc:`model conversion transition guide <../documentation/legacy-features/transition-legacy-conversion-api>`. + `model conversion transition guide `__. * OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0. It will be replaced with CPU binding configurations (``ov::hint::enable_cpu_pinning``). * OpenVINO Model Server components: @@ -1707,10 +1707,6 @@ Deprecated and to be removed in the future * See alternative: `Machine Translation Python* Demo `__ - * `Open Model Zoo Tools Tutorial `__ - - * No alternatives, demonstrates deprecated tools. - * `Super Resolution with OpenVINO™ `__ * See alternative: `Super Resolution with PaddleGAN and OpenVINO `__ diff --git a/docs/articles_en/about-openvino/release-notes-openvino/release-policy.rst b/docs/articles_en/about-openvino/release-notes-openvino/release-policy.rst index 44ca052ee8e7b9..34107c60b73139 100644 --- a/docs/articles_en/about-openvino/release-notes-openvino/release-policy.rst +++ b/docs/articles_en/about-openvino/release-notes-openvino/release-policy.rst @@ -179,7 +179,7 @@ Additional Information * Binary distribution: * Download from `OpenVINO storage `__ - * `pypi.org `__ + * `pypi.org `__ * `DockerHub* `__ diff --git a/docs/articles_en/assets/images/MO_connection_example_1.svg b/docs/articles_en/assets/images/MO_connection_example_1.svg deleted file mode 100644 index 9e975041032891..00000000000000 --- a/docs/articles_en/assets/images/MO_connection_example_1.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fd1e2d8f82ce07f5d463d6480293935443785979fe16b555cd8e60fb2f253928 -size 55232 diff --git a/docs/articles_en/assets/images/MO_conversion_pipeline.svg b/docs/articles_en/assets/images/MO_conversion_pipeline.svg deleted file mode 100644 index e0448b06dda139..00000000000000 --- a/docs/articles_en/assets/images/MO_conversion_pipeline.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:db6f798882e0301f0cf83f1eba90560b5151266612fef2bc5f16a12cf192f0a0 -size 128446 diff --git a/docs/articles_en/assets/images/MO_graph_after_extractors.svg b/docs/articles_en/assets/images/MO_graph_after_extractors.svg deleted file mode 100644 index 7ee1ebe7c1761a..00000000000000 --- a/docs/articles_en/assets/images/MO_graph_after_extractors.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e9d5ee3d23d232fc10072189c0bf18d76f5d5d7217091d81a1ac465d129c034e -size 88648 diff --git a/docs/articles_en/assets/images/MO_graph_after_loader.svg b/docs/articles_en/assets/images/MO_graph_after_loader.svg deleted file mode 100644 index 380db77679be7f..00000000000000 --- a/docs/articles_en/assets/images/MO_graph_after_loader.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e882e25b5117e4d17a3b94944f58470c0337fafa5afc2ec6aa01f498c442c5f3 -size 73933 diff --git a/docs/articles_en/assets/images/MO_graph_before_partial_inference.svg b/docs/articles_en/assets/images/MO_graph_before_partial_inference.svg deleted file mode 100644 index b312a0314b0b55..00000000000000 --- a/docs/articles_en/assets/images/MO_graph_before_partial_inference.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 
-oid sha256:7799a6c30352fa74d7d98f993d9ad7b148d975d96778762df410d69133abf8a8 -size 158171 diff --git a/docs/articles_en/assets/images/MO_ports_example_1.svg b/docs/articles_en/assets/images/MO_ports_example_1.svg deleted file mode 100644 index 778ee6fd3ecb7a..00000000000000 --- a/docs/articles_en/assets/images/MO_ports_example_1.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8340d5ca434fe74d19f397c1acd0c92b4ad3b16a563975dc1603a6bf8ef03eb6 -size 55262 diff --git a/docs/articles_en/assets/images/MO_ports_example_2.svg b/docs/articles_en/assets/images/MO_ports_example_2.svg deleted file mode 100644 index 288ce970b3664f..00000000000000 --- a/docs/articles_en/assets/images/MO_ports_example_2.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:aed3820019aa5b9d4741c146bd4596e6850ea714e6e44fefe6cccf4707e5f152 -size 55270 diff --git a/docs/articles_en/assets/images/MO_transformations_graph.svg b/docs/articles_en/assets/images/MO_transformations_graph.svg deleted file mode 100644 index 093365f92a8e8d..00000000000000 --- a/docs/articles_en/assets/images/MO_transformations_graph.svg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:edbc2911e5aa5a672d8ebaf82b3d06f6915e44b8760ac18f88fba1d2e99fddd6 -size 349693 diff --git a/docs/articles_en/assets/images/deploy_encrypted_model.svg b/docs/articles_en/assets/images/deploy_encrypted_model.svg index 61d0dbe710994e..fa897731b54fef 100644 --- a/docs/articles_en/assets/images/deploy_encrypted_model.svg +++ b/docs/articles_en/assets/images/deploy_encrypted_model.svg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6f802b1396fafdc8a80c03c4931d4b6290cc10451961ddba5edcef1c8227833b -size 44097 +oid sha256:454a531a9b2d2883ac9a6beb01ce7ecdd7ec69ea2c68d63b39b65f3780c957fe +size 54772 diff --git a/docs/articles_en/assets/images/training_extensions_framework.png b/docs/articles_en/assets/images/training_extensions_framework.png index 3cbbac7fdbfba8..b518aa584a96fc 100644 --- a/docs/articles_en/assets/images/training_extensions_framework.png +++ b/docs/articles_en/assets/images/training_extensions_framework.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2b3932d0cf0071c629e1013f3e17a9f8abda800eb01c50b3e826a42127e42da7 -size 48770 +oid sha256:4c8069733dbd51ff2bd47b47e7d2a7083dac55d9faf66dfb61b897d65eb0a545 +size 47828 diff --git a/docs/articles_en/documentation.rst b/docs/articles_en/documentation.rst index 5be7bb9dbc30fb..c1dd34f5373429 100644 --- a/docs/articles_en/documentation.rst +++ b/docs/articles_en/documentation.rst @@ -13,7 +13,6 @@ Documentation API Reference OpenVINO IR format and Operation Sets - Legacy Features Tool Ecosystem OpenVINO Extensibility OpenVINO™ Security diff --git a/docs/articles_en/documentation/legacy-features.rst b/docs/articles_en/documentation/legacy-features.rst deleted file mode 100644 index 2457d28cf24c15..00000000000000 --- a/docs/articles_en/documentation/legacy-features.rst +++ /dev/null @@ -1,130 +0,0 @@ -Legacy Features and Components -============================== - -.. meta:: - :description: A list of deprecated OpenVINO™ components. - -.. toctree:: - :maxdepth: 1 - :hidden: - - OpenVINO Development Tools package - Model Optimizer / Conversion API - Open Model ZOO - legacy-features/multi-device - - -Since OpenVINO has grown very rapidly in recent years, a number of its features -and components have been replaced by other solutions. 
Some of them are still -supported to assure OpenVINO users are given enough time to adjust their projects, -before the features are fully discontinued. - -This section will give you an overview of these major changes and tell you how -you can proceed to get the best experience and results with the current OpenVINO -offering. - - -| **OpenVINO Development Tools Package** -| *New solution:* OpenVINO Runtime includes all supported components -| *Old solution:* discontinuation planned for OpenVINO 2025.0 -| -| OpenVINO Development Tools used to be the OpenVINO package with tools for - advanced operations on models, such as Model conversion API, Benchmark Tool, - Accuracy Checker, Annotation Converter, Post-Training Optimization Tool, - and Open Model Zoo tools. Most of these tools have been either removed, - replaced by other solutions, or moved to the OpenVINO Runtime package. -| :doc:`See how to install Development Tools ` - - -| **Model Optimizer / Conversion API** -| *New solution:* Direct model support and OpenVINO Converter (OVC) -| *Old solution:* Legacy Conversion API discontinuation planned for OpenVINO 2025.0 -| -| The role of Model Optimizer and later the Conversion API was largely reduced - when all major model frameworks became supported directly. For converting model - files explicitly, it has been replaced with a more light-weight and efficient - solution, the OpenVINO Converter (launched with OpenVINO 2023.1). -| :doc:`See how to use OVC <../openvino-workflow/model-preparation>` -| :doc:`See how to transition from the legacy solution ` - - -| **Open Model ZOO** -| *New solution:* users are encouraged to use public model repositories -| *Old solution:* discontinuation planned for OpenVINO 2025.0 -| -| Open Model ZOO provided a collection of models prepared for use with OpenVINO, - and a small set of tools enabling a level of automation for the process. - Since the tools have been mostly replaced by other solutions and several - other model repositories have recently grown in size and popularity, - Open Model ZOO will no longer be maintained. You may still use its resources - until they are fully removed. -| :doc:`See the Open Model ZOO documentation ` -| `Check the OMZ GitHub project `__ -| As for public model databases, `Hugging Face `__ has - become the recommended model source for OpenVINO. - - -| **Multi-Device Execution** -| *New solution:* Automatic Device Selection -| *Old solution:* Legacy Multi-Device Execution discontinuation planned for OpenVINO 2025.0 -| -| The behavior and results of the Multi-Device Execution mode are covered by the ``CUMULATIVE_THROUGHPUT`` - option of the Automatic Device Selection. The only difference is that ``CUMULATIVE_THROUGHPUT`` uses - the devices specified by AUTO, which means that adding devices manually is not mandatory, - while with MULTI, the devices had to be specified before the inference. -| :doc:`Check the Automatic Device Selection <../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` -| :doc:`Check the legacy solution ` - -Discontinued: -############# - -.. dropdown:: Caffe, and Kaldi model formats - - | *New solution:* conversion to ONNX via external tools - | *Old solution:* model support discontinued with OpenVINO 2024.0 - | `The last version supporting Apache MXNet, Caffe, and Kaldi model formats `__ - | :doc:`See the currently supported frameworks <../openvino-workflow/model-preparation>` - -.. 
dropdown:: Post-training Optimization Tool (POT) - - | *New solution:* Neural Network Compression Framework (NNCF) now offers the same functionality - | *Old solution:* POT discontinued with OpenVINO 2024.0 - | :doc:`See how to use NNCF for model optimization <../openvino-workflow/model-optimization>` - | `Check the NNCF GitHub project, including documentation `__ - -.. dropdown:: Inference API 1.0 - - | *New solution:* API 2.0 launched in OpenVINO 2022.1 - | *Old solution:* discontinued with OpenVINO 2024.0 - | `2023.2 is the last version supporting API 1.0 `__ - -.. dropdown:: Compile tool - - | *New solution:* the tool is no longer needed - | *Old solution:* discontinued with OpenVINO 2023.0 - | If you need to compile a model for inference on a specific device, use the following script: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. doxygensnippet:: docs/articles_en/assets/snippets/export_compiled_model.py - :language: python - :fragment: [export_compiled_model] - - .. tab-item:: C++ - :sync: cpp - - .. doxygensnippet:: docs/articles_en/assets/snippets/export_compiled_model.cpp - :language: cpp - :fragment: [export_compiled_model] - -.. dropdown:: TensorFlow integration (OVTF) - - | *New solution:* Direct model support and OpenVINO Converter (OVC) - | *Old solution:* discontinued in OpenVINO 2023.0 - | - | OpenVINO now features a native TensorFlow support, with no need for explicit model - conversion. - diff --git a/docs/articles_en/documentation/legacy-features/install-dev-tools.rst b/docs/articles_en/documentation/legacy-features/install-dev-tools.rst deleted file mode 100644 index 4b0160e11c9082..00000000000000 --- a/docs/articles_en/documentation/legacy-features/install-dev-tools.rst +++ /dev/null @@ -1,259 +0,0 @@ -Install OpenVINO™ Development Tools -===================================== - - -.. meta:: - :description: Learn how to install OpenVINO™ Development Tools on Windows, - Linux, and macOS operating systems, using a PyPi package. - -OpenVINO Development Tools is a set of utilities that make it easy to develop and -optimize models and applications for OpenVINO. It provides the following tools: - -* Model conversion API -* Benchmark Tool -* Accuracy Checker and Annotation Converter -* Model Downloader and other Open Model Zoo tools - -The instructions on this page show how to install OpenVINO Development Tools. If you are a -Python developer, it only takes a few simple steps to install the tools with PyPI. If you -are developing in C/C++, OpenVINO Runtime must be installed separately before installing -OpenVINO Development Tools. - -In both cases, Python 3.9 - 3.12 needs to be installed on your system before starting. - -.. note:: - - From the 2022.1 release, the OpenVINO™ Development Tools can only be installed via PyPI. - -.. _python_developers: - -For Python Developers -##################### - -If you are a Python developer, follow the steps in the -:ref:`Installing OpenVINO Development Tools ` section on this page to -install it. Installing OpenVINO Development Tools will also install OpenVINO Runtime as -a dependency, so you don’t need to install OpenVINO Runtime separately. This option is -recommended for new users. - -.. _cpp_developers: - -For C/C++ Developers -####################### - -If you are a C/C++ developer, you must first install OpenVINO Runtime separately to set -up the C/C++ libraries, sample code, and dependencies for building applications with -OpenVINO. These files are not included with the PyPI distribution. 
See the -:doc:`Selector Tool <../../get-started/install-openvino>` page to install OpenVINO Runtime -from an archive file for your operating system. - -Once OpenVINO Runtime is installed, you may install OpenVINO Development Tools for access -to tools like ``mo``, Model Downloader, Benchmark Tool, and other utilities that will help -you optimize your model and develop your application. Follow the steps in the -:ref:`Installing OpenVINO Development Tools ` section on this page -to install it. - -.. _install_dev_tools: - -Installing OpenVINO™ Development Tools -###################################### - -Follow these step-by-step instructions to install OpenVINO Development Tools on your computer. -There are two options to install OpenVINO Development Tools: installation into an existing -environment with a deep learning framework that was used for model training or creation; -or installation into a new environment. - -Installation into an Existing Environment with the Source Deep Learning Framework -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -To install OpenVINO Development Tools (see the :ref:`Install the Package ` -section of this article) into an existing environment with the deep learning framework used -for the model training or creation, run the following command: - -.. code-block:: sh - - pip install openvino-dev - - -Installation in a New Environment -+++++++++++++++++++++++++++++++++ - -If you do not have an environment with a deep learning framework for the input model or you -encounter any compatibility issues between OpenVINO and your version of deep learning -framework, you may install OpenVINO Development Tools with validated versions of -frameworks into a new environment. - -Step 1. Set Up Python Virtual Environment ------------------------------------------ - -Create a virtual Python environment to avoid dependency conflicts. To create a virtual -environment, use the following command: - -.. tab-set:: - - .. tab-item:: Windows - :sync: windows - - .. code-block:: sh - - python -m venv openvino_env - - .. tab-item:: Linux and macOS - :sync: linux-and-macos - - .. code-block:: sh - - python3 -m venv openvino_env - - - -Step 2. Activate Virtual Environment ------------------------------------- - -Activate the newly created Python virtual environment by issuing this command: - -.. tab-set:: - - .. tab-item:: Windows - :sync: windows - - .. code-block:: sh - - openvino_env\Scripts\activate - - .. tab-item:: Linux and macOS - :sync: linux-and-macos - - .. code-block:: sh - - source openvino_env/bin/activate - -.. important:: - - The above command must be re-run every time a new command terminal window is opened. - - -Step 3. Set Up and Update PIP to the Highest Version ----------------------------------------------------- - -Make sure `pip` is installed in your environment and upgrade it to the latest version by -issuing the following command: - -.. code-block:: sh - - python -m pip install --upgrade pip - - -.. _install_the_package: - -Step 4. Install the Package ---------------------------- - -To install and configure the components of the development package together with validated -versions of specific frameworks, use the commands below. - -.. code-block:: sh - - pip install openvino-dev[extras] - - -where the ``extras`` parameter specifies the source deep learning framework for the input model -and is one or more of the following values separated with "," : ``onnx``, ``pytorch``, -``tensorflow``, ``tensorflow2``. 
- -For example, to install and configure dependencies required for working with TensorFlow 2.x -and ONNX models, use the following command: - -.. code-block:: sh - - pip install openvino-dev[tensorflow2,onnx] - - -.. note:: - - Model conversion API support for TensorFlow 1.x environment has been deprecated. Use the - ``tensorflow2`` parameter to install a TensorFlow 2.x environment that can convert both - TensorFlow 1.x and 2.x models. If your model isn't compatible with the TensorFlow 2.x - environment, use the `tensorflow` parameter to install the TensorFlow 1.x environment. - The TF 1.x environment is provided only for legacy compatibility reasons. - -For more details on the openvino-dev PyPI package, see -`pypi.org `__ . - -Step 5. Test the Installation ------------------------------- - -To verify the package is properly installed, run the command below (this may take a few seconds): - -.. code-block:: sh - - mo -h - -You will see the help message for ``mo`` if installation finished successfully. If you get an -error, refer to the :doc:`Troubleshooting Guide <../../get-started/troubleshooting-install-config>` -for possible solutions. - -Congratulations! You finished installing OpenVINO Development Tools with C/C++ capability. -Now you can start exploring OpenVINO's functionality through example C/C++ applications. -See the "What's Next?" section to learn more! - -What's Next? -############ - -Learn more about OpenVINO and use it in your own application by trying out some of these examples! - -Get started with Python -+++++++++++++++++++++++ - -.. image:: ../../assets/images/get_started_with_python.gif - :width: 400 - -Try the `Python Quick Start Example <../../notebooks/vision-monodepth-with-output.html>`__ -to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook -inside your web browser. - -Visit the :doc:`Tutorials <../../learn-openvino/interactive-tutorials-python>` page for more -Jupyter Notebooks to get you started with OpenVINO, such as: - -* `OpenVINO Python API Tutorial <../../notebooks/openvino-api-with-output.html>`__ -* `Basic image classification program with Hello Image Classification <../../notebooks/hello-world-with-output.html>`__ -* `Convert a PyTorch model and use it for image background removal <../../notebooks/vision-background-removal-with-output.html>`__ - -Get started with C++ -++++++++++++++++++++ - -.. image:: ../../assets/images/get_started_with_cpp.jpg - :width: 400 - - -Try the :doc:`C++ Quick Start Example <../../learn-openvino/openvino-samples/get-started-demos>` -for step-by-step instructions on building and running a basic image classification C++ application. - -Visit the :doc:`Samples <../../learn-openvino/openvino-samples>` page for other C++ -example applications to get you started with OpenVINO, such as: - -* :doc:`Basic object detection with the Hello Reshape SSD C++ sample <../../learn-openvino/openvino-samples/hello-reshape-ssd>` -* :doc:`Object classification sample <../../learn-openvino/openvino-samples/hello-classification>` - -Learn OpenVINO Development Tools -++++++++++++++++++++++++++++++++ - -* Explore a variety of pre-trained deep learning models in the - :doc:`Open Model Zoo ` and deploy them in demo applications to see how they work. - - .. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - -* Want to import a model from another framework and optimize its performance with OpenVINO? - Visit the :doc:`Convert a Model ` page. 
-* Accelerate your model's speed even further with quantization and other compression techniques - using :doc:`Neural Network Compression Framework (NNCF) <../../openvino-workflow/model-optimization-guide/quantizing-models-post-training>`. -* Benchmark your model's inference speed with one simple command using the - :doc:`Benchmark Tool <../../learn-openvino/openvino-samples/benchmark-tool>`. - -Additional Resources -#################### - -- `Intel® Distribution of OpenVINO™ toolkit home page `__ diff --git a/docs/articles_en/documentation/legacy-features/model-zoo.rst b/docs/articles_en/documentation/legacy-features/model-zoo.rst deleted file mode 100644 index 4b761e6c7df831..00000000000000 --- a/docs/articles_en/documentation/legacy-features/model-zoo.rst +++ /dev/null @@ -1,31 +0,0 @@ -Model Zoo -========= - -.. _model zoo: - -.. note:: - - Since the deprecation of Open Model Zoo, OpenVINO has significantly extended its presence on the - `Hugging Face `__ model repository. It is currently - the recommended source of optimized OpenVINO IR models. - -Open Model Zoo for OpenVINO™ toolkit delivers a wide variety of free, pre-trained deep learning -models and demo applications that provide full application templates to help you implement deep -learning in Python, C++, or OpenCV Graph API (G-API). - -Models, demos and full documentation are available in the -`Open Model Zoo GitHub repo `__ -and licensed under Apache License Version 2.0. - -Browse through over 200 neural network models, both -`public `__ and from -`Intel `__, and pick the right one for your solution. -Types include object detection, classification, image segmentation, handwriting recognition, -text to speech, pose estimation, and others. The Intel models have already been converted -to work with OpenVINO™ toolkit, while public models can easily be converted using the -:doc:`OpenVINO Model Conversion API <../../openvino-workflow/model-preparation>` utility. - -Open Model Zoo offers a -`comprehensive set of demos `__ that you can adapt for implementing specific deep -learning scenarios in your applications. - diff --git a/docs/articles_en/documentation/legacy-features/multi-device.rst b/docs/articles_en/documentation/legacy-features/multi-device.rst deleted file mode 100644 index 594f496287d714..00000000000000 --- a/docs/articles_en/documentation/legacy-features/multi-device.rst +++ /dev/null @@ -1,155 +0,0 @@ -Multi-device execution -====================== - - -.. meta:: - :description: The Multi-Device execution mode in OpenVINO Runtime assigns - multiple available computing devices to particular inference - requests to execute in parallel. - -.. danger:: - - The Multi-device execution mode described here has been **deprecated**. - - It's functionality is now fully covered by the :ref:`CUMULATIVE_THROUGHPUT ` - option of the :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` mode. - This way, all available devices in the system can be used without the need to specify them. - -How MULTI Works -#################### - -The Multi-Device execution mode, or MULTI for short, acts as a "virtual" or a "proxy" device, which does not bind to a specific type of hardware. Instead, it assigns available computing devices to particular inference requests, which are then executed in parallel. 
- -The potential gains from using Multi-Device execution are: - -* improved throughput from using multiple devices at once, -* increase in performance stability due to multiple devices sharing inference workload. - -Importantly, the Multi-Device mode does not change the application logic, so it does not require you to explicitly compile the model on every device or create and balance inference requests. It appears to use a typical device but internally handles the actual hardware. - -Note that the performance increase in this mode comes from utilizing multiple devices at once. This means that you need to provide the devices with enough inference requests to keep them busy, otherwise you will not benefit much from using MULTI. - - -Using the Multi-Device Mode -########################### - -Following the OpenVINO™ naming convention, the Multi-Device mode is assigned the label of “MULTI.” The only configuration option available for it is a prioritized list of devices to use: - - -+----------------------------+---------------------------------+------------------------------------------------------------+ -| Property | Property values | Description | -+============================+=================================+============================================================+ -| | | MULTI: | | Specifies the devices available for selection. | -| | | comma-separated, no spaces | | The device sequence will be taken as priority | -+----------------------------+---------------------------------+ | from high to low. | -| ``ov::device::priorities`` | | device names | | Priorities can be set directly as a string. | -| | | comma-separated, no spaces | | -+----------------------------+---------------------------------+------------------------------------------------------------+ - - -Specifying the device list explicitly is required by MULTI, as it defines the devices available for inference and sets their priorities. - -Note that OpenVINO™ Runtime enables you to use “GPU” as an alias for “GPU.0” in function calls. -More details on enumerating devices can be found in :doc:`Inference Devices and Modes <../../openvino-workflow/running-inference/inference-devices-and-modes>`. - -The following commands are accepted by the API: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. doxygensnippet:: docs/articles_en/assets/snippets/ov_multi.py - :language: python - :fragment: [MULTI_0] - - .. tab-item:: C++ - :sync: cpp - - .. doxygensnippet:: docs/articles_en/assets/snippets/MULTI0.cpp - :language: cpp - :fragment: [part0] - - -To check what devices are present in the system, you can use the Device API. For information on how to do it, check :doc:`Query device properties and configuration <../../openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties>`. - - -Configuring Individual Devices and Creating the Multi-Device On Top -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -As mentioned previously, executing inference with MULTI may be set up by configuring individual devices before creating the "MULTI" device on top. It may be considered for performance reasons. - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. doxygensnippet:: docs/articles_en/assets/snippets/ov_multi.py - :language: python - :fragment: [MULTI_4] - - .. tab-item:: C++ - :sync: cpp - - .. 
doxygensnippet:: docs/articles_en/assets/snippets/MULTI4.cpp - :language: cpp - :fragment: [part4] - - -Alternatively, you can combine all the individual device settings into a single config file and load it for MULTI to parse. See the code example in the next section. - -Querying the Optimal Number of Inference Requests -+++++++++++++++++++++++++++++++++++++++++++++++++ - -When using MULTI, you don't need to sum over included devices yourself, you can query the optimal number of requests directly, -using the :doc:`configure devices <../../openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties>` property: - -.. tab-set:: - - .. tab-item:: C++ - - .. doxygensnippet:: docs/articles_en/assets/snippets/MULTI5.cpp - :language: cpp - :fragment: [part5] - - -Using the Multi-Device with OpenVINO Samples and Benchmarking Performance -######################################################################### - -To see how the Multi-Device execution is used in practice and test its performance, take a look at OpenVINO's Benchmark Application which presents the optimal performance of the plugin without the need for additional settings, like the number of requests or CPU threads. -Here is an example command to evaluate performance of CPU + GPU: - -.. code-block:: sh - - ./benchmark_app –d MULTI:CPU,GPU –m -i -niter 1000 - - -For more information, refer to the :doc:`Benchmark Tool <../../../learn-openvino/openvino-samples/benchmark-tool>` article. - - -.. note:: - - You can keep using the FP16 IR without converting it to FP32, even if some of the listed devices do not support it. The conversion will be done automatically for you. - - No demos are yet fully optimized for MULTI, by means of supporting the ``ov::optimal_number_of_infer_requests`` property, using the GPU streams/throttling, and so on. - - -Performance Considerations for the Multi-Device Execution -######################################################### - -For best performance when using the MULTI execution mode you should consider a few recommendations: - -- MULTI usually performs best when the fastest device is specified first in the device candidate list. This is particularly important when the request-level parallelism is not sufficient (e.g. the number of requests is not enough to saturate all devices). -- Just like with any throughput-oriented execution mode, it is highly recommended to query the optimal number of inference requests directly from the instance of the ``ov:compiled_model``. Refer to the code of the previously mentioned ``benchmark_app`` for more details. -- Execution on certain device combinations, for example CPU+GPU, performs better with certain knobs. Refer to the ``benchmark_app`` code for details. One specific example is disabling GPU driver polling, which in turn requires multiple GPU streams to balance out slower communication of inference completion from the device to the host. -- The MULTI logic always attempts to save on copying data between device-agnostic and user-facing inference requests, and device-specific 'worker' requests that are being actually scheduled behind the scene. To facilitate the copy savings, it is recommended to run the requests in the order in which they were created. -- While performance of accelerators combines well with MULTI, the CPU+GPU execution may introduce certain performance issues. It is due to the devices sharing some resources, like power or bandwidth. 
Enabling the GPU throttling hint, which saves a CPU thread for CPU inference, is an example of a recommended solution addressing this issue. - - -Additional Resources -#################### - -- :doc:`Inference Devices and Modes <../../openvino-workflow/running-inference/inference-devices-and-modes>` -- :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api.rst deleted file mode 100644 index e031c10e7e4e08..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api.rst +++ /dev/null @@ -1,863 +0,0 @@ -Transition from Legacy Conversion API -===================================== - - -.. meta:: - :description: Transition guide from MO / mo.convert_model() to OVC / ov.convert_model(). - -.. toctree:: - :maxdepth: 1 - :hidden: - - transition-legacy-conversion-api/legacy-conversion-api - transition-legacy-conversion-api/legacy-model-optimizer-extensibility - -In the 2023.1 OpenVINO release OpenVINO Model Converter was introduced with the corresponding -Python API: ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent -a lightweight alternative of ``mo`` and ``openvino.tools.mo.convert_model`` which are considered -legacy API now. In this article, all the differences between ``mo`` and ``ovc`` are summarized -and the transition guide from the legacy API to the new API is provided. - -Parameters Comparison -##################### - -The comparison of parameters between ov.convert_model() / OVC and mo.convert_model() / MO. - -.. list-table:: - :widths: 20 25 55 - :header-rows: 1 - - * - mo.convert_model() / MO - - ov.convert_model() / OVC - - Differences description - * - input_model - - input_model - - Along with model object or path to input model ov.convert_model() accepts list of model parts, for example, the path to TensorFlow weights plus the path to TensorFlow checkpoint. OVC tool accepts an unnamed input model. - * - output_dir - - output_model - - output_model in OVC tool sets both output model name and output directory. - * - model_name - - output_model - - output_model in OVC tool sets both output model name and output directory. - * - input - - input - - ov.convert_model() accepts tuples for setting multiple parameters. OVC tool 'input' does not have type setting and freezing functionality. ov.convert_model() does not allow input cut. - * - output - - output - - ov.convert_model() does not allow output cut. - * - input_shape - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by ``input`` parameter. - * - example_input - - example_input - - No differences. - * - batch - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by model reshape functionality. See details below. - * - mean_values - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - scale_values - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - scale - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - reverse_input_channels - - N/A - - Not available in ov.convert_model() / OVC. 
Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - source_layout - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - target_layout - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - layout - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - compress_to_fp16 - - compress_to_fp16 - - OVC provides 'compress_to_fp16' for command line tool only, as compression is performed during saving a model to IR (Intermediate Representation). - * - extensions - - extension - - No differences. - * - transform - - N/A - - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. - * - transformations_config - - N/A - - Not available in ov.convert_model() / OVC. - * - static_shape - - N/A - - Not available in ov.convert_model() / OVC. - * - freeze_placeholder_with_value - - N/A - - Not available in ov.convert_model() / OVC. - * - use_legacy_frontend - - N/A - - Not available in ov.convert_model() / OVC. - * - use_legacy_frontend - - N/A - - Not available in ov.convert_model() / OVC. - * - silent - - verbose - - OVC / ov.convert_model provides 'verbose' parameter instead of 'silent' for printing of detailed conversion information if 'verbose' is set to True. - * - log_level - - N/A - - Not available in ov.convert_model() / OVC. - * - version - - version - - N/A - * - progress - - N/A - - Not available in ov.convert_model() / OVC. - * - stream_output - - N/A - - Not available in ov.convert_model() / OVC. - * - share_weights - - share_weights - - No differences. - * - framework - - N/A - - Not available in ov.convert_model() / OVC. - * - help / -h - - help / -h - - OVC provides help parameter only in command line tool. - * - example_output - - output - - OVC / ov.convert_model 'output' parameter includes capabilities of MO 'example_output' parameter. - * - input_model_is_text - - N/A - - Not available in ov.convert_model() / OVC. - * - input_checkpoint - - input_model - - All supported model formats can be passed to 'input_model'. - * - input_meta_graph - - input_model - - All supported model formats can be passed to 'input_model'. - * - saved_model_dir - - input_model - - All supported model formats can be passed to 'input_model'. - * - saved_model_tags - - N/A - - Not available in ov.convert_model() / OVC. - * - tensorflow_custom_operations_config_update - - N/A - - Not available in ov.convert_model() / OVC. - * - tensorflow_object_detection_api_pipeline_config - - N/A - - Not available in ov.convert_model() / OVC. - * - tensorboard_logdir - - N/A - - Not available in ov.convert_model() / OVC. - * - tensorflow_custom_layer_libraries - - N/A - - Not available in ov.convert_model() / OVC. - * - input_symbol - - N/A - - Not available in ov.convert_model() / OVC. - * - nd_prefix_name - - N/A - - Not available in ov.convert_model() / OVC. - * - pretrained_model_name - - N/A - - Not available in ov.convert_model() / OVC. - * - save_params_from_nd - - N/A - - Not available in ov.convert_model() / OVC. - * - legacy_mxnet_model - - N/A - - Not available in ov.convert_model() / OVC. - * - enable_ssd_gluoncv - - N/A - - Not available in ov.convert_model() / OVC. - * - input_proto - - N/A - - Not available in ov.convert_model() / OVC. 
- * - caffe_parser_path - - N/A - - Not available in ov.convert_model() / OVC. - * - k - - N/A - - Not available in ov.convert_model() / OVC. - * - disable_omitting_optional - - N/A - - Not available in ov.convert_model() / OVC. - * - enable_flattening_nested_params - - N/A - - Not available in ov.convert_model() / OVC. - * - counts - - N/A - - Not available in ov.convert_model() / OVC. - * - remove_output_softmax - - N/A - - Not available in ov.convert_model() / OVC. - * - remove_memory - - N/A - - Not available in ov.convert_model() / OVC. - -Transition from Legacy API to New API -############################################################################ - -mo.convert_model() provides a wide range of preprocessing parameters. Most of these parameters have analogs in OVC or can be replaced with functionality from ``ov.PrePostProcessor`` class. -Here is the guide to transition from legacy model preprocessing to new API preprocessing. - - -``input_shape`` -################ - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, input_shape=[[1, 3, 100, 100],[1]]) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model, input=[[1, 3, 100, 100],[1]]) - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --input_shape [1,3,100,100],[1] --output_dir OUTPUT_DIR - - - .. code-block:: sh - :force: - - ovc MODEL_NAME --input [1,3,100,100],[1] --output_model OUTPUT_MODEL - -``batch`` -########## - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, batch=2) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - input_shape = ov_model.inputs[0].partial_shape - input_shape[0] = 2 # batch size - ov_model.reshape(input_shape) - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --batch 2 --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``mean_values`` -################ - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, mean_values=[0.5, 0.5, 0.5]) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - - prep = ov.preprocess.PrePostProcessor(ov_model) - prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) - prep.input(input_name).preprocess().mean([0.5, 0.5, 0.5]) - ov_model = prep.build() - - There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview>`. - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. 
code-block:: sh - :force: - - mo --input_model MODEL_NAME --mean_values [0.5,0.5,0.5] --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``scale_values`` -################# - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, scale_values=[255., 255., 255.]) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - - prep = ov.preprocess.PrePostProcessor(ov_model) - prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) - prep.input(input_name).preprocess().scale([255., 255., 255.]) - ov_model = prep.build() - - There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview>`. - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --scale_values [255,255,255] --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``reverse_input_channels`` -########################### - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, reverse_input_channels=True) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - - prep = ov.preprocess.PrePostProcessor(ov_model) - prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) - prep.input(input_name).preprocess().reverse_channels() - ov_model = prep.build() - - There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview>`. - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --reverse_input_channels --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``source_layout`` -################## - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - import openvino as ov - from openvino.tools import mo - - ov_model = mo.convert_model(model, source_layout={input_name: ov.Layout("NHWC")}) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - - prep = ov.preprocess.PrePostProcessor(ov_model) - prep.input(input_name).model().set_layout(ov.Layout("NHWC")) - ov_model = prep.build() - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. 
code-block:: sh - :force: - - mo --input_model MODEL_NAME --source_layout input_name(NHWC) --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``target_layout`` -################## - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - import openvino as ov - from openvino.tools import mo - - ov_model = mo.convert_model(model, target_layout={input_name: ov.Layout("NHWC")}) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - - prep = ov.preprocess.PrePostProcessor(ov_model) - prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) - ov_model = prep.build() - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --target_layout input_name(NHWC) --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``layout`` -########### - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, layout={input_name: mo.LayoutMap("NCHW", "NHWC")}) - - - .. code-block:: py - :force: - - import openvino as ov - - ov_model = ov.convert_model(model) - - prep = ov.preprocess.PrePostProcessor(ov_model) - prep.input(input_name).model().set_layout(ov.Layout("NCHW")) - prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) - ov_model = prep.build() - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --layout "input_name(NCHW->NHWC)" --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -``transform`` -############## - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: py - :force: - - from openvino.tools import mo - - ov_model = mo.convert_model(model, transform=[('LowLatency2', {'use_const_initializer': False}), 'Pruning', ('MakeStateful', {'param_res_names': {'input_name': 'output_name'}})]) - - - .. code-block:: py - :force: - - import openvino as ov - from openvino._offline_transformations import apply_low_latency_transformation, apply_pruning_transformation, apply_make_stateful_transformation - - ov_model = ov.convert_model(model) - apply_low_latency_transformation(model, use_const_initializer=False) - apply_pruning_transformation(model) - apply_make_stateful_transformation(model, param_res_names={'input_name': 'output_name'}) - - .. tab-item:: CLI - :sync: cli - - .. list-table:: - :header-rows: 1 - - * - Legacy API - - New API - * - .. code-block:: sh - :force: - - mo --input_model MODEL_NAME --transform LowLatency2[use_const_initializer=False],Pruning,MakeStateful[param_res_names={'input_name':'output_name'}] --output_dir OUTPUT_DIR - - - Not available in OVC tool. Switch to the **Python** tab. - -Cutting Off Parts of a Model -############################ - -Performing surgery by cutting model inputs and outputs from a model is no longer available in the new conversion API. Instead, we recommend performing the cut in the original framework. 
-Below are examples of model cutting of TensorFlow protobuf, TensorFlow SavedModel, and ONNX formats with the legacy conversion API, compared to achieving the same cut with tools provided by the Tensorflow and ONNX frameworks. -For PyTorch, TensorFlow 2 Keras, and PaddlePaddle, we recommend changing the original model code to perform the model cut. - -Note: This guide does not cover the cutting a model by input port of an operation that MO tool provides using `input` and `output` options, for example, `--input 1:name_op`. - -``PyTorch`` -########### - -Model cut for PyTorch is not available in legacy API. - -When it is needed to remove a whole module from the model it is possible to replace such modules with `Identity`. Below is the example of removing `conv1` and `bn1` modules at the input and `fc` module at the output of the resnet50 model. - -.. code-block:: py - :force: - - import openvino as ov - import torch - import torchvision - from torch.nn import Identity - - # Load pretrained model - model = torchvision.models.resnet50(weights='DEFAULT') - - # input cut - model.conv1 = Identity() - model.bn1 = Identity() - - # output cut - model.fc = Identity() - - # convert and compile the model - ov_model = ov.convert_model(model, input=([-1,64,-1,-1], torch.float32)) - compiled_model = ov.compile_model(ov_model) - -When it is needed to remove one or more outputs from the model it is possible to create a wrapper for the model and only output the needed output. Below is the example of removing second output from the model. - -.. code-block:: py - :force: - - import openvino as ov - import torch - - # Example of model with multiple outputs - class Model(torch.nn.Module): - def __init__(self): - super(Model, self).__init__() - self.linear1 = torch.nn.Linear(100, 200) - self.activation1 = torch.nn.ReLU() - self.linear2 = torch.nn.Linear(200, 10) - self.activation2 = torch.nn.Sigmoid() - - def forward(self, x): - x = self.linear1(x) - x = self.activation1(x) - y = self.linear2(x) - y = self.activation2(y) - return x, y - - # New model, where some outputs are cut - class CutModel(torch.nn.Module): - def __init__(self): - super(CutModel, self).__init__() - self.model = Model() - - def forward(self, x): - - # get first output - x, _ = self.model(x) - - return x - - # Model with output cut - cut_model = CutModel() - - # convert and compile the model - ov_model = ov.convert_model(cut_model, input=([-1,-1,-1], torch.float32)) - compiled_model = ov.compile_model(ov_model) - - -``TensorFlow protobuf format / tf.Graph / tf.GraphDef`` -####################################################### - -Legacy API. - -.. code-block:: py - :force: - - import openvino as ov - import openvino.tools.mo as mo - - import tensorflow as tf - - def load_graph(model_path): - graph_def = tf.compat.v1.GraphDef() - with open(model_path, "rb") as f: - graph_def.ParseFromString(f.read()) - with tf.compat.v1.Graph().as_default() as graph: - tf.graph_util.import_graph_def(graph_def, name="") - return graph - - # Load TF model - graph = load_graph("/path_to_model/HugeCTR.pb") - - # Convert the model with input and output cut - input_name = "concat" - output_name = "MatVec_3/Squeeze" - ov_model = mo.convert_model(graph, input=(input_name, [-1, -1]), output=output_name) - - # Compile the model - compiled_model = ov.compile_model(ov_model) - -Model cut in original FW. - -.. 
code-block:: py - :force: - - import openvino as ov - import tensorflow as tf - - from tensorflow.python.tools.strip_unused_lib import strip_unused - - def load_graph(model_path): - graph_def = tf.compat.v1.GraphDef() - with open(model_path, "rb") as f: - graph_def.ParseFromString(f.read()) - with tf.compat.v1.Graph().as_default() as graph: - tf.graph_util.import_graph_def(graph_def, name="") - return graph - - # Load TF model - graph = load_graph("/path_to_model/HugeCTR.pb") - - # Cut the model - input_name = "concat" - output_name = "MatVec_3/Squeeze" - graph_def = graph.as_graph_def() - new_graph_def = strip_unused(graph_def, [input_name], [output_name], tf.float32.as_datatype_enum) - - # Convert and compile model - ov_model = ov.convert_model(new_graph_def, input=[-1, -1]) - cmp_model = ov.compile_model(ov_model) - - -``TensorFlow SavedModel format`` -################################ - -Model cut for SavedModel format is not available in legacy API. - -Example of model cut in original FW. - -.. code-block:: py - :force: - - import openvino as ov - import tensorflow_hub as hub - - import tensorflow as tf - from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 - from tensorflow.python.tools.strip_unused_lib import strip_unused - - # Load TF model - model = hub.load("https://tfhub.dev/svampeatlas/vision/embedder/fungi_V2/1?tf-hub-format=compressed") - - # Convert model to GraphDef - model_func = model.signatures["default"] - frozen_func = convert_variables_to_constants_v2(model_func) - graph_def = frozen_func.graph.as_graph_def() - - # Cut the model - input_name = 'InceptionV4/InceptionV4/Conv2d_2b_3x3/Relu' - output_name = 'InceptionV4/InceptionV4/Mixed_7c/concat' - new_graph_def = strip_unused(graph_def, [input_name], [output_name], tf.float32.as_datatype_enum) - - # Convert and compile the model - ov_model = ov.convert_model(new_graph_def) - compiled_model = ov.compile_model(ov_model) - - -``ONNX`` -######## - - -Legacy API. - -.. code-block:: py - :force: - - import openvino as ov - import openvino.tools.mo as mo - - input_path = "/path_to_model/yolov8x.onnx" - - # Convert model and perform input and output cut - input_name = "/model.2/Concat_output_0" - output_name = "/model.22/Concat_3_output_0" - ov_model = mo.convert_model(input_path, input=input_name, output=output_name) - - # Compile model - ov.compile_model(ov_model) - -Model cut in original FW. - -.. code-block:: py - :force: - - import onnx - import openvino as ov - - input_path = "/path_to_model/yolov8x.onnx" - - # Cut the model - input_name = "/model.2/Concat_output_0" - output_name = "/model.22/Concat_3_output_0" - cut_model_path = "/path_to_model/yolov8x_cut.onnx" - onnx.utils.extract_model(input_path, cut_model_path, [input_name], [output_name]) - - # Convert model - ov_model = ov.convert_model(cut_model_path) - - # Compile model - ov.compile_model(ov_model) - - -Supported Frameworks in MO vs OVC -################################# - -ov.convert_model() and OVC tool support conversion from PyTorch, TF, TF Lite, ONNX, PaddlePaddle. 
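For illustration, below is a minimal sketch of passing models from these frameworks to the new API. All file paths are placeholders rather than files shipped with OpenVINO, and the PyTorch ``example_input`` shape is only an assumption:

.. code-block:: py
   :force:

   import openvino as ov

   # File-based formats are accepted as paths (placeholder paths below).
   ov_model_onnx = ov.convert_model("model.onnx")
   ov_model_tf = ov.convert_model("saved_model_dir")
   ov_model_tflite = ov.convert_model("model.tflite")
   ov_model_paddle = ov.convert_model("model.pdmodel")

   # PyTorch models are passed as in-memory objects, usually with an example input.
   import torch
   import torchvision

   pt_model = torchvision.models.resnet50(weights='DEFAULT')
   ov_model_pt = ov.convert_model(pt_model, example_input=torch.rand(1, 3, 224, 224))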
- - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.rst deleted file mode 100644 index 5302c7912995f6..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.rst +++ /dev/null @@ -1,188 +0,0 @@ -Legacy Conversion API -===================== - - -.. toctree:: - :maxdepth: 1 - :hidden: - - Setting Input Shapes - Troubleshooting Reshape Errors - Cutting Off Parts of a Model - Embedding Preprocessing Computation - Compressing a Model to FP16 - Convert Models Represented as Python Objects - Model Optimizer Frequently Asked Questions - Supported Model Formats - -.. meta:: - :description: Model conversion (MO) furthers the transition between training and - deployment environments, it adjusts deep learning models for - optimal execution on target devices. - -.. note:: - This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and OpenVINO Model Converter ``ovc`` CLI tool. Refer to :doc:`Model preparation <../../../openvino-workflow/model-preparation>` for more details. If you are still using `openvino.tools.mo.convert_model` or `mo` CLI tool, you can still refer to this documentation. However, consider checking the :doc:`transition guide <../transition-legacy-conversion-api>` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you. - -To convert a model to OpenVINO model format (``ov.Model``), you can use the following command: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model(INPUT_MODEL) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model INPUT_MODEL - - -If the out-of-the-box conversion (only the ``input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model: - -- ``input`` and ``input_shape`` - the model conversion API parameters used to override original input shapes for model conversion, - - For more information about the parameters, refer to the :doc:`Setting Input Shapes ` guide. - -- ``input`` and ``output`` - the model conversion API parameters used to define new inputs and outputs of the converted model to cut off unwanted parts (such as unsupported operations and training sub-graphs), - - For a more detailed description, refer to the :doc:`Cutting Off Parts of a Model ` guide. - -- ``mean_values``, ``scales_values``, ``layout`` - the parameters used to insert additional input pre-processing sub-graphs into the converted model, - - For more details, see the :doc:`Embedding Preprocessing Computation ` article. - -- ``compress_to_fp16`` - a compression parameter in ``mo`` command-line tool, which allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to ``FP16`` data type. - - For more details, refer to the :doc:`Compression of a Model to FP16 ` guide. - -To get the full list of conversion parameters, run the following command: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. 
code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model(help=True) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --help - - -Examples of model conversion parameters -####################################### - -Below is a list of separate examples for different frameworks and model conversion parameters: - -1. Launch model conversion for a TensorFlow MobileNet model in the binary protobuf format: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("MobileNet.pb") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model MobileNet.pb - - - Launch model conversion for a TensorFlow BERT model in the SavedModel format with three inputs. Specify input shapes explicitly where the batch size and the sequence length equal 2 and 30 respectively: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("BERT", input_shape=[[2,30],[2,30],[2,30]]) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --saved_model_dir BERT --input_shape [2,30],[2,30],[2,30] - - - For more information, refer to the :doc:`Converting a TensorFlow Model ` guide. - -2. Launch model conversion for an ONNX OCR model and specify new output explicitly: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("ocr.onnx", output="probabilities") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model ocr.onnx --output probabilities - - - For more information, refer to the :doc:`Converting an ONNX Model ` guide. - - .. note:: - - PyTorch models must be exported to the ONNX format before conversion into IR. More information can be found in :doc:`Converting a PyTorch Model `. - -3. Launch model conversion for a PaddlePaddle UNet model and apply mean-scale normalization to the input: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("unet.pdmodel", mean_values=[123,117,104], scale=255) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255 - - - For more information, refer to the :doc:`Converting a PaddlePaddle Model ` guide. - -- To get conversion recipes for specific TensorFlow, ONNX, and PyTorch models, refer to the :doc:`Model Conversion Tutorials `. -- For more information about IR, see :doc:`Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ <../../openvino-ir-format/operation-sets>`. 
-- For more information about support of neural network models trained with various frameworks, see :doc:`OpenVINO Extensibility Mechanism <../../openvino-extensibility>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-compressing-model-to-fp16.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-compressing-model-to-fp16.rst deleted file mode 100644 index c9e93036a3a7c2..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-compressing-model-to-fp16.rst +++ /dev/null @@ -1,53 +0,0 @@ -[LEGACY] Compressing a Model to FP16 -============================================= - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Conversion Parameters <../../../../openvino-workflow/model-preparation/conversion-parameters>` article. - -By default, when IR is saved all relevant floating-point weights are compressed to ``FP16`` data type during model conversion. -It results in creating a "compressed ``FP16`` model", which occupies about half of -the original space in the file system. The compression may introduce a minor drop in accuracy, -but it is negligible for most models. -In case if accuracy drop is significant user can disable compression explicitly. - -To disable compression, use the ``compress_to_fp16=False`` option: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.runtime import save_model - ov_model = save_model(INPUT_MODEL, compress_to_fp16=False) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model INPUT_MODEL --compress_to_fp16=False - - -For details on how plugins handle compressed ``FP16`` models, see -:doc:`Inference Devices and Modes <../../../../openvino-workflow/running-inference/inference-devices-and-modes>`. - -.. note:: - - ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization. - Refer to the :doc:`Post-training optimization <../../../../openvino-workflow/model-optimization-guide/quantizing-models-post-training>` guide for more - information about that. - - -.. note:: - - Some large models (larger than a few GB) when compressed to ``FP16`` may consume an overly large amount of RAM on the loading - phase of the inference. 
If that is the case for your model, try to convert it without compression: - ``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)`` - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst deleted file mode 100644 index 4921dc6bfa221f..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst +++ /dev/null @@ -1,150 +0,0 @@ -[LEGACY] Convert Models Represented as Python Objects -============================================================= - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Model Preparation <../../../../openvino-workflow/model-preparation>` article. - -Model conversion API is represented by ``convert_model()`` method in openvino.tools.mo namespace. ``convert_model()`` is compatible with types from openvino.runtime, like PartialShape, Layout, Type, etc. - -``convert_model()`` has the ability available from the command-line tool, plus the ability to pass Python model objects, such as a PyTorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. - -.. note:: - - Model conversion can be performed only when you install - :doc:`the development tools <../../../legacy-features/install-dev-tools>`, which provide - both the ``convert_model()`` method and ``mo`` command-line tool. - The functionality from this article is applicable for ``convert_model()`` only and it is - not present in command-line tool. - - -``convert_model()`` returns an openvino.runtime.Model object which can be compiled and inferred or serialized to IR. - -Example of converting a PyTorch model directly from memory: - -.. code-block:: py - :force: - - import torchvision - from openvino.tools.mo import convert_model - - model = torchvision.models.resnet50(weights='DEFAULT') - ov_model = convert_model(model) - -The following types are supported as an input model for ``convert_model()``: - -* PyTorch - ``torch.nn.Module``, ``torch.jit.ScriptModule``, ``torch.jit.ScriptFunction``. Refer to the :doc:`Converting a PyTorch Model <[legacy]-supported-model-formats/[legacy]-convert-pytorch>` article for more details. -* TensorFlow / TensorFlow 2 / Keras - ``tf.keras.Model``, ``tf.keras.layers.Layer``, ``tf.compat.v1.Graph``, ``tf.compat.v1.GraphDef``, ``tf.Module``, ``tf.function``, ``tf.compat.v1.session``, ``tf.train.checkpoint``. Refer to the :doc:`Converting a TensorFlow Model <[legacy]-supported-model-formats/[legacy]-convert-tensorflow>` article for more details. - -``convert_model()`` accepts all parameters available in the MO command-line tool. 
Parameters can be specified by Python classes or string analogs, similar to the command-line tool. - -Example of using native Python classes to set ``input_shape``, ``mean_values`` and ``layout``: - -.. code-block:: py - :force: - - from openvino.runtime import PartialShape, Layout - from openvino.tools.mo import convert_model - - ov_model = convert_model(model, input_shape=PartialShape([1,3,100,100]), mean_values=[127, 127, 127], layout=Layout("NCHW")) - -Example of using strings for setting ``input_shape``, ``mean_values`` and ``layout``: - -.. code-block:: py - :force: - - from openvino.runtime import Layout - from openvino.tools.mo import convert_model - - ov_model = convert_model(model, input_shape="[1,3,100,100]", mean_values="[127,127,127]", layout="NCHW") - - -The ``input`` parameter can be set by a ``tuple`` with a name, shape, and type. The input name of the type string is required in the tuple. The shape and type are optional. -The shape can be a ``list`` or ``tuple`` of dimensions (``int`` or ``openvino.runtime.Dimension``), or ``openvino.runtime.PartialShape``, or ``openvino.runtime.Shape``. The type can be of numpy type or ``openvino.runtime.Type``. - -Example of using a tuple in the ``input`` parameter to cut a model: - -.. code-block:: py - :force: - - from openvino.tools.mo import convert_model - - ov_model = convert_model(model, input=("input_name", [3], np.float32)) - -For complex cases, when a value needs to be set in the ``input`` parameter, the ``InputCutInfo`` class can be used. ``InputCutInfo`` accepts four parameters: ``name``, ``shape``, ``type``, and ``value``. - -``InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])`` is equivalent of ``InputCutInfo(name="input_name", shape=[3], type=np.float32, value=[0.5, 2.1, 3.4])``. - -Supported types for ``InputCutInfo``: - -* name: ``string``. -* shape: ``list`` or ``tuple`` of dimensions (``int`` or ``openvino.runtime.Dimension``), ``openvino.runtime.PartialShape``, ``openvino.runtime.Shape``. -* type: ``numpy type``, ``openvino.runtime.Type``. -* value: ``numpy.ndarray``, ``list`` of numeric values, ``bool``. - -Example of using ``InputCutInfo`` to freeze an input with value: - -.. code-block:: py - :force: - - from openvino.tools.mo import convert_model, InputCutInfo - - ov_model = convert_model(model, input=InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])) - -To set parameters for models with multiple inputs, use ``list`` of parameters. -Parameters supporting ``list``: - -* input -* input_shape -* layout -* source_layout -* dest_layout -* mean_values -* scale_values - -Example of using lists to set shapes, types and layout for multiple inputs: - -.. code-block:: py - :force: - - from openvino.runtime import Layout - from openvino.tools.mo import convert_model, LayoutMap - - ov_model = convert_model(model, input=[("input1", [1,3,100,100], np.float32), ("input2", [1,3,100,100], np.float32)], layout=[Layout("NCHW"), LayoutMap("NCHW", "NHWC")]) - -``layout``, ``source_layout`` and ``dest_layout`` accept an ``openvino.runtime.Layout`` object or ``string``. - -Example of using the ``Layout`` class to set the layout of a model input: - -.. code-block:: py - :force: - - from openvino.runtime import Layout - from openvino.tools.mo import convert_model - - ov_model = convert_model(model, source_layout=Layout("NCHW")) - -To set both source and destination layouts in the ``layout`` parameter, use the ``LayoutMap`` class. ``LayoutMap`` accepts two parameters: ``source_layout`` and ``target_layout``. 
- -``LayoutMap("NCHW", "NHWC")`` is equivalent to ``LayoutMap(source_layout="NCHW", target_layout="NHWC")``. - -Example of using the ``LayoutMap`` class to change the layout of a model input: - -.. code-block:: py - :force: - - from openvino.tools.mo import convert_model, LayoutMap - - ov_model = convert_model(model, layout=LayoutMap("NCHW", "NHWC")) - -Example of using the ``serialize`` method to save the converted model to OpenVINO IR: - -.. code-block:: py - :force: - - from openvino.runtime import serialize - - serialize(ov_model, "model.xml") - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-cutting-parts-of-a-model.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-cutting-parts-of-a-model.rst deleted file mode 100644 index 0406602a6e51fa..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-cutting-parts-of-a-model.rst +++ /dev/null @@ -1,585 +0,0 @@ -[LEGACY] Cutting Off Parts of a Model -================================================ - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - -Sometimes, it is necessary to remove parts of a model when converting it to OpenVINO IR. This chapter describes how to do it, using model conversion API parameters. Model cutting applies mostly to TensorFlow models, which is why TensorFlow will be used in this chapter's examples, but it may be also useful for other frameworks. - -Purpose of Model Cutting -######################## - -The following examples are the situations when model cutting is useful or even required: - -* A model has pre- or post-processing parts that cannot be translated to existing OpenVINO operations. -* A model has a training part that is convenient to be kept in the model but not used during inference. -* A model is too complex be converted at once, because it contains a lot of unsupported operations that cannot be easily implemented as custom layers. -* A problem occurs with model conversion or inference in OpenVINO™ Runtime. To identify the issue, limit the conversion scope by iterative search for problematic areas in the model. -* A single custom layer or a combination of custom layers is isolated for debugging purposes. - -.. note:: - - Internally, when you run model conversion API, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such kind of layers, model conversion API classifies them as custom. - -Model conversion API parameters -############################### - -Model conversion API provides ``input`` and ``output`` command-line options to specify new entry and exit nodes, while ignoring the rest of the model: - -* ``input`` option accepts a list of layer names of the input model that should be treated as new entry points to the model. See the full list of accepted types for input on :doc:`Model Conversion Python API <[legacy]-convert-models-as-python-objects>` page. -* ``output`` option accepts a list of layer names of the input model that should be treated as new exit points from the model. 
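As a minimal sketch of combining the two parameters in a single legacy ``convert_model()`` call (the file and node names below are placeholders, not taken from a real model):

.. code-block:: py
   :force:

   from openvino.tools.mo import convert_model

   # "new_input_node" becomes the new entry point and "new_output_node" the new
   # exit point; the parts of the graph outside this range are cut off.
   ov_model = convert_model("model.pb",
                            input="new_input_node",
                            output="new_output_node")

The rest of this chapter walks through the same mechanism on a concrete Inception V1 model.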
- -The ``input`` option is required for cases unrelated to model cutting. For example, when the model contains several inputs and ``input_shape`` or ``mean_values`` options are used, the ``input`` option specifies the order of input nodes for correct mapping between multiple items provided in ``input_shape`` and ``mean_values`` and the inputs in the model. - -Model cutting is illustrated with the Inception V1 model, found in the ``models/research/slim`` repository. To proceed with this chapter, make sure you do the necessary steps to :doc:`prepare the model for model conversion <[legacy]-setting-input-shapes>`. - -Default Behavior without input and output -######################################### - -The input model is converted as a whole if neither ``input`` nor ``output`` command line options are used. All ``Placeholder`` operations in a TensorFlow graph are automatically identified as entry points. The ``Input`` layer type is generated for each of them. All nodes that have no consumers are automatically identified as exit points. - -For Inception_V1, there is one ``Placeholder``: input. If the model is viewed in TensorBoard, the input operation is easy to find: - -.. image:: ../../../../assets/images/inception_v1_std_input.svg - :alt: Placeholder in Inception V1 - -``Reshape`` is the only output operation, which is enclosed in a nested name scope of ``InceptionV1/Logits/Predictions``, under the full name of ``InceptionV1/Logits/Predictions/Reshape_1``. - -In TensorBoard, along with some of its predecessors, it looks as follows: - -.. image:: ../../../../assets/images/inception_v1_std_output.svg - :alt: TensorBoard with predecessors - -Convert this model to ``ov.Model``: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --output_dir - - -``ov.Model`` can be serialized with the ``ov.serialize()`` method to Intermediate Representation which can be used for model structure exploring. -In IR, the structure of a model has the following layers: - -.. code-block:: xml - :force: - - - - - 1 - 3 - 224 - 224 - - - - - -The ``input`` layer is converted from the TensorFlow graph ``Placeholder`` operation ``input`` and has the same name. - -The ``-b`` option is used here for conversion to override a possible undefined batch size (coded as -1 in TensorFlow models). If a model was frozen with a defined batch size, you may omit this option in all the examples. - -The last layer in the model is ``InceptionV1/Logits/Predictions/Reshape_1``, which matches an output operation in the TensorFlow graph: - -.. code-block:: xml - :force: - - - - - - 1 - 1001 - - - - - 1 - 1001 - - - - - -Due to automatic identification of inputs and outputs, providing the ``input`` and ``output`` options to convert the whole model is not required. The following commands are equivalent for the Inception V1 model: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1) - - ov_model = convert_model("inception_v1.pb", batch=1, input="input", output="InceptionV1/Logits/Predictions/Reshape_1") - - .. tab-item:: CLI - :sync: cli - - .. 
code-block:: sh - - mo --input_model inception_v1.pb -b 1 --output_dir - - mo --input_model inception_v1.pb -b 1 --input input --output InceptionV1/Logits/Predictions/Reshape_1 --output_dir - - -The Intermediate Representations are identical for both conversions. The same is true if the model has multiple inputs and/or outputs. - -Model Cutting -#################### - -Now, consider how to cut some parts of the model off. This chapter describes the first convolution block ``InceptionV1/InceptionV1/Conv2d_1a_7x7`` of the Inception V1 model to illustrate cutting: - -.. image:: ../../../../assets/images/inception_v1_first_block.svg - :alt: Inception V1 first convolution block - -Cutting at the End -++++++++++++++++++++ - -If you want to cut your model at the end, you have the following options: - -1. The following command cuts off the rest of the model after the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu``, making this node the last in the model: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir - - - The resulting Intermediate Representation has three layers: - - .. code-block:: xml - :force: - - - - - - - ... - - - - - - ... - - - ... - - - - - - - - - ... - - - ... - - - - - - - - - - - As shown in the TensorBoard picture, the original model has more nodes than its Intermediate Representation. Model conversion, using ``convert_model()``, consists of a set of model transformations, including fusing of batch normalization ``InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm`` with convolution ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution``, which is why it is not present in the final model. This is not an effect of the ``output`` option, it is the typical behavior of model conversion API for batch normalizations and convolutions. The effect of the ``output`` is that the ``ReLU`` layer becomes the last one in the converted model. - -2. The following command cuts the edge that comes from 0 output port of the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` and the rest of the model, making this node the last one in the model: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu:0") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu:0 --output_dir - - - The resulting Intermediate Representation has three layers, which are the same as in the previous case: - - .. code-block:: xml - :force: - - - - - - - ... - - - - - - ... - - - ... - - - - - - - - - ... - - - ... - - - - - - - - - - - This type of cutting is useful for cutting multiple output edges. - -3. The following command cuts the edge that comes to 0 input port of the ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` and the rest of the model including ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu``, deleting this node and making the previous node ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D`` the last in the model: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. 
code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, output="0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --output=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir - - - The resulting Intermediate Representation has two layers, which are the same as the first two layers in the previous case: - - .. code-block:: xml - :force: - - - - - - - ... - - - - - - ... - - - ... - - - - - - - - - - - - - -Cutting from the Beginning -++++++++++++++++++++++++++ - -If you want to go further and cut the beginning of the model, leaving only the ``ReLU`` layer, you have the following options: - -1. Use the following parameters, where ``input`` and ``output`` specify the same node in the graph: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu", input="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model=inception_v1.pb -b 1 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --input InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir - - - The resulting Intermediate Representation looks as follows: - - .. code-block:: xml - :force: - - - - - - - ... - - - - - ... - - - ... - - - - - - - - - - ``Input`` layer is automatically created to feed the layer that is converted from the node specified in ``input``, which is ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` in this case. ``convert_model()`` does not replace the ``ReLU`` node by the ``Input`` layer. It produces such ``ov.Model`` to make the node the first executable node in the final Intermediate Representation. Therefore, model conversion creates enough ``Inputs`` to feed all input ports of the node that is passed in ``input``. - - Even though ``input_shape`` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point, at which the new input is defined. It has the same shape ``[1,64,112,112]`` as the model converted as a whole or without cutting off the beginning. - -2. Cut the edge incoming to layer by port number. To specify the incoming port, use the following notation ``input=port:input_node``. To cut everything before ``ReLU`` layer, cut the edge incoming to port 0 of ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` node: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, input="0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu", output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir - - - The resulting Intermediate Representation looks as follows: - - .. code-block:: xml - :force: - - - - - - - ... - - - - - ... - - - ... - - - - - - - - - - ``Input`` layer is automatically created to feed the layer that is converted from the node specified in ``input``, which is ``InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`` in this case. 
``convert_model()`` does not replace the ``ReLU`` node by the ``Input`` layer, it produces such ``ov.Model`` to make the node be the first executable node in the final Intermediate Representation. Therefore, ``convert_model()`` creates enough ``Inputs`` to feed all input ports of the node that is passed in ``input``. - - Even though ``input_shape`` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point, at which the new input is defined. It has the same shape ``[1,64,112,112]`` as the model converted as a whole or without cutting off the beginning. - -3. Cut edge outcoming from layer by port number. To specify the outcoming port, use the following notation ``input=input_node:port``. To cut everything before ``ReLU`` layer, cut edge from ``InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1`` node to ``ReLU``: - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, input="InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0", output="InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir - - - The resulting Intermediate Representation looks as follows: - - .. code-block:: xml - :force: - - - - - - - ... - - - - - ... - - - ... - - layer> - - - - - - - -Inputs with Multiple Input Ports -################################ - -There are operations that contain more than one input port. In the example considered here, the convolution ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`` is such operation. When ``input_shape`` is not provided, a new ``Input`` layer is created for each dynamic input port for the node. If a port is evaluated to a constant blob, this constant remains in the model and a corresponding input layer is not created. TensorFlow convolution used in this model contains two ports: - -* port 0: input tensor for convolution (dynamic) -* port 1: convolution weights (constant) - -Following this behavior, ``convert_model()`` creates an ``Input`` layer for port 0 only, leaving port 1 as a constant. Thus, the result of: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", batch=1, input="InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --output_dir - - -is identical to the result of conversion of the model as a whole, because this convolution is the first executable operation in Inception V1. - -Different behavior occurs when ``input_shape`` is also used as an attempt to override the input shape: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", input="InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution", input_shape=[1,224,224,3]) - - .. tab-item:: CLI - :sync: cli - - .. 
code-block:: sh - - mo --input_model inception_v1.pb--input=InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape [1,224,224,3] --output_dir - - -An error occurs (for more information, see the :ref:`Model Conversion FAQ `): - -.. code-block:: sh - - [ ERROR ] Node InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution has more than 1 input and input shapes were provided. - Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer. - For more information, see FAQ #30 - -When ``input_shape`` is specified and the node contains multiple input ports, you need to provide an input port index together with an input node name. The input port index is specified in front of the node name with ``‘:’`` as a separator (``PORT:NODE``). In this case, the port index 0 of the node ``InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`` should be specified as ``0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution``. - -The correct command line is: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("inception_v1.pb", input="0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution", input_shape=[1,224,224,3]) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model inception_v1.pb --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3] --output_dir - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation.rst deleted file mode 100644 index 1e1fe61e717eb3..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation.rst +++ /dev/null @@ -1,253 +0,0 @@ -[LEGACY] Embedding Preprocessing Computation -===================================================== - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Conversion Parameters <../../../../openvino-workflow/model-preparation/conversion-parameters>` article. - -Input data for inference can be different from the training dataset and requires -additional preprocessing before inference. To accelerate the whole pipeline including -preprocessing and inference, model conversion API provides special parameters such as ``mean_values``, -``scale_values``, ``reverse_input_channels``, and ``layout``. - -Based on these parameters, model conversion API generates OpenVINO IR with additionally inserted sub-graphs -to perform the defined preprocessing. This preprocessing block can perform mean-scale -normalization of input data, reverting data along channel dimension, and changing -the data layout. See the following sections for details on the parameters, or the -:doc:`Overview of Preprocessing API <../../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` -for the same functionality in OpenVINO Runtime. 
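For reference, the same preprocessing can be embedded with the runtime Preprocessing API mentioned above. The sketch below assumes a single-input model; the layout, mean, and scale values are illustrative only:

.. code-block:: py
   :force:

   import openvino as ov

   ov_model = ov.convert_model("model.onnx")

   prep = ov.preprocess.PrePostProcessor(ov_model)
   # Describe the layout of the data the application feeds and the layout the model expects.
   prep.input().tensor().set_layout(ov.Layout("NHWC"))
   prep.input().model().set_layout(ov.Layout("NCHW"))
   # Channels are reversed first, then the mean is subtracted and the scale divisor applied.
   prep.input().preprocess().reverse_channels().mean([123, 117, 104]).scale(255)
   ov_model = prep.build()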
- -Specifying Layout -################# - -You may need to set input layouts, as it is required by some preprocessing, for -example, setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB). - -Layout defines the meaning of dimensions in shape and can be specified for both -inputs and outputs. Some preprocessing requires to set input layouts, for example, -setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB). - -For the layout syntax, check the :doc:`Layout API overview <../../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview>`. -To specify the layout, you can use the ``layout`` option followed by the layout value. - -For example, the following command specifies the ``NHWC`` layout for a Tensorflow -``nasnet_large`` model that was exported to the ONNX format: - - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("tf_nasnet_large.onnx", layout="nhwc") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model tf_nasnet_large.onnx --layout nhwc - - -Additionally, if a model has more than one input or needs both input and output -layouts specified, you need to provide the name of each input or output to apply the layout. - -For example, the following command specifies the layout for an ONNX ``Yolo v3 Tiny`` -model with its first input ``input_1`` in ``NCHW`` layout and second input ``image_shape`` -having two dimensions: batch and size of the image expressed as the ``N?`` layout: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("yolov3-tiny.onnx", layout={"input_1": "nchw", "image_shape": "n?"}) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model yolov3-tiny.onnx --layout input_1(nchw),image_shape(n?) - - -Changing Model Layout -##################### - -Changing the model layout may be necessary if it differs from the one presented by input data. -Use either ``layout`` or ``source_layout`` with ``target_layout`` to change the layout. - -For example, for the same ``nasnet_large`` model mentioned previously, you can use -the following commands to provide data in the ``NCHW`` layout: - - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("tf_nasnet_large.onnx", source_layout="nhwc", target_layout="nchw") - - ov_model = convert_model("tf_nasnet_large.onnx", layout="nhwc->nchw") - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model tf_nasnet_large.onnx --source_layout nhwc --target_layout nchw - - mo --input_model tf_nasnet_large.onnx --layout "nhwc->nchw" - - -Again, if a model has more than one input or needs both input and output layouts -specified, you need to provide the name of each input or output to apply the layout. - -For example, to provide data in the ``NHWC`` layout for the `Yolo v3 Tiny` model -mentioned earlier, use the following commands: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. 
code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("yolov3-tiny.onnx", source_layout={"input_1": "nchw", "image_shape": "n?"}, target_layout={"input_1": "nhwc"}) - - ov_model = convert_model("yolov3-tiny.onnx", layout={"input_1": "nchw->nhwc", "image_shape": "n?"} - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model yolov3-tiny.onnx --source_layout "input_1(nchw),image_shape(n?)" --target_layout "input_1(nhwc)" - - mo --input_model yolov3-tiny.onnx --layout "input_1(nchw->nhwc),image_shape(n?)" - - -Specifying Mean and Scale Values -################################ - -Neural network models are usually trained with the normalized input data. This -means that the input data values are converted to be in a specific range, for example, -``[0, 1]`` or ``[-1, 1]``. Sometimes, the mean values (mean images) are subtracted -from the input data values as part of the preprocessing. - -There are two cases of how the input data preprocessing is implemented. - -* The input preprocessing operations are a part of a model. - - In this case, the application does not perform a separate preprocessing step: - everything is embedded into the model itself. ``convert_model()`` will generate the - ov.Model with required preprocessing operations, and no ``mean`` and - ``scale`` parameters are required. -* The input preprocessing operations are not a part of a model and the preprocessing - is performed within the application which feeds the model with input data. - - In this case, information about mean/scale values should be provided to ``convert_model()`` - to embed it to the generated ``ov.Model``. - -Model conversion API represented by ``convert_model()`` provides command-line parameters -to specify the values: ``mean_values``, ``scale_values``, ``scale``. Using these parameters, -model conversion API embeds the corresponding preprocessing block for mean-value -normalization of the input data and optimizes this block so that the preprocessing -takes negligible time for inference. - -For example, the following command runs model conversion for the PaddlePaddle UNet -model and applies mean-scale normalization to the input data: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("unet.pdmodel", mean_values=[123,117,104], scale=255) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255 - - -Reversing Input Channels -######################## - -Sometimes, input images for your application can be of the RGB (or BGR) format -and the model is trained on images of the BGR (or RGB) format, which is in the -opposite order of color channels. In this case, it is important to preprocess the -input images by reverting the color channels before inference. - -To embed this preprocessing step into ``ov.Model``, model conversion API provides the -``reverse_input_channels`` command-line parameter to shuffle the color channels. - -The ``reverse_input_channels`` parameter can be used to preprocess the model -input in the following cases: - -* Only one dimension in the input shape has a size equal to ``3``. -* One dimension has an undefined size and is marked as ``C`` channel using ``layout`` parameters. 
- -Using the ``reverse_input_channels`` parameter, model conversion API embeds the corresponding -preprocessing block for reverting the input data along channel dimension and optimizes -this block so that the preprocessing takes only negligible time for inference. - -For example, the following command launches model conversion for the TensorFlow AlexNet -model and embeds the ``reverse_input_channel`` preprocessing block into OpenVINO IR: - - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("alexnet.pb", reverse_input_channels=True) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model alexnet.pb --reverse_input_channels - - -.. note:: - - If both mean and scale values are specified, the mean is subtracted first and - then the scale is applied regardless of the order of options in the command-line. - Input values are *divided* by the scale value(s). If the ``reverse_input_channels`` - option is also used, ``reverse_input_channels`` will be applied first, then ``mean`` - and after that ``scale``. The data flow in the model looks as follows: - ``Parameter -> ReverseInputChannels -> Mean apply-> Scale apply -> the original body of the model``. - -Additional Resources -#################### - -* :doc:`Overview of Preprocessing API <../../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-model-optimizer-faq.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-model-optimizer-faq.rst deleted file mode 100644 index f035101d715e9b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-model-optimizer-faq.rst +++ /dev/null @@ -1,947 +0,0 @@ -[LEGACY] Model Optimizer Frequently Asked Questions -=========================================================== - - -.. important:: - - All of the issues below refer to :doc:`legacy functionalities <../legacy-model-optimizer-extensibility>`. - -If your question is not covered by the topics below, use the -`OpenVINO Support page `__, -where you can participate in a free forum discussion. - -.. warning:: - - Note that OpenVINO support for Apache MXNet, Caffe, and Kaldi has been discontinued. - -.. _question-1: - -Q1. What does the message "[ ERROR ]: Current caffe.proto does not contain field" mean? -##################################################################################################################################################### - -**A:** Internally, Model Optimizer uses a protobuf library to parse and load Caffe models. This library requires a file grammar and a generated parser. For a Caffe fallback, Model Optimizer uses a Caffe-generated parser for a Caffe-specific ``.proto`` file (which is usually located in the ``src/caffe/proto`` directory). Make sure that you install exactly the same version of Caffe (with Python interface) as that was used to create the model. - -If you just want to experiment with Model Optimizer and test a Python extension for working with your custom -layers without building Caffe, add the layer description to the ``caffe.proto`` file and generate a parser for it. 
- -For example, to add the description of the ``CustomReshape`` layer, which is an artificial layer not present in any ``caffe.proto`` files: - -1. Add the following lines to the ``caffe.proto`` file: - - .. code-block:: shell - - package mo_caffe; // To avoid conflict with Caffe system, it is highly recommended to specify different package name. - ... - message LayerParameter { - // Other layers parameters description. - ... - optional CustomReshapeParameter custom_reshape_param = 546; // 546 - ID is any number not present in caffe.proto. - } - // The lines from here to the end of the file are describing contents of this parameter. - message CustomReshapeParameter { - optional BlobShape shape = 1; // Just use the same parameter type as some other Caffe layers. - } - - -2. Generate a new parser: - - .. code-block:: shell - - cd /openvino/tools/mo/front/caffe/proto - python3 generate_caffe_pb2.py --input_proto /src/caffe/proto/caffe.proto - - - where ``PATH_TO_CUSTOM_CAFFE`` is the path to the root directory of custom Caffe. - -3. Now, Model Optimizer is able to load the model into memory and start working with your extensions if there are any. - - However, since your model has custom layers, you must register them as custom. To learn more about it, refer to the :doc:`[Legacy] Custom Layers in Model Optimizer <../legacy-model-optimizer-extensibility>`. - -.. _question-2: - -Q2. How do I create a bare caffemodel, if I have only prototxt? -##################################################################################################################################################### - -**A:** You need the Caffe Python interface. In this case, do the following: - -.. code-block:: shell - - python3 - import caffe - net = caffe.Net('/my_net.prototxt', caffe.TEST) - net.save('/my_net.caffemodel') - - -.. _question-3: - -Q3. What does the message "[ ERROR ]: Unable to create ports for node with id" mean? -##################################################################################################################################################### - -**A:** Most likely, Model Optimizer does not know how to infer output shapes of some layers in the given topology. -To lessen the scope, compile the list of layers that are custom for Model Optimizer: present in the topology, -absent in the :doc:`list of supported operations <../../../../about-openvino/compatibility-and-support/supported-operations>` for the target framework. -Then, refer to available options in the corresponding section in the :doc:`[Legacy] Custom Layers in Model Optimizer <../legacy-model-optimizer-extensibility>` page. - -.. _question-7: - -Q7. What does the message "Invalid proto file: there is neither 'layer' nor 'layers' top-level messages" mean? -##################################################################################################################################################### - -**A:** The structure of any Caffe topology is described in the ``caffe.proto`` file of any Caffe version. For example, the following ``.proto`` file in Model Optimizer is used by default: ``mo/front/caffe/proto/my_caffe.proto``, with the structure: - -.. code-block:: sh - - message NetParameter { - // ... some other parameters - // The layers that make up the net. Each of their configurations, including - // connectivity and behavior, is specified as a LayerParameter. - repeated LayerParameter layer = 100; // ID 100 so layers are printed last. - // DEPRECATED: use 'layer' instead. 
- repeated V1LayerParameter layers = 2; - } - - -This means that any topology should contain layers as top-level structures in ``prototxt``. For example, see the `LeNet topology `__. - -.. _question-8: - -Q8. What does the message "Old-style inputs (via 'input_dims') are not supported. Please specify inputs via 'input_shape'" mean? -##################################################################################################################################################### - -**A:** The structure of any Caffe topology is described in the ``caffe.proto`` file for any Caffe version. For example, the following ``.proto`` file in Model Optimizer is used by default: ``mo/front/caffe/proto/my_caffe.proto``, with the structure: - -.. code-block:: sh - - message NetParameter { - - optional string name = 1; // consider giving the network a name - // DEPRECATED. See InputParameter. The input blobs to the network. - repeated string input = 3; - // DEPRECATED. See InputParameter. The shape of the input blobs. - repeated BlobShape input_shape = 8; - // 4D input dimensions -- deprecated. Use "input_shape" instead. - // If specified, for each input blob there should be four - // values specifying the num, channels, height and width of the input blob. - // Thus, there should be a total of (4 * #input) numbers. - repeated int32 input_dim = 4; - // ... other parameters - } - - -Therefore, the input layer of the provided model must be specified in one of the following styles: - -* - - .. code-block:: sh - - input: "data" - input_shape - { - dim: 1 - dim: 3 - dim: 227 - dim: 227 - } - - -* - - .. code-block:: sh - - input: "data" - input_shape - { - dim: 1 - dim: 3 - dim: 600 - dim: 1000 - } - input: "im_info" - input_shape - { - dim: 1 - dim: 3 - } - -* - - .. code-block:: sh - - layer - { - name: "data" - type: "Input" - top: "data" - input_param {shape: {dim: 1 dim: 3 dim: 600 dim: 1000}} - } - layer - { - name: "im_info" - type: "Input" - top: "im_info" - input_param {shape: {dim: 1 dim: 3}} - } - -* - - .. code-block:: sh - - input: "data" - input_dim: 1 - input_dim: 3 - input_dim: 500 - - -However, if your model contains more than one input, Model Optimizer is able to convert the model with inputs specified in one of the first three forms in the above list. The 4th form is not supported for multi-input topologies. - -.. _question-9: - -Q9. What does the message "Mean file for topologies with multiple inputs is not supported" mean? -##################################################################################################################################################### - -**A:** Model Optimizer does not support mean file processing for topologies with more than one input. In this case, you need to perform preprocessing of the inputs for a generated Intermediate Representation in OpenVINO Runtime to perform subtraction for every input of your multi-input model. See the :doc:`Overview of Preprocessing <../../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` for details. - -.. _question-11: - -Q11. What does the message "Invalid prototxt file: value error" mean? -##################################################################################################################################################### - -**A:** There are multiple reasons why Model Optimizer does not accept a Caffe topology. See FAQs :ref:`#7 ` and :ref:`#20 `. - -.. _question-12: - -Q12. 
What does the message "Error happened while constructing caffe.Net in the Caffe fallback function" mean? -##################################################################################################################################################### - -**A:** Model Optimizer tried to infer a specified layer via the Caffe framework. However, it cannot construct a net using the Caffe Python interface. Make sure that your ``caffemodel`` and ``prototxt`` files are correct. To ensure that the problem is not in the ``prototxt`` file, see FAQ :ref:`#2 `. - -.. _question-13: - -Q13. What does the message "Cannot infer shapes due to exception in Caffe" mean? -##################################################################################################################################################### - -**A:** Model Optimizer tried to infer a custom layer via the Caffe framework, but the model could not be inferred using Caffe. This might happen if you try to convert the model with some noise weights and biases, which conflict with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility>` page. - -.. _question-14: - -Q14. What does the message "Cannot infer shape for node {} because there is no Caffe available. Please register python infer function for op or use Caffe for shape inference" mean? -#################################################################################################################################################################################### - -**A:** Your model contains a custom layer and you have correctly registered it with the ``CustomLayersMapping.xml`` file. These steps are required to offload shape inference of the custom layer with the help of the system Caffe. However, Model Optimizer could not import a Caffe package. Make sure that you have built Caffe with a ``pycaffe`` target and added it to the ``PYTHONPATH`` environment variable. At the same time, it is highly recommended to avoid dependency on Caffe and write your own Model Optimizer extension for your custom layer. For more information, refer to FAQ :ref:`#44 `. - -.. _question-15: - -Q15. What does the message "Framework name can not be deduced from the given options. Use --framework to choose one of Caffe, TensorFlow, MXNet" mean? -###################################################################################################################################################### - -**A:** You have run Model Optimizer without a flag ``--framework caffe|tf``. Model Optimizer tries to deduce the framework by the extension of input model file (``.pb`` for TensorFlow, ``.caffemodel`` for Caffe, ``.params`` for Apache MXNet). Your input model might have a different extension and you need to explicitly set the source framework. For example, use ``--framework caffe``. - -.. _question-16: - -Q16. What does the message "Input shape is required to convert MXNet model. Please provide it with --input_shape" mean? -##################################################################################################################################################### - -**A:** Input shape was not provided. That is mandatory for converting an MXNet model to the OpenVINO Intermediate Representation, because MXNet models do not contain information about input shapes. Use the ``--input_shape`` flag to specify it. 
For more information about using the ``--input_shape``, refer to FAQ :ref:`#56 `. - -.. _question-17: - -.. _question-18: - -.. _question-19: - -Q19. What does the message "Both --scale and --scale_values are defined. Specify either scale factor or scale values per input channels" mean? -##################################################################################################################################################### - -**A:** The ``--scale`` option sets a scaling factor for all channels, while ``--scale_values`` sets a scaling factor per each channel. Using both of them simultaneously produces ambiguity, so you must use only one of them. For more information, refer to the **Using Framework-Agnostic Conversion Parameters** section: for :doc:`Converting a TensorFlow Model <[legacy]-supported-model-formats/[legacy]-convert-tensorflow>`. - -.. _question-20: - -Q20. What does the message "Cannot find prototxt file: for Caffe please specify --input_proto - a protobuf file that stores topology and --input_model that stores pre-trained weights" mean? -############################################################################################################################################################################################## - -**A:** Model Optimizer cannot find a ``.prototxt`` file for a specified model. By default, it must be located in the same directory as the input model with the same name (except extension). If any of these conditions is not satisfied, use ``--input_proto`` to specify the path to the ``.prototxt`` file. - -.. _question-21: - -.. _question-22: - -Q22. What does the message "Failed to create directory .. . Permission denied!" mean? -##################################################################################################################################################### - -**A:** Model Optimizer cannot create a directory specified via ``--output_dir``. Make sure that you have enough permissions to create the specified directory. - -.. _question-23: - -Q23. What does the message "Discovered data node without inputs and value" mean? -##################################################################################################################################################### - -**A:** One of the layers in the specified topology might not have inputs or values. Make sure that the provided ``caffemodel`` and ``protobuf`` files are correct. - -.. _question-24: - -Q24. What does the message "Part of the nodes was not translated to IE. Stopped" mean? -##################################################################################################################################################### - -**A:** Some of the operations are not supported by OpenVINO Runtime and cannot be translated to OpenVINO Intermediate Representation. You can extend Model Optimizer by allowing generation of new types of operations and implement these operations in the dedicated OpenVINO plugins. For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide. - -.. _question-25: - -Q25. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean? -##################################################################################################################################################### - -**A:** Model Optimizer cannot build a graph based on a specified model. Most likely, it is incorrect. - -.. 
_question-26: - -Q26. What does the message "Node does not exist in the graph" mean? -##################################################################################################################################################### - -**A:** You might have specified an output node via the ``--output`` flag that does not exist in a provided model. Make sure that the specified output is correct and this node exists in the current model. - -.. _question-27: - -Q27. What does the message "--input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net" mean? -############################################################################################################################################################################## - -**A:** Most likely, Model Optimizer tried to cut the model by a specified input. However, other inputs are needed. - -.. _question-28: - -Q28. What does the message "Placeholder node does not have an input port, but input port was provided" mean? -##################################################################################################################################################### - -**A:** You might have specified a placeholder node with an input node, while the placeholder node does not have it in the model. - -.. _question-29: - -Q29. What does the message "Port index is out of number of available input ports for node" mean? -##################################################################################################################################################### - -**A:** This error occurs when an incorrect input port is specified with the ``--input`` command line argument. When using ``--input``, you may optionally specify an input port in the form: ``X:node_name``, where ``X`` is an integer index of the input port starting from 0 and ``node_name`` is the name of a node in the model. This error occurs when the specified input port ``X`` is not in the range 0..(n-1), where n is the number of input ports for the node. Specify a correct port index, or do not use it if it is not needed. - -.. _question-30: - -Q30. What does the message "Node has more than 1 input and input shapes were provided. Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer" mean? -###################################################################################################################################################################################################### - -**A:** This error occurs when an incorrect combination of the ``--input`` and ``--input_shape`` command line options is used. Using both ``--input`` and ``--input_shape`` is valid only if ``--input`` points to the ``Placeholder`` node, a node with one input port or ``--input`` has the form ``PORT:NODE``, where ``PORT`` is an integer port index of input for node ``NODE``. Otherwise, the combination of ``--input`` and ``--input_shape`` is incorrect. - - -.. _question-31: - -Q31. What does the message "Input port > 0 in --input is not supported if --input_shape is not provided. Node: NAME_OF_THE_NODE. Omit port index and all input ports will be replaced by placeholders. Or provide --input_shape" mean? 
-#######################################################################################################################################################################################################################################
-
-**A:** When using the ``PORT:NODE`` notation for the ``--input`` command line argument with ``PORT`` > 0, you should specify ``--input_shape`` for this input. This is a limitation of the current Model Optimizer implementation.
-
-.. note:: This message is no longer relevant, since the limitation on the input port index for model truncation has been resolved.
-
-.. _question-32:
-
-Q32. What does the message "No or multiple placeholders in the model, but only one shape is provided, cannot set it" mean?
-#####################################################################################################################################################
-
-**A:** You might have provided only one shape for the placeholder, while there are none or multiple inputs in the model. Make sure that you have provided the correct data for placeholder nodes.
-
-.. _question-33:
-
-Q33. What does the message "The amount of input nodes for port is not equal to 1" mean?
-#####################################################################################################################################################
-
-**A:** This error occurs when the ``SubgraphMatch.single_input_node`` function is used for an input port that supplies more than one node in a sub-graph. The ``single_input_node`` function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the ``input_nodes`` or ``node_by_pattern`` function instead of ``single_input_node``. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions>` guide.
-
-.. _question-34:
-
-Q34. What does the message "Output node for port has already been specified" mean?
-#####################################################################################################################################################
-
-**A:** This error occurs when the ``SubgraphMatch._add_output_node`` function is called manually from the user's extension code. This is an internal function, and you should not call it directly.
-
-.. _question-35:
-
-Q35. What does the message "Unsupported match kind.... Match kinds "points" or "scope" are supported only" mean?
-#####################################################################################################################################################
-
-**A:** While using a configuration file to implement a TensorFlow front replacement extension, an incorrect match kind was used. Only the ``points`` and ``scope`` match kinds are supported. For more details, refer to the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility>` guide.
-
-.. _question-36:
-
-Q36. What does the message "Cannot write an event file for the TensorBoard to directory" mean?
-#####################################################################################################################################################
-
-**A:** Model Optimizer tried to write an event file in the specified directory but failed to do that. That could happen when the specified directory does not exist or you do not have permissions to write in it.
-
-.. _question-37:
-
-Q37. What does the message "There is no registered 'infer' function for node with op = .. . Please implement this function in the extensions" mean?
-#####################################################################################################################################################
-
-**A:** Most likely, you tried to extend Model Optimizer with a new primitive, but you did not specify an infer function. For more information on extensions, see the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide.
-
-.. _question-38:
-
-Q38. What does the message "Stopped shape/value propagation at node" mean?
-#####################################################################################################################################################
-
-**A:** Model Optimizer cannot infer shapes or values for the specified node. This can happen for the following reasons: a bug exists in the custom shape infer function, the node inputs have incorrect values/shapes, or the input shapes are incorrect.
-
-.. _question-39:
-
-Q39. What does the message "The input with shape .. does not have the batch dimension" mean?
-#####################################################################################################################################################
-
-**A:** The batch dimension is the first dimension in the shape and it should be equal to 1 or undefined. In your case, it is neither equal to 1 nor undefined, which is why the ``-b`` shortcut produces undefined and unspecified behavior. To resolve the issue, specify full shapes for each input with the ``--input_shape`` option. Run Model Optimizer with the ``--help`` option to learn more about the notation for input shapes.
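-
-For example, instead of using the ``-b`` shortcut, the full shape can be passed explicitly (the model file name and shape values below are illustrative):
-
-.. code-block:: sh
-
-    mo --input_model my_model.pb --input_shape [1,224,224,3]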
-
-.. _question-40:
-
-Q40. What does the message "Not all output shapes were inferred or fully defined for node" mean?
-#####################################################################################################################################################
-
-**A:** Most likely, the shape is not defined (partially or fully) for the specified node. You can use ``--input_shape`` with positive integers to override model input shapes.
-
-.. _question-41:
-
-Q41. What does the message "Shape for tensor is not defined. Can not proceed" mean?
-#####################################################################################################################################################
-
-**A:** This error occurs when the ``--input`` command-line option is used to cut a model and ``--input_shape`` is not used to override shapes for a node, so a shape for the node cannot be inferred by Model Optimizer. You need to help Model Optimizer by specifying shapes with ``--input_shape`` for each node specified with the ``--input`` command-line option.
-
-.. _question-42:
-
-Q42. What does the message "Module TensorFlow was not found. Please install TensorFlow 1.2 or higher" mean?
-#####################################################################################################################################################
-
-**A:** To convert TensorFlow models with Model Optimizer, TensorFlow 1.2 or newer must be installed. For more information on prerequisites, see the :doc:`Configuring Model Optimizer <../legacy-conversion-api>` guide.
-
-.. _question-43:
-
-Q43. 
What does the message "Cannot read the model file: it is incorrect TensorFlow model file or missing" mean? -##################################################################################################################################################### - -**A:** The model file should contain a frozen TensorFlow graph in the text or binary format. Make sure that ``--input_model_is_text`` is provided for a model in the text format. By default, a model is interpreted as binary file. - -.. _question-44: - -Q44. What does the message "Cannot pre-process TensorFlow graph after reading from model file. File is corrupt or has unsupported format" mean? -##################################################################################################################################################### - -**A:** Most likely, there is a problem with the specified file for the model. The file exists, but it has an invalid format or is corrupted. - -.. _question-45: - -Q45. What does the message "Found custom layer. Model Optimizer does not support this layer. Please, register it in CustomLayersMapping.xml or implement extension" mean? -########################################################################################################################################################################## - -**A:** This means that the layer ``{layer_name}`` is not supported in Model Optimizer. You will find a list of all unsupported layers in the corresponding section. You should implement the extensions for this layer. See :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` for more information. - -.. _question-46: - -Q46. What does the message "Custom replacement configuration file does not exist" mean? -##################################################################################################################################################### - -**A:** A path to the custom replacement configuration file was provided with the ``--transformations_config`` flag, but the file could not be found. Make sure the specified path is correct and the file exists. - -.. _question-47: - -Q47. What does the message "Extractors collection have case insensitive duplicates" mean? -##################################################################################################################################################### - -**A:** When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide. - -.. _question-48: - -Q48. What does the message "Input model name is not in an expected format, cannot extract iteration number" mean? -##################################################################################################################################################### - -**A:** Model Optimizer cannot load an MXNet model in the specified file format. Make sure you use the ``.json`` or ``.param`` format. - -.. _question-49: - -Q49. What does the message "Cannot convert type of placeholder because not all of its outputs are 'Cast' to float operations" mean? -##################################################################################################################################################### - -**A:** There are models where ``Placeholder`` has the UINT8 type and the first operation after it is 'Cast', which casts the input to FP32. 
Model Optimizer detected that the ``Placeholder`` has the UINT8 type, but the next operation is not 'Cast' to float. Model Optimizer does not support such a case. Make sure you change the model to have ``Placeholder`` for FP32. - -.. _question-50: - -Q50. What does the message "Data type is unsupported" mean? -##################################################################################################################################################### - -**A:** Model Optimizer cannot read the value with the specified data type. Currently, the following types are supported: bool, float16, float32, double, int8, int16, int32, int64, uint8, uint16, uint32, uint64, str. - -.. _question-51: - -Q51. What does the message "No node with name ..." mean? -##################################################################################################################################################### - -**A:** Model Optimizer tried to access a node that does not exist. This could happen if you have incorrectly specified placeholder, input or output node name. - -.. _question-52: - -Q52. What does the message "Module MXNet was not found. Please install MXNet 1.0.0" mean? -##################################################################################################################################################### - -**A:** To convert MXNet models with Model Optimizer, Apache MXNet 1.0.0 must be installed. For more information about prerequisites, see the :doc:`Configuring Model Optimizer <../legacy-conversion-api>` guide. - -.. _question-53: - -Q53. What does the message "The following error happened while loading MXNet model .." mean? -##################################################################################################################################################### - -**A:** Most likely, there is a problem with loading of the MXNet model. Make sure the specified path is correct, the model exists and is not corrupted, and you have sufficient permissions to work with it. - -.. _question-54: - -Q54. What does the message "The following error happened while processing input shapes: .." mean? -##################################################################################################################################################### - -**A:** Make sure inputs are defined and have correct shapes. You can use ``--input_shape`` with positive integers to override model input shapes. - -.. _question-55: - -Q55. What does the message "Attempt to register of custom name for the second time as class. Note that custom names are case-insensitive" mean? -##################################################################################################################################################### - -**A:** When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide. - -.. _question-56: - -Q56. What does the message "Both --input_shape and --batch were provided. Please, provide only one of them" mean? -##################################################################################################################################################### - -**A:** Specifying the batch and the input shapes at the same time is not supported. You must specify a desired batch as the first value of the input shape. - -.. _question-57: - -Q57. 
What does the message "Input shape .. cannot be parsed" mean? -##################################################################################################################################################### - -**A:** The specified input shape cannot be parsed. Define it in one of the following ways: - -* - - .. code-block:: shell - - mo --input_model .caffemodel --input_shape (1,3,227,227) - -* - - .. code-block:: shell - - mo --input_model .caffemodel --input_shape [1,3,227,227] - -* In case of multi input topology you should also specify inputs: - - .. code-block:: shell - - mo --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),(1,6,1,1) - - -Keep in mind that there is no space between and inside the brackets for input shapes. - -.. _question-58: - -Q58. What does the message "Please provide input layer names for input layer shapes" mean? -##################################################################################################################################################### - -**A:** When specifying input shapes for several layers, you must provide names for inputs, whose shapes will be overwritten. Additional information for ``--input_shape`` is in FAQ :ref:`#56 `. - -.. _question-59: - -Q59. What does the message "Values cannot be parsed" mean? -##################################################################################################################################################### - -**A:** Mean values for the given parameter cannot be parsed. It should be a string with a list of mean values. For example, in '(1,2,3)', 1 stands for the RED channel, 2 for the GREEN channel, 3 for the BLUE channel. - -.. _question-60: - -Q60. What does the message ".. channels are expected for given values" mean? -##################################################################################################################################################### - -**A:** The number of channels and the number of given values for mean values do not match. The shape should be defined as '(R,G,B)' or '[R,G,B]'. The shape should not contain undefined dimensions (? or -1). The order of values is as follows: (value for a RED channel, value for a GREEN channel, value for a BLUE channel). - -.. _question-61: - -Q61. What does the message "You should specify input for each mean value" mean? -##################################################################################################################################################### - -**A:** Most likely, you didn't specify inputs using ``--mean_values``. Specify inputs with the ``--input`` flag. For usage examples, refer to the FAQ :ref:`#62 `. - -.. _question-62: - -Q62. What does the message "You should specify input for each scale value" mean? -##################################################################################################################################################### - -**A:** Most likely, you didn't specify inputs using ``--scale_values``. Specify inputs with the ``--input`` flag. For usage examples, refer to the FAQ :ref:`#63 `. - -.. _question-63: - -Q63. What does the message "Number of inputs and mean values does not match" mean? -##################################################################################################################################################### - -**A:** The number of specified mean values and the number of inputs must be equal. - -.. _question-64: - -Q64. 
What does the message "Number of inputs and scale values does not match" mean? -##################################################################################################################################################### - -**A:** The number of specified scale values and the number of inputs must be equal. - -.. _question-65: - -Q65. What does the message "No class registered for match kind ... Supported match kinds are .. " mean? -##################################################################################################################################################### - -**A:** A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the ``match_kind`` attribute. The attribute may have only one of the values: ``scope`` or ``points``. If a different value is provided, this error is displayed. - -.. _question-66: - -Q66. What does the message "No instance(s) is(are) defined for the custom replacement" mean? -##################################################################################################################################################### - -**A:** A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the ``instances`` attribute. This attribute is mandatory. This error will occur if the attribute is missing. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility>` guide. - -.. _question-67: - -Q67. What does the message "The instance must be a single dictionary for the custom replacement with id .." mean? -##################################################################################################################################################### - -**A:** A replacement defined in the configuration file for sub-graph replacement, using start/end nodes, has the ``instances`` attribute. For this type of replacement, the instance must be defined with a dictionary with two keys ``start_points`` and ``end_points``. Values for these keys are lists with the start and end node names, respectively. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions>` guide. - -.. _question-68: - -Q68. What does the message "No instances are defined for replacement with id .. " mean? -##################################################################################################################################################### - -**A:** A replacement for the specified id is not defined in the configuration file. For more information, refer to the FAQ :ref:`#65 `. - -.. _question-69: - -Q69. What does the message "Custom replacements configuration file .. does not exist" mean? -##################################################################################################################################################### - -**A:** The path to a custom replacement configuration file was provided with the ``--transformations_config`` flag, but it cannot be found. Make sure the specified path is correct and the file exists. - -.. _question-70: - -Q70. What does the message "Failed to parse custom replacements configuration file .." mean? 
-##################################################################################################################################################### - -**A:** The file for custom replacement configuration provided with the ``--transformations_config`` flag cannot be parsed. In particular, it should have a valid JSON structure. For more details, refer to the `JSON Schema Reference `__ page. - -.. _question-71: - -Q71. What does the message "One of the custom replacements in the configuration file .. does not contain attribute 'id'" mean? -##################################################################################################################################################### - -**A:** Every custom replacement should declare a set of mandatory attributes and their values. For more details, refer to FAQ :ref:`#71 `. - -.. _question-72: - -Q72. What does the message "File .. validation failed" mean? -##################################################################################################################################################### - -**A:** The file for custom replacement configuration provided with the ``--transformations_config`` flag cannot pass validation. Make sure you have specified ``id``, ``instances``, and ``match_kind`` for all the patterns. - -.. _question-73: - -Q73. What does the message "Cannot update the file .. because it is broken" mean? -##################################################################################################################################################### - -**A:** The custom replacement configuration file provided with the ``--tensorflow_custom_operations_config_update`` cannot be parsed. Make sure that the file is correct and refer to FAQ :ref:`#68 `, :ref:`#69 `, :ref:`#70 `, and :ref:`#71 `. - -.. _question-74: - -Q74. What does the message "End node .. is not reachable from start nodes: .." mean? -##################################################################################################################################################### - -**A:** This error occurs when you try to make a sub-graph match. It is detected that between the start and end nodes that were specified as inputs/outputs for the subgraph to find, there are nodes marked as outputs but there is no path from them to the input nodes. Make sure the subgraph you want to match does actually contain all the specified output nodes. - -.. _question-75: - -Q75. What does the message "Sub-graph contains network input node .." mean? -##################################################################################################################################################### - -**A:** The start or end node for the sub-graph replacement using start/end nodes is specified incorrectly. Model Optimizer finds internal nodes of the sub-graph strictly "between" the start and end nodes, and then adds all input nodes to the sub-graph (and the inputs of their inputs, etc.) for these "internal" nodes. This error reports that Model Optimizer reached input node during this phase. This means that the start/end points are specified incorrectly in the configuration file. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions>` guide. - -.. _question-76: - -Q76. What does the message "... elements of ... were clipped to infinity while converting a blob for node [...] 
to ..." mean? -##################################################################################################################################################### - -**A:** This message may appear when the ``--compress_to_fp16`` command-line option is used. This option implies compression of all the model weights, biases, and other constant values to FP16. If a value of a constant is out of the range of valid FP16 values, the value is converted to positive or negative infinity. It may lead to incorrect results of inference or may not be a problem, depending on the model. The number of such elements and the total number of elements in the constant value is printed out together with the name of the node, where this value is used. - -.. _question-77: - -Q77. What does the message "... elements of ... were clipped to zero while converting a blob for node [...] to ..." mean? -##################################################################################################################################################### - -**A:** This message may appear when the ``--compress_to_fp16`` command-line option is used. This option implies conversion of all blobs in the mode to FP16. If a value in the blob is so close to zero that it cannot be represented as a valid FP16 value, it is converted to a true zero FP16 value. Depending on the model, it may lead to incorrect results of inference or may not be a problem. The number of such elements and the total number of elements in the blob are printed out together with a name of the node, where this blob is used. - -.. _question-78: - -Q78. What does the message "The amount of nodes matched pattern ... is not equal to 1" mean? -##################################################################################################################################################### - -**A:** This error occurs when the ``SubgraphMatch.node_by_pattern`` function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make unambiguous match to a single sub-graph node. For more details, refer to the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <../legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions>` guide. - -.. _question-79: - -Q79. What does the message "The topology contains no "input" layers" mean? -##################################################################################################################################################### - -**A:** Your Caffe topology ``.prototxt`` file is intended for training. Model Optimizer expects a deployment-ready ``.prototxt`` file. To fix the problem, prepare a deployment-ready ``.prototxt`` file. Preparation of a deploy-ready topology usually results in removing ``data`` layer(s), adding ``input`` layer(s), and removing loss layer(s). - -.. _question-80: - -Q80. What does the message "Warning: please expect that Model Optimizer conversion might be slow" mean? -##################################################################################################################################################### - -**A:** You are using an unsupported Python version. Use only versions 3.4 - 3.6 for the C++ ``protobuf`` implementation that is supplied with OpenVINO toolkit. You can still boost the conversion speed by building the protobuf library from sources. 
For complete instructions about building ``protobuf`` from sources, see the appropriate section in the :doc:`Converting a Model to Intermediate Representation <../legacy-conversion-api>` guide. - -.. _question-81: - -Q81. What does the message "Arguments --nd_prefix_name, --pretrained_model_name and --input_symbol should be provided. Please provide all or do not use any." mean? -#################################################################################################################################################################### - -**A:** This error occurs if you did not provide the ``--nd_prefix_name``, ``--pretrained_model_name``, and ``--input_symbol`` parameters. -Model Optimizer requires both ``.params`` and ``.nd`` model files to merge into the result file (``.params``). -Topology description (``.json`` file) should be prepared (merged) in advance and provided with the ``--input_symbol`` parameter. - -If you add additional layers and weights that are in ``.nd`` files to your model, Model Optimizer can build a model -from one ``.params`` file and two additional ``.nd`` files (``*_args.nd``, ``*_auxs.nd``). -To do that, provide both CLI options or do not pass them if you want to convert an MXNet model without additional weights. - -.. _question-82: - -Q82. What does the message "You should specify input for mean/scale values" mean? -##################################################################################################################################################### - -**A:** When the model has multiple inputs and you want to provide mean/scale values, you need to pass those values for each input. More specifically, the number of passed values should be the same as the number of inputs of the model. -For more information, refer to the :doc:`Converting a Model to Intermediate Representation <[legacy]-setting-input-shapes>` guide. - -.. _question-83: - -Q83. What does the message "Input with name ... not found!" mean? -##################################################################################################################################################### - -**A:** When you passed the mean/scale values and specify names of input layers of the model, you might have used the name that does not correspond to any input layer. Make sure that you list only names of the input layers of your model when passing values with the ``--input`` option. -For more information, refer to the :doc:`Converting a Model to Intermediate Representation <[legacy]-setting-input-shapes>` guide. - -.. _question-84: - -Q84. What does the message "Specified input json ... does not exist" mean? -##################################################################################################################################################### - -**A:** Most likely, ``.json`` file does not exist or has a name that does not match the notation of Apache MXNet. Make sure the file exists and has a correct name. - -.. _question-85: - -Q85. What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean? -##################################################################################################################################################### - -**A:** Model Optimizer for Apache MXNet supports only ``.params`` and ``.nd`` files formats. Most likely, you specified an unsupported file format in ``--input_model``. - -.. _question-86: - -Q86. What does the message "Operation ... not supported. 
Please register it as custom op" mean? -##################################################################################################################################################### - -**A:** Model Optimizer tried to load the model that contains some unsupported operations. -If you want to convert model that contains unsupported operations, you need to prepare extension for all such operations. -For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide. - -.. _question-87: - -Q87. What does the message "Can not register Op ... Please, call function 'register_caffe_python_extractor' with parameter 'name'" mean? -##################################################################################################################################################### - -**A:** This error appears if the class of implementation of ``Op`` for Python Caffe layer could not be used by Model Optimizer. Python layers should be handled differently comparing to ordinary Caffe layers. - -In particular, you need to call the function ``register_caffe_python_extractor`` and pass ``name`` as the second argument of the function. -The name should be the compilation of the layer name with the module name separated by a dot. - -For example, your topology contains this layer with type ``Python``: - -.. code-block:: py - :force: - - layer { - name: 'proposal' - type: 'Python' - ... - python_param { - module: 'rpn.proposal_layer' - layer: 'ProposalLayer' - param_str: "'feat_stride': 16" - } - } - - -The first step is to implement an extension for this layer in Model Optimizer as an ancestor of ``Op`` class: - -.. code-block:: py - :force: - - class ProposalPythonExampleOp(Op): - op = 'Proposal' - - def __init__(self, graph: nx.MultiDiGraph, attrs: dict): - ... - - -It is mandatory to call two functions right after the implementation of that class: - -.. code-block:: py - :force: - - class ProposalPythonExampleOp(Op): - ... - - register_caffe_python_extractor(ProposalPythonExampleOp, 'rpn.proposal_layer.ProposalLayer') - Op.excluded_classes.append(ProposalPythonExampleOp) - - -Note that the first call ``register_caffe_python_extractor(ProposalPythonExampleOp, 'rpn.proposal_layer.ProposalLayer')`` registers an extension of the layer in Model Optimizer, which will be found by the specific name (mandatory to join module name and layer name): ``rpn.proposal_layer.ProposalLayer``. - -The second call prevents Model Optimizer from using this extension as if it is an extension for -a layer with type ``Proposal``. Otherwise, this layer can be chosen as an implementation of extension that can lead to potential issues. -For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide. - -.. _question-88: - -Q88. What does the message "Model Optimizer is unable to calculate output shape of Memory node .." mean? -##################################################################################################################################################### - -**A:** Model Optimizer supports only ``Memory`` layers, in which ``input_memory`` goes before ``ScaleShift`` or the ``FullyConnected`` layer. -This error message means that in your model the layer after input memory is not of the ``ScaleShift`` or ``FullyConnected`` type. -This is a known limitation. - -.. _question-89: - -Q89. What do the messages "File ... 
does not appear to be a Kaldi file (magic number does not match)", "Kaldi model should start with tag" mean?
-#########################################################################################################################################################
-
-**A:** These error messages mean that Model Optimizer does not support your Kaldi model, because the ``checksum`` of the model is not
-16896 (the model should start with this number), or the model file does not start with the ```` tag.
-Make sure that you provide a path to a true Kaldi model and try again.
-
-.. _question-90:
-
-Q90. What do the messages "Expect counts file to be one-line file." or "Expect counts file to contain list of integers" mean?
-#####################################################################################################################################################
-
-**A:** These messages mean that the counts file you passed does not consist of a single line. The counts file should start with
-``[`` and end with ``]``, and the integer values between those brackets should be separated by spaces.
-
-.. _question-91:
-
-Q91. What does the message "Model Optimizer is not able to read Kaldi model .." mean?
-#####################################################################################################################################################
-
-**A:** There are multiple reasons why Model Optimizer does not accept a Kaldi topology, including:
-the file is not available or does not exist. Refer to FAQ :ref:`#88 `.
-
-.. _question-92:
-
-Q92. What does the message "Model Optimizer is not able to read counts file .." mean?
-#####################################################################################################################################################
-
-**A:** There are multiple reasons why Model Optimizer does not accept a counts file, including:
-the file is not available or does not exist. Refer to FAQ :ref:`#89 `.
-
-.. _question-93:
-
-Q93. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean?
-###############################################################################################################################################################################################
-
-**A:** This message means that if you have a model with custom layers and its JSON file has been generated with an Apache MXNet version
-lower than 1.0.0, Model Optimizer does not support such topologies. If you want to convert it, you have to rebuild
-MXNet with unsupported layers or generate a new JSON file with Apache MXNet version 1.0.0 or higher. You also need to implement
-an OpenVINO extension to use custom layers.
-For more information, refer to the :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>` guide.
-
-.. _question-94:
-
-Q94. What does the message "Expected token ````, has ``...``" mean?
-#####################################################################################################################################################
-
-**A:** This error message means that Model Optimizer does not support your Kaldi model, because the net contains a ``ParallelComponent`` that does not end with the ```` tag.
-Make sure that you provide a path to a true Kaldi model and try again.
-
-.. _question-95:
-
-.. _question-96:
-
-.. _question-97:
-
-Q97. What does the message "Graph contains a cycle. Can not proceed .." mean?
-#####################################################################################################################################################
-
-**A:** Model Optimizer supports only straightforward models without cycles.
-
-There are multiple ways to avoid cycles:
-
-For TensorFlow:
-
-* :doc:`Convert models created with the TensorFlow Object Detection API <[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-object-detection>`
-
-For all frameworks:
-
-1. :doc:`Replace cycle containing Sub-graph in Model Optimizer [Legacy Solution] <../legacy-model-optimizer-extensibility>`
-2. See :doc:`OpenVINO Extensibility Mechanism <../../../openvino-extensibility>`
-
-or
-
-* Edit the model in its original framework to exclude the cycle.
-
-.. _question-98:
-
-.. _question-99:
-
-.. _question-100:
-
-Q100. What does the message "Interp layer shape inference function may be wrong, please, try to update layer shape inference function in the file (extensions/ops/interp.op at the line ...)." mean?
-####################################################################################################################################################################################################
-
-**A:** There are many flavors of the Caffe framework, and most layers in them are implemented identically.
-However, there are exceptions. For example, the output value of the Interp layer is calculated differently in Deeplab-Caffe and classic Caffe. Therefore, if your model contains the Interp layer and the conversion of your model has failed, modify the ``interp_infer`` function in the ``extensions/ops/interp.op`` file according to the comments in the file.
-
-.. _question-101:
-
-Q101. What does the message "Mean/scale values should ..." mean?
-#####################################################################################################################################################
-
-**A:** It means that your mean/scale values have an invalid format. Specify mean/scale values in the form of ``layer_name(val1,val2,val3)``.
-You need to specify values for each input of the model. For more information, refer to the :doc:`Converting a Model to Intermediate Representation <[legacy]-setting-input-shapes>` guide.
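-
-For example, mean and scale values can be passed per input in this form (the model file name, input name, and values below are illustrative):
-
-.. code-block:: sh
-
-    mo --input_model my_model.caffemodel --input data --mean_values data(123.68,116.78,103.94) --scale_values data(58.4,57.1,57.4)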
-
-.. _question-102:
-
-Q102. What does the message "Operation _contrib_box_nms is not supported ..." mean?
-#####################################################################################################################################################
-
-**A:** It means that you are trying to convert a topology that contains the ``_contrib_box_nms`` operation, which is not supported directly. However, the sub-graph of operations including ``_contrib_box_nms`` can be replaced with the DetectionOutput layer if your topology is one of the ``gluoncv`` topologies. Specify the ``--enable_ssd_gluoncv`` command-line parameter for Model Optimizer to enable this transformation.
-
-.. _question-103:
-
-Q103. What does the message "ModelOptimizer is not able to parse "\*.caffemodel" mean?
-#####################################################################################################################################################
-
-**A:** If a ``*.caffemodel`` file exists and is correct, the error possibly occurred because of the use of the Python protobuf implementation. 
In some cases, error messages may appear during model parsing, for example: "``utf-8`` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use a newer Python version (3.8 - 3.11) or build the ``cpp`` implementation of ``protobuf`` yourself for your version of Python. For the complete instructions about building ``protobuf`` from sources, see the appropriate section in the :doc:`Converting Models with Model Optimizer <../legacy-conversion-api>` guide. - -.. _question-104: - -.. _question-105: - -Q105. What does the message "The IR preparation was executed by the legacy MO path. ..." mean? -##################################################################################################################################################### - -**A:** For the models in ONNX format, there are two available paths of IR conversion. -The old one is handled by the old Python implementation, while the new one uses new C++ frontends. -Starting from the 2022.1 version, the default IR conversion path for ONNX models is processed using the new ONNX frontend. -Certain features, such as ``--extensions`` and ``--transformations_config``, are not yet fully supported on the new frontends. -The new frontends support only paths to shared libraries (.dll and .so) for ``--extensions``. They support JSON configurations with defined library fields for ``--transformations_config``. -Inputs freezing (enabled by ``--freeze_placeholder_with_value`` or ``--input`` arguments) is not supported by the new frontends. -The IR conversion falls back to the old path if a user does not select any expected path of conversion explicitly (with ``--use_new_frontend`` or ``--use_legacy_frontend`` MO arguments) and unsupported pre-defined scenario is detected on the new frontend path. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes.rst deleted file mode 100644 index 9e445742278568..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes.rst +++ /dev/null @@ -1,156 +0,0 @@ -[LEGACY] Setting Input Shapes -==================================== - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Setting Input Shapes <../../../../openvino-workflow/model-preparation/setting-input-shapes>` article. - -With model conversion API you can increase your model's efficiency by providing an additional shape definition, with these two parameters: `input_shape` and `static_shape`. - - -.. meta:: - :description: Learn how to increase the efficiency of a model with MO by providing an additional shape definition with the input_shape and static_shape parameters. - - -Specifying input_shape parameter -################################ - -``convert_model()`` supports conversion of models with dynamic input shapes that contain undefined dimensions. 
-However, if the shape of data is not going to change from one inference request to another, -it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs. -Doing it at this stage, instead of during inference in runtime, can be beneficial in terms of performance and memory consumption. -To set up static shapes, model conversion API provides the ``input_shape`` parameter. -For more information on input shapes under runtime, refer to the :doc:`Changing input shapes <../../../../openvino-workflow/running-inference/changing-input-shape>` guide. -To learn more about dynamic shapes in runtime, refer to the :doc:`Dynamic Shapes <../../../../openvino-workflow/running-inference/dynamic-shapes>` guide. - -The OpenVINO Runtime API may present certain limitations in inferring models with undefined dimensions on some hardware. -In this case, the ``input_shape`` parameter and the :doc:`reshape method <../../../../openvino-workflow/running-inference/changing-input-shape>` can help to resolve undefined dimensions. - -For example, run model conversion for the TensorFlow MobileNet model with the single input -and specify the input shape of ``[2,300,300,3]``: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("MobileNet.pb", input_shape=[2,300,300,3]) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model MobileNet.pb --input_shape [2,300,300,3] - - -If a model has multiple inputs, ``input_shape`` must be used in conjunction with ``input`` parameter. -The ``input`` parameter contains a list of input names, for which shapes in the same order are defined via ``input_shape``. -For example, launch model conversion for the ONNX OCR model with a pair of inputs ``data`` and ``seq_len`` -and specify shapes ``[3,150,200,1]`` and ``[3]`` for them: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("ocr.onnx", input=["data","seq_len"], input_shape=[[3,150,200,1],[3]]) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model ocr.onnx --input data,seq_len --input_shape [3,150,200,1],[3] - - -Alternatively, specify input shapes, using the ``input`` parameter as follows: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("ocr.onnx", input=[("data",[3,150,200,1]),("seq_len",[3])]) - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model ocr.onnx --input data[3,150,200,1],seq_len[3] - - -The ``input_shape`` parameter allows overriding original input shapes to ones compatible with a given model. -Dynamic shapes, i.e. with dynamic dimensions, can be replaced in the original model with static shapes for the converted model, and vice versa. -The dynamic dimension can be marked in model conversion API parameter as ``-1`` or ``?``. -For example, launch model conversion for the ONNX OCR model and specify dynamic batch dimension for inputs: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - ov_model = convert_model("ocr.onnx", input=["data","seq_len"], input_shape=[[-1,150,200,1],[-1]] - - .. tab-item:: CLI - :sync: cli - - .. 
code-block:: sh - - mo --input_model ocr.onnx --input data,seq_len --input_shape [-1,150,200,1],[-1] - - -To optimize memory consumption for models with undefined dimensions in run-time, model conversion API provides the capability to define boundaries of dimensions. -The boundaries of undefined dimension can be specified with ellipsis. -For example, launch model conversion for the ONNX OCR model and specify a boundary for the batch dimension: - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - .. code-block:: py - :force: - - from openvino.tools.mo import convert_model - from openvino.runtime import Dimension - ov_model = convert_model("ocr.onnx", input=["data","seq_len"], input_shape=[[Dimension(1,3),150,200,1],[Dimension(1,3)]] - - .. tab-item:: CLI - :sync: cli - - .. code-block:: sh - - mo --input_model ocr.onnx --input data,seq_len --input_shape [1..3,150,200,1],[1..3] - - -Practically, some models are not ready for input shapes change. -In this case, a new input shape cannot be set via model conversion API. -For more information about shape follow the :doc:`inference troubleshooting <[legacy]-troubleshooting-reshape-errors>` -and :ref:`ways to relax shape inference flow ` guides. - -Additional Resources -#################### - -* :doc:`Convert a Model <../legacy-conversion-api>` -* :doc:`Cutting Off Parts of a Model <[legacy]-cutting-parts-of-a-model>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats.rst deleted file mode 100644 index fb9f41c755d4fb..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats.rst +++ /dev/null @@ -1,598 +0,0 @@ -[LEGACY] Supported Model Formats -===================================== - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Supported Model Formats <../../../../openvino-workflow/model-preparation>` article. - -.. toctree:: - :maxdepth: 1 - :hidden: - - Converting a TensorFlow Model <[legacy]-supported-model-formats/[legacy]-convert-tensorflow> - Converting an ONNX Model <[legacy]-supported-model-formats/[legacy]-convert-onnx> - Converting a PyTorch Model <[legacy]-supported-model-formats/[legacy]-convert-pytorch> - Converting a TensorFlow Lite Model <[legacy]-supported-model-formats/[legacy]-convert-tensorflow-lite> - Converting a PaddlePaddle Model <[legacy]-supported-model-formats/[legacy]-convert-paddle> - Model Conversion Tutorials <[legacy]-supported-model-formats/[legacy]-conversion-tutorials> - -.. meta:: - :description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™. - - -**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR <../../../openvino-ir-format>` to enable inference. 
Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive. - -**PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with OpenVINO Runtime API directly, -which means you do not need to save them as OpenVINO IR before including them in your application. -OpenVINO can read, compile, and convert them automatically, as part of its pipeline. - -In the Python API, these options are provided as three separate methods: -``read_model()``, ``compile_model()``, and ``convert_model()``. -The ``convert_model()`` method enables you to perform additional adjustments -to the model, such as setting shapes, changing model input types or layouts, -cutting parts of the model, freezing inputs, etc. For a detailed description -of the conversion process, see the -:doc:`model conversion guide <../legacy-conversion-api>`. - -Here are code examples of how to use these methods with different model formats: - -.. tab-set:: - - .. tab-item:: PyTorch - :sync: torch - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - * The ``convert_model()`` method: - - This is the only method applicable to PyTorch models. - - .. dropdown:: List of supported formats: - - * **Python objects**: - - * ``torch.nn.Module`` - * ``torch.jit.ScriptModule`` - * ``torch.jit.ScriptFunction`` - - .. code-block:: py - :force: - - import openvino - import torchvision - from openvino.tools.mo import convert_model - core = openvino.Core() - - model = torchvision.models.resnet50(weights='DEFAULT') - ov_model = convert_model(model) - compiled_model = core.compile_model(ov_model, "AUTO") - - For more details on conversion, refer to the - :doc:`guide <[legacy]-supported-model-formats/[legacy]-convert-pytorch>` - and an example `tutorial `__ - on this topic. - - .. tab-item:: TensorFlow - :sync: tf - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - * The ``convert_model()`` method: - - When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use. - - .. dropdown:: List of supported formats: - - * **Files**: - - * SavedModel - ```` or ``.pb`` - * Checkpoint - ``.pb`` or ``.pbtxt`` - * MetaGraph - ``.meta`` - - * **Python objects**: - - * ``tf.keras.Model`` - * ``tf.keras.layers.Layer`` - * ``tf.Module`` - * ``tf.compat.v1.Graph`` - * ``tf.compat.v1.GraphDef`` - * ``tf.function`` - * ``tf.compat.v1.session`` - * ``tf.train.checkpoint`` - - .. code-block:: py - :force: - - import openvino - from openvino.tools.mo import convert_model - - core = openvino.Core() - ov_model = convert_model("saved_model.pb") - compiled_model = core.compile_model(ov_model, "AUTO") - - For more details on conversion, refer to the - :doc:`guide <[legacy]-supported-model-formats/[legacy]-convert-tensorflow>` - and an example `tutorial `__ - on this topic. - - * The ``read_model()`` and ``compile_model()`` methods: - - .. dropdown:: List of supported formats: - - * **Files**: - - * SavedModel - ```` or ``.pb`` - * Checkpoint - ``.pb`` or ``.pbtxt`` - * MetaGraph - ``.meta`` - - .. 
code-block:: py - :force: - - ov_model = read_model("saved_model.pb") - compiled_model = core.compile_model(ov_model, "AUTO") - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: C++ - :sync: cpp - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * SavedModel - ```` or ``.pb`` - * Checkpoint - ``.pb`` or ``.pbtxt`` - * MetaGraph - ``.meta`` - - .. code-block:: cpp - - ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO"); - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: C - :sync: c - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * SavedModel - ```` or ``.pb`` - * Checkpoint - ``.pb`` or ``.pbtxt`` - * MetaGraph - ``.meta`` - - .. code-block:: c - - ov_compiled_model_t* compiled_model = NULL; - ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model); - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: CLI - :sync: cli - - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. - - .. code-block:: sh - - mo --input_model .pb - - For details on the conversion, refer to the - :doc:`article <[legacy]-supported-model-formats/[legacy]-convert-tensorflow>`. - - .. tab-item:: TensorFlow Lite - :sync: tflite - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - * The ``convert_model()`` method: - - When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use. - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.tflite`` - - .. code-block:: py - :force: - - import openvino - from openvino.tools.mo import convert_model - - core = openvino.Core() - ov_model = convert_model(".tflite") - compiled_model = core.compile_model(ov_model, "AUTO") - - For more details on conversion, refer to the - :doc:`guide <[legacy]-supported-model-formats/[legacy]-convert-tensorflow>` - and an example `tutorial `__ - on this topic. - - - * The ``read_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.tflite`` - - .. code-block:: py - :force: - - import openvino - - core = openvino.Core() - ov_model = core.read_model(".tflite") - compiled_model = core.compile_model(ov_model, "AUTO") - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.tflite`` - - .. code-block:: py - :force: - - import openvino - - core = openvino.Core() - compiled_model = core.compile_model(".tflite", "AUTO") - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - - .. 
tab-item:: C++ - :sync: cpp - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.tflite`` - - .. code-block:: cpp - - ov::CompiledModel compiled_model = core.compile_model(".tflite", "AUTO"); - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: C - :sync: c - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.tflite`` - - .. code-block:: c - - ov_compiled_model_t* compiled_model = NULL; - ov_core_compile_model_from_file(core, ".tflite", "AUTO", 0, &compiled_model); - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: CLI - :sync: cli - - * The ``convert_model()`` method: - - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.tflite`` - - .. code-block:: sh - - mo --input_model .tflite - - For details on the conversion, refer to the - :doc:`article <[legacy]-supported-model-formats/[legacy]-convert-tensorflow-lite>`. - - .. tab-item:: ONNX - :sync: onnx - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - * The ``convert_model()`` method: - - When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use. - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.onnx`` - - .. code-block:: py - :force: - - import openvino - from openvino.tools.mo import convert_model - - core = openvino.Core() - ov_model = convert_model(".onnx") - compiled_model = core.compile_model(ov_model, "AUTO") - - For more details on conversion, refer to the - :doc:`guide <[legacy]-supported-model-formats/[legacy]-convert-onnx>` - and an example `tutorial `__ - on this topic. - - - * The ``read_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.onnx`` - - .. code-block:: py - :force: - - import openvino - core = openvino.Core() - - ov_model = core.read_model(".onnx") - compiled_model = core.compile_model(ov_model, "AUTO") - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.onnx`` - - .. code-block:: py - :force: - - import openvino - core = openvino.Core() - - compiled_model = core.compile_model(".onnx", "AUTO") - - For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - - .. tab-item:: C++ - :sync: cpp - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.onnx`` - - .. 
code-block:: cpp - - ov::CompiledModel compiled_model = core.compile_model(".onnx", "AUTO"); - - For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: C - :sync: c - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.onnx`` - - .. code-block:: c - - ov_compiled_model_t* compiled_model = NULL; - ov_core_compile_model_from_file(core, ".onnx", "AUTO", 0, &compiled_model); - - For details on the conversion, refer to the :doc:`article <[legacy]-supported-model-formats/[legacy]-convert-onnx>` - - .. tab-item:: CLI - :sync: cli - - * The ``convert_model()`` method: - - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.onnx`` - - .. code-block:: sh - - mo --input_model .onnx - - For details on the conversion, refer to the - :doc:`article <[legacy]-supported-model-formats/[legacy]-convert-onnx>` - - .. tab-item:: PaddlePaddle - :sync: pdpd - - .. tab-set:: - - .. tab-item:: Python - :sync: py - - * The ``convert_model()`` method: - - When you use the ``convert_model()`` method, you have more control and you can specify additional adjustments for ``ov.Model``. The ``read_model()`` and ``compile_model()`` methods are easier to use, however, they do not have such capabilities. With ``ov.Model`` you can choose to optimize, compile and run inference on it or serialize it into a file for subsequent use. - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.pdmodel`` - - * **Python objects**: - - * ``paddle.hapi.model.Model`` - * ``paddle.fluid.dygraph.layers.Layer`` - * ``paddle.fluid.executor.Executor`` - - .. code-block:: py - :force: - - import openvino - from openvino.tools.mo import convert_model - - core = openvino.Core() - ov_model = convert_model(".pdmodel") - compiled_model = core.compile_model(ov_model, "AUTO") - - For more details on conversion, refer to the - :doc:`guide <[legacy]-supported-model-formats/[legacy]-convert-paddle>` - and an example `tutorial `__ - on this topic. - - * The ``read_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.pdmodel`` - - .. code-block:: py - :force: - - import openvino - core = openvino.Core() - - ov_model = read_model(".pdmodel") - compiled_model = core.compile_model(ov_model, "AUTO") - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.pdmodel`` - - .. code-block:: py - :force: - - import openvino - core = openvino.Core() - - compiled_model = core.compile_model(".pdmodel", "AUTO") - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: C++ - :sync: cpp - - * The ``compile_model()`` method: - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.pdmodel`` - - .. code-block:: cpp - - ov::CompiledModel compiled_model = core.compile_model(".pdmodel", "AUTO"); - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: C - :sync: c - - * The ``compile_model()`` method: - - .. 
dropdown:: List of supported formats: - - * **Files**: - - * ``.pdmodel`` - - .. code-block:: c - - ov_compiled_model_t* compiled_model = NULL; - ov_core_compile_model_from_file(core, ".pdmodel", "AUTO", 0, &compiled_model); - - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application <../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`. - - .. tab-item:: CLI - :sync: cli - - * The ``convert_model()`` method: - - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. - - .. dropdown:: List of supported formats: - - * **Files**: - - * ``.pdmodel`` - - .. code-block:: sh - - mo --input_model .pdmodel - - For details on the conversion, refer to the - :doc:`article <[legacy]-supported-model-formats/[legacy]-convert-paddle>`. - - -As OpenVINO support for **MXNet, Caffe, and Kaldi formats** has been **discontinued**, converting these legacy formats -to OpenVINO IR or ONNX before running inference should be considered the default path for use with OpenVINO. - -.. note:: - - If you want to keep working with the legacy formats the old way, refer to a previous - `OpenVINO LTS version and its documentation `__ . - - OpenVINO versions of 2023 are mostly compatible with the old instructions, - through a deprecated MO tool, installed with the deprecated OpenVINO Developer Tools package. - - `OpenVINO 2023.0 `__ is the last - release officially supporting the MO conversion process for the legacy formats. - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials.rst deleted file mode 100644 index 5fbe486a20960a..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials.rst +++ /dev/null @@ -1,59 +0,0 @@ -[LEGACY] Model Conversion Tutorials -==================================================== - - -.. 
toctree:: - :maxdepth: 1 - :hidden: - - [legacy]-conversion-tutorials/convert-tensorflow-attention-ocr - [legacy]-conversion-tutorials/convert-tensorflow-bert - [legacy]-conversion-tutorials/convert-tensorflow-crnn - [legacy]-conversion-tutorials/convert-tensorflow-deep-speech - [legacy]-conversion-tutorials/convert-tensorflow-efficient-det - [legacy]-conversion-tutorials/convert-tensorflow-face-net - [legacy]-conversion-tutorials/convert-tensorflow-gnmt - [legacy]-conversion-tutorials/convert-tensorflow-language-1b - [legacy]-conversion-tutorials/convert-tensorflow-ncf - [legacy]-conversion-tutorials/convert-tensorflow-object-detection - [legacy]-conversion-tutorials/convert-tensorflow-retina-net - [legacy]-conversion-tutorials/convert-tensorflow-slim-library - [legacy]-conversion-tutorials/convert-tensorflow-wide-and-deep-family - [legacy]-conversion-tutorials/convert-tensorflow-xlnet - [legacy]-conversion-tutorials/convert-tensorflow-yolo - [legacy]-conversion-tutorials/convert-onnx-faster-r-cnn - [legacy]-conversion-tutorials/convert-onnx-gpt-2 - [legacy]-conversion-tutorials/convert-onnx-mask-r-cnn - [legacy]-conversion-tutorials/convert-pytorch-bert-ner - [legacy]-conversion-tutorials/convert-pytorch-cascade-rcnn-r-101 - [legacy]-conversion-tutorials/convert-pytorch-f3-net - [legacy]-conversion-tutorials/convert-pytorch-quartz-net - [legacy]-conversion-tutorials/convert-pytorch-rcan - [legacy]-conversion-tutorials/convert-pytorch-rnn-t - [legacy]-conversion-tutorials/convert-pytorch-yolact - - -.. meta:: - :description: Get to know conversion methods for specific TensorFlow, ONNX, and PyTorch models. - - -.. danger:: - - The code described in the tutorials has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../learn-openvino/interactive-tutorials-python>`. - -This section provides a set of tutorials that demonstrate conversion methods for specific -TensorFlow, ONNX, and PyTorch models. Note that these instructions do not cover all use -cases and may not reflect your particular needs. -Before studying the tutorials, try to convert the model out-of-the-box by specifying only the -``--input_model`` parameter in the command line. - -.. note:: - - Apache MXNet, Caffe, and Kaldi are no longer directly supported by OpenVINO. - -You will find a collection of :doc:`Python tutorials <../../../../../learn-openvino/interactive-tutorials-python>` written for running on Jupyter notebooks -that provide an introduction to the OpenVINO™ toolkit and explain how to use the Python API and tools for -optimized deep learning inference. 
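As mentioned above, before turning to the model-specific tutorials it is usually worth trying the out-of-the-box conversion first. The snippet below is a minimal sketch of that path using the legacy Python conversion API, equivalent to passing only ``--input_model`` on the command line; the file names are placeholders for your own model and output files.

.. code-block:: py
   :force:

   from openvino.tools.mo import convert_model
   from openvino.runtime import serialize

   # "model.onnx" is a placeholder name; any format supported by the legacy API can be passed.
   ov_model = convert_model("model.onnx")

   # Save the result as OpenVINO IR so it can later be loaded with read_model()/compile_model().
   serialize(ov_model, "model.xml")

If this minimal call succeeds, the additional, model-specific steps described in the tutorials are usually not needed.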
- diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-faster-r-cnn.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-faster-r-cnn.rst deleted file mode 100644 index 7880b261c80b81..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-faster-r-cnn.rst +++ /dev/null @@ -1,41 +0,0 @@ -Converting an ONNX Faster R-CNN Model -===================================== - - -.. meta:: - :description: Learn how to convert a Faster R-CNN model - from ONNX to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -The instructions below are applicable **only** to the Faster R-CNN model converted to the ONNX file format from the `maskrcnn-benchmark model `__: - -1. Download the pretrained model file from `onnx/models `__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117). - -2. Generate the Intermediate Representation of the model, by changing your current working directory to the model conversion API installation directory, and running model conversion with the following parameters: - - .. code-block:: sh - - mo \ - --input_model FasterRCNN-10.onnx \ - --input_shape [1,3,800,800] \ - --input 0:2 \ - --mean_values [102.9801,115.9465,122.7717] \ - --transformations_config front/onnx/faster_rcnn.json - - - Be aware that the height and width specified with the ``input_shape`` command line parameter - could be different. For more information about supported input image dimensions and - required pre- and post-processing steps, refer to the - `Faster R-CNN article `__. - -3. Interpret the outputs of the generated IR: class indices, probabilities and box coordinates. Below are the outputs from the ``DetectionOutput`` layer: - - * class indices - * probabilities - * box coordinates - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-gpt-2.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-gpt-2.rst deleted file mode 100644 index 4c10c941c7fb47..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-gpt-2.rst +++ /dev/null @@ -1,34 +0,0 @@ -Converting an ONNX GPT-2 Model -============================== - - -.. meta:: - :description: Learn how to convert a pre-trained GPT-2 - model from ONNX to the OpenVINO Intermediate Representation. - -.. 
danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -`Public pre-trained GPT-2 model `__ is a large -transformer-based language model with a simple objective: predict the next word, given all of the previous words within some text. - -Downloading the Pre-Trained Base GPT-2 Model -############################################ - -To download the model, go to `this model `__, and press **Download**. - -To download the model and sample test data, go to `this model `__, and press **Download**. - -Converting an ONNX GPT-2 Model to IR -#################################### - -Generate the Intermediate Representation of the model GPT-2 by running model conversion with the following parameters: - -.. code-block:: sh - - mo --input_model gpt2-10.onnx --input_shape [X,Y,Z] --output_dir - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-mask-r-cnn.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-mask-r-cnn.rst deleted file mode 100644 index 6158f5bdcb59ed..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-onnx-mask-r-cnn.rst +++ /dev/null @@ -1,41 +0,0 @@ -Converting an ONNX Mask R-CNN Model -=================================== - - -.. meta:: - :description: Learn how to convert a pre-trained Mask - R-CNN model from ONNX to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -The instructions below are applicable **only** to the Mask R-CNN model converted to the ONNX file format from the `maskrcnn-benchmark model `__. - -1. Download the pretrained model file from `onnx/models `__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117). - -2. Generate the Intermediate Representation of the model by changing your current working directory to the model conversion API installation directory and running model conversion with the following parameters: - - .. code-block:: sh - - mo \ - --input_model mask_rcnn_R_50_FPN_1x.onnx \ - --input "0:2" \ - --input_shape [1,3,800,800] \ - --mean_values [102.9801,115.9465,122.7717] \ - --transformations_config front/onnx/mask_rcnn.json - - - Be aware that the height and width specified with the ``input_shape`` command line parameter could be different. For more information about supported input image dimensions and required pre- and post-processing steps, refer to the `documentation `__. - -3. 
Interpret the outputs of the generated IR file: masks, class indices, probabilities and box coordinates: - - * masks - * class indices - * probabilities - * box coordinates - -The first one is a layer with the name ``6849/sink_port_0``, and rest are outputs from the ``DetectionOutput`` layer. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-bert-ner.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-bert-ner.rst deleted file mode 100644 index e89d21f28c66c4..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-bert-ner.rst +++ /dev/null @@ -1,76 +0,0 @@ -Converting a PyTorch BERT-NER Model -=================================== - - -.. meta:: - :description: Learn how to convert a BERT-NER model - from PyTorch to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -The goal of this article is to present a step-by-step guide on how to convert PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX. - - -Downloading and Converting the Model to ONNX -############################################ - -To download a pretrained model or train the model yourself, refer -to the `instructions `__ in the -BERT-NER model repository. The model with configuration files is stored in the ``out_base`` directory. - -To convert the model to ONNX format, create and run the following script in the root -directory of the model repository. If you download the pretrained model, you need -to download `bert.py `__ to run the script. -The instructions were tested with the commit-SHA: ``e5be564156f194f1becb0d82aeaf6e762d9eb9ed``. - -.. 
code-block:: py - :force: - - import torch - - from bert import Ner - - ner = Ner("out_base") - - input_ids, input_mask, segment_ids, valid_positions = ner.preprocess('Steve went to Paris') - input_ids = torch.tensor([input_ids], dtype=torch.long, device=ner.device) - input_mask = torch.tensor([input_mask], dtype=torch.long, device=ner.device) - segment_ids = torch.tensor([segment_ids], dtype=torch.long, device=ner.device) - valid_ids = torch.tensor([valid_positions], dtype=torch.long, device=ner.device) - - ner_model, tknizr, model_config = ner.load_model("out_base") - - with torch.no_grad(): - logits = ner_model(input_ids, segment_ids, input_mask, valid_ids) - torch.onnx.export(ner_model, - (input_ids, segment_ids, input_mask, valid_ids), - "bert-ner.onnx", - input_names=['input_ids', 'segment_ids', 'input_mask', 'valid_ids'], - output_names=['output'], - dynamic_axes={ - "input_ids": {0: "batch_size"}, - "segment_ids": {0: "batch_size"}, - "input_mask": {0: "batch_size"}, - "valid_ids": {0: "batch_size"}, - "output": {0: "output"} - }, - opset_version=11, - ) - - -The script generates ONNX model file ``bert-ner.onnx``. - -Converting an ONNX BERT-NER model to IR -####################################### - -.. code-block:: sh - - mo --input_model bert-ner.onnx --input "input_mask[1,128],segment_ids[1,128],input_ids[1,128]" - - -where ``1`` is ``batch_size`` and ``128`` is ``sequence_length``. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-cascade-rcnn-r-101.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-cascade-rcnn-r-101.rst deleted file mode 100644 index a61ca5e79f1c30..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-cascade-rcnn-r-101.rst +++ /dev/null @@ -1,51 +0,0 @@ -Converting a PyTorch Cascade RCNN R-101 Model -============================================= - - -.. meta:: - :description: Learn how to convert a Cascade RCNN R-101 - model from PyTorch to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX. - -Downloading and Converting Model to ONNX -######################################## - -* Clone the `repository `__ : - - .. code-block:: sh - - git clone https://github.com/open-mmlab/mmdetection - cd mmdetection - - - .. note:: - - To set up an environment, refer to the `instructions `__. - -* Download the pre-trained `model `__. The model is also available `here `__. - -* To convert the model to ONNX format, use this `script `__. - - .. 
code-block:: sh - - python3 tools/deployment/pytorch2onnx.py configs/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py cascade_rcnn_r101_fpn_1x_coco_20200317-0b6a2fbf.pth --output-file cascade_rcnn_r101_fpn_1x_coco.onnx - - -The script generates ONNX model file ``cascade_rcnn_r101_fpn_1x_coco.onnx`` in the directory ``tools/deployment/``. If required, specify the model name or output directory, using ``--output-file /.onnx``. - -Converting an ONNX Cascade RCNN R-101 Model to OpenVINO IR -########################################################## - -.. code-block:: sh - - mo --input_model cascade_rcnn_r101_fpn_1x_coco.onnx --mean_values [123.675,116.28,103.53] --scale_values [58.395,57.12,57.375] - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-f3-net.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-f3-net.rst deleted file mode 100644 index d1391cfb1519ba..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-f3-net.rst +++ /dev/null @@ -1,55 +0,0 @@ -Converting a PyTorch F3Net Model -================================ - - -.. meta:: - :description: Learn how to convert a F3Net model - from PyTorch to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -`F3Net `__ : Fusion, Feedback and Focus for Salient Object Detection - -Cloning the F3Net Repository -############################ - -To clone the repository, run the following command: - -.. code-block:: sh - - git clone http://github.com/weijun88/F3Net.git - - -Downloading and Converting the Model to ONNX -############################################ - -To download the pretrained model or train the model yourself, refer to the -`instructions `__ in the F3Net model repository. First, convert the model to ONNX format. Create and run the following Python script in the ``src`` directory of the model repository: - -.. code-block:: py - :force: - - import torch - from dataset import Config - from net import F3Net - - cfg = Config(mode='test', snapshot=) - net = F3Net(cfg) - image = torch.zeros([1, 3, 352, 352]) - torch.onnx.export(net, image, 'f3net.onnx', export_params=True, do_constant_folding=True, opset_version=11) - - -The script generates the ONNX model file ``f3net.onnx``. The model conversion was tested with the commit-SHA: ``eecace3adf1e8946b571a4f4397681252f9dc1b8``. - -Converting an ONNX F3Net Model to IR -#################################### - -.. 
code-block:: sh

   mo --input_model /f3net.onnx


diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-quartz-net.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-quartz-net.rst
deleted file mode 100644
index f1ee885dae0b26..00000000000000
--- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-quartz-net.rst
+++ /dev/null
@@ -1,61 +0,0 @@
Converting a PyTorch QuartzNet Model
====================================


.. meta::
   :description: Learn how to convert a QuartzNet model
                 from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`.

`NeMo project `__ provides the QuartzNet model.

Downloading the Pre-trained QuartzNet Model
###########################################

To download the pre-trained model, refer to the `NeMo Speech Models Catalog `__.
Here are the instructions on how to obtain QuartzNet in ONNX format.

1. Install the NeMo toolkit, using the `instructions `__.

2. Run the following code:

   .. code-block:: py
      :force:

      import nemo
      import nemo.collections.asr as nemo_asr

      quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
      # Export QuartzNet model to ONNX format
      quartznet.decoder.export('decoder_qn.onnx')
      quartznet.encoder.export('encoder_qn.onnx')
      quartznet.export('qn.onnx')

   This code produces three ONNX model files: ``encoder_qn.onnx``, ``decoder_qn.onnx``, and ``qn.onnx``.
   They are the ``decoder``, the ``encoder``, and a combined ``decoder(encoder(x))`` model, respectively.

Converting an ONNX QuartzNet model to IR
########################################

If using the combined model:

.. code-block:: sh

   mo --input_model /qn.onnx --input_shape [B,64,X]

If using separate models:

.. code-block:: sh

   mo --input_model /encoder_qn.onnx --input_shape [B,64,X]
   mo --input_model /decoder_qn.onnx --input_shape [B,1024,Y]


The shapes are determined by the Mel-spectrogram length of the audio file: ``B`` - batch dimension, ``X`` - dimension based on the input length, ``Y`` - determined by the encoder output, usually ``X / 2``.
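To illustrate how the ``B``, ``X``, and ``Y`` placeholders map to concrete values, the sketch below assumes a hypothetical batch of 1 and a 128-frame Mel-spectrogram, which makes the encoder output length roughly 64. It uses the legacy Python conversion API rather than the ``mo`` command line.

.. code-block:: py
   :force:

   from openvino.tools.mo import convert_model

   # Hypothetical values for illustration only: B = 1, X = 128, Y = X / 2 = 64.
   encoder = convert_model("encoder_qn.onnx", input_shape=[1, 64, 128])
   decoder = convert_model("decoder_qn.onnx", input_shape=[1, 1024, 64])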
- diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-rcan.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-rcan.rst deleted file mode 100644 index 7e9fb7b5717cbd..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-rcan.rst +++ /dev/null @@ -1,49 +0,0 @@ -Converting a PyTorch RCAN Model -=============================== - - -.. meta:: - :description: Learn how to convert a RCAN model - from PyTorch to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -`RCAN `__ : Image Super-Resolution Using Very Deep Residual Channel Attention Networks - -Downloading and Converting the Model to ONNX -############################################ - -To download the pre-trained model or train the model yourself, refer to the `instruction `__ in the RCAN model repository. First, convert the model to ONNX format. Create and run the script with the following content in the root -directory of the model repository: - -.. code-block:: py - :force: - - from argparse import Namespace - - import torch - - from RCAN_TestCode.code.model.rcan import RCAN - - config = Namespace(n_feats=64, n_resblocks=4, n_resgroups=2, reduction=16, scale=[2], data_train='DIV2K', res_scale=1, - n_colors=3, rgb_range=255) - net = RCAN(config) - net.eval() - dummy_input = torch.randn(1, 3, 360, 640) - torch.onnx.export(net, dummy_input, 'RCAN.onnx') - - -The script generates the ONNX model file ``RCAN.onnx``. More information about model parameters (``n_resblocks``, ``n_resgroups``, and others) and their different values can be found in the model repository. The model conversion was tested with the commit-SHA: ``3339ebc59519c3bb2b5719b87dd36515ec7f3ba7``. - -Converting an ONNX RCAN Model to IR -################################### - -.. code-block:: sh - - mo --input_model RCAN.onnx - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-rnn-t.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-rnn-t.rst deleted file mode 100644 index ad646568aed598..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-rnn-t.rst +++ /dev/null @@ -1,137 +0,0 @@ -Converting a PyTorch RNN-T Model -================================ - - -.. meta:: - :description: Learn how to convert a RNN-T model - from PyTorch to the OpenVINO Intermediate Representation. - -.. 
danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`.

This guide covers conversion of the RNN-T model from the `MLCommons `__ repository. Follow
the instructions below to export the PyTorch model to ONNX before converting it to IR:

**Step 1**. Clone the RNN-T PyTorch implementation from the MLCommons repository (revision r1.0). Make a shallow clone to pull
only the RNN-T model without the full repository. If you already have a full repository, skip this and go to **Step 2**:

.. code-block:: sh

   git clone -b r1.0 -n https://github.com/mlcommons/inference rnnt_for_openvino --depth 1
   cd rnnt_for_openvino
   git checkout HEAD speech_recognition/rnnt


**Step 2**. If you already have a full clone of the MLCommons inference repository, create a folder for
the pretrained PyTorch model, where conversion into IR will take place. You will also need to specify the path to
your full clone at **Step 5**. Skip this step if you have a shallow clone.

.. code-block:: sh

   mkdir rnnt_for_openvino
   cd rnnt_for_openvino


**Step 3**. Download the pre-trained weights for the PyTorch implementation from `here `__.
For UNIX-like systems, you can use ``wget``:

.. code-block:: sh

   wget https://zenodo.org/record/3662521/files/DistributedDataParallel_1576581068.9962234-epoch-100.pt


The link was taken from ``setup.sh`` in the ``speech_recognition/rnnt`` subfolder. You will get exactly the same weights as
if you were following the `guide `__.

**Step 4**. Install the required Python packages:

.. code-block:: sh

   pip3 install torch toml


**Step 5**. Export the RNN-T model to ONNX, using the script below. Copy the code below into a file named
``export_rnnt_to_onnx.py`` and run it in the current directory ``rnnt_for_openvino``:

.. note::

   If you already have a full clone of the MLCommons inference repository, you need
   to specify the ``mlcommons_inference_path`` variable.

..
code-block:: py - :force: - - import toml - import torch - import sys - - - def load_and_migrate_checkpoint(ckpt_path): - checkpoint = torch.load(ckpt_path, map_location="cpu") - migrated_state_dict = {} - for key, value in checkpoint['state_dict'].items(): - key = key.replace("joint_net", "joint.net") - migrated_state_dict[key] = value - del migrated_state_dict["audio_preprocessor.featurizer.fb"] - del migrated_state_dict["audio_preprocessor.featurizer.window"] - return migrated_state_dict - - - mlcommons_inference_path = './' # specify relative path for MLCommons inferene - checkpoint_path = 'DistributedDataParallel_1576581068.9962234-epoch-100.pt' - config_toml = 'speech_recognition/rnnt/pytorch/configs/rnnt.toml' - config = toml.load(config_toml) - rnnt_vocab = config['labels']['labels'] - sys.path.insert(0, mlcommons_inference_path + 'speech_recognition/rnnt/pytorch') - - from model_separable_rnnt import RNNT - - model = RNNT(config['rnnt'], len(rnnt_vocab) + 1, feature_config=config['input_eval']) - model.load_state_dict(load_and_migrate_checkpoint(checkpoint_path)) - - seq_length, batch_size, feature_length = 157, 1, 240 - inp = torch.randn([seq_length, batch_size, feature_length]) - feature_length = torch.LongTensor([seq_length]) - x_padded, x_lens = model.encoder(inp, feature_length) - torch.onnx.export(model.encoder, (inp, feature_length), "rnnt_encoder.onnx", opset_version=12, - input_names=['input', 'feature_length'], output_names=['x_padded', 'x_lens'], - dynamic_axes={'input': {0: 'seq_len', 1: 'batch'}}) - - symbol = torch.LongTensor([[20]]) - hidden = torch.randn([2, batch_size, 320]), torch.randn([2, batch_size, 320]) - g, hidden = model.prediction.forward(symbol, hidden) - torch.onnx.export(model.prediction, (symbol, hidden), "rnnt_prediction.onnx", opset_version=12, - input_names=['symbol', 'hidden_in_1', 'hidden_in_2'], - output_names=['g', 'hidden_out_1', 'hidden_out_2'], - dynamic_axes={'symbol': {0: 'batch'}, 'hidden_in_1': {1: 'batch'}, 'hidden_in_2': {1: 'batch'}}) - - f = torch.randn([batch_size, 1, 1024]) - model.joint.forward(f, g) - torch.onnx.export(model.joint, (f, g), "rnnt_joint.onnx", opset_version=12, - input_names=['0', '1'], output_names=['result'], dynamic_axes={'0': {0: 'batch'}, '1': {0: 'batch'}}) - - -.. code-block:: sh - - python3 export_rnnt_to_onnx.py - - -After completing this step, the files ``rnnt_encoder.onnx``, ``rnnt_prediction.onnx``, and ``rnnt_joint.onnx`` will be saved in the current directory. - -**Step 6**. Run the conversion commands: - -.. code-block:: sh - - mo --input_model rnnt_encoder.onnx --input "input[157,1,240],feature_length->157" - mo --input_model rnnt_prediction.onnx --input "symbol[1,1],hidden_in_1[2,1,320],hidden_in_2[2,1,320]" - mo --input_model rnnt_joint.onnx --input "0[1,1,1024],1[1,1,320]" - - -.. note:: - - The hardcoded value for sequence length = 157 was taken from the MLCommons, but conversion to IR preserves network :doc:`reshapeability <../../../../../../openvino-workflow/running-inference/changing-input-shape>`. Therefore, input shapes can be changed manually to any value during either conversion or inference. 
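As the note above says, the sequence length is not fixed by the export, so a different value can be chosen when converting. The sketch below shows the same encoder conversion through the legacy Python conversion API; the value 200 is an arbitrary choice made only for illustration, and the ``input`` string is assumed to follow the same syntax as the command-line argument.

.. code-block:: py
   :force:

   from openvino.tools.mo import convert_model

   # 200 is an arbitrary sequence length chosen for illustration;
   # the exported encoder keeps this dimension reshapeable.
   ov_encoder = convert_model(
       "rnnt_encoder.onnx",
       input="input[200,1,240],feature_length->200",
   )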
- - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-yolact.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-yolact.rst deleted file mode 100644 index 0eacbd6c5b0bf9..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-pytorch-yolact.rst +++ /dev/null @@ -1,222 +0,0 @@ -Converting a PyTorch YOLACT Model -================================= - - -.. meta:: - :description: Learn how to convert a YOLACT model - from PyTorch to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation. -The PyTorch implementation is publicly available in `this GitHub repository `__. -The YOLACT++ model is not supported, because it uses deformable convolutional layers that cannot be represented in ONNX format. - -.. _patch-file-yolact: - -Creating a Patch File -##################### - -Before converting the model, create a patch file for the repository. -The patch modifies the framework code by adding a special command-line argument to the framework options. The argument enables inference graph dumping: - -1. Go to a writable directory and create a ``YOLACT_onnx_export.patch`` file. -2. Copy the following diff code to the file: - - .. 
code-block:: console - - From 76deb67d4f09f29feda1a633358caa18335d9e9f Mon Sep 17 00:00:00 2001 - From: "OpenVINO" - Date: Fri, 12 Mar 2021 00:27:35 +0300 - Subject: [PATCH] Add export to ONNX - - --- - eval.py | 5 ++++- - utils/augmentations.py | 7 +++++-- - yolact.py | 29 +++++++++++++++++++---------- - 3 files changed, 28 insertions(+), 13 deletions(-) - - diff --git a/eval.py b/eval.py - index 547bc0a..bde0680 100644 - --- a/eval.py - +++ b/eval.py - @@ -593,9 +593,12 @@ def badhash(x): - return x - - def evalimage(net:Yolact, path:str, save_path:str=None): - - frame = torch.from_numpy(cv2.imread(path)).cuda().float() - + frame = torch.from_numpy(cv2.imread(path)).float() - + if torch.cuda.is_available(): - + frame = frame.cuda() - batch = FastBaseTransform()(frame.unsqueeze(0)) - preds = net(batch) - + torch.onnx.export(net, batch, "yolact.onnx", opset_version=11) - - img_numpy = prep_display(preds, frame, None, None, undo_transform=False) - - diff --git a/utils/augmentations.py b/utils/augmentations.py - index cc7a73a..2420603 100644 - --- a/utils/augmentations.py - +++ b/utils/augmentations.py - @@ -623,8 +623,11 @@ class FastBaseTransform(torch.nn.Module): - def __init__(self): - super().__init__() - - - self.mean = torch.Tensor(MEANS).float().cuda()[None, :, None, None] - - self.std = torch.Tensor( STD ).float().cuda()[None, :, None, None] - + self.mean = torch.Tensor(MEANS).float()[None, :, None, None] - + self.std = torch.Tensor( STD ).float()[None, :, None, None] - + if torch.cuda.is_available(): - + self.mean.cuda() - + self.std.cuda() - self.transform = cfg.backbone.transform - - def forward(self, img): - diff --git a/yolact.py b/yolact.py - index d83703b..f8c787c 100644 - --- a/yolact.py - +++ b/yolact.py - @@ -17,19 +17,22 @@ import torch.backends.cudnn as cudnn - from utils import timer - from utils.functions import MovingAverage, make_net - - -# This is required for Pytorch 1.0.1 on Windows to initialize Cuda on some driver versions. - -# See the bug report here: https://github.com/pytorch/pytorch/issues/17108 - -torch.cuda.current_device() - - - -# As of March 10, 2019, Pytorch DataParallel still doesn't support JIT Script Modules - -use_jit = torch.cuda.device_count() <= 1 - -if not use_jit: - - print('Multiple GPUs detected! Turning off JIT.') - +use_jit = False - - ScriptModuleWrapper = torch.jit.ScriptModule if use_jit else nn.Module - script_method_wrapper = torch.jit.script_method if use_jit else lambda fn, _rcn=None: fn - - - +def decode(loc, priors): - + variances = [0.1, 0.2] - + boxes = torch.cat((priors[:, :2] + loc[:, :, :2] * variances[0] * priors[:, 2:], priors[:, 2:] * torch.exp(loc[:, :, 2:] * variances[1])), 2) - + - + boxes_result1 = boxes[:, :, :2] - boxes[:, :, 2:] / 2 - + boxes_result2 = boxes[:, :, 2:] + boxes_result1 - + boxes_result = torch.cat((boxes_result1, boxes_result2), 2) - + - + return boxes_result - + - - class Concat(nn.Module): - def __init__(self, nets, extra_params): - @@ -476,7 +479,10 @@ class Yolact(nn.Module): - - def load_weights(self, path): - """ Loads weights from a compressed save file. 
""" - - state_dict = torch.load(path) - + if torch.cuda.is_available(): - + state_dict = torch.load(path) - + else: - + state_dict = torch.load(path, map_location=torch.device('cpu')) - - # For backward compatibility, remove these (the new variable is called layers) - for key in list(state_dict.keys()): - @@ -673,8 +679,11 @@ class Yolact(nn.Module): - else: - pred_outs['conf'] = F.softmax(pred_outs['conf'], -1) - - - return self.detect(pred_outs, self) - + pred_outs['boxes'] = decode(pred_outs['loc'], pred_outs['priors']) # decode output boxes - - + pred_outs.pop('priors') # remove unused in postprocessing layers - + pred_outs.pop('loc') # remove unused in postprocessing layers - + return pred_outs - - - - -- - - -3. Save and close the file. - -Converting a YOLACT Model to the OpenVINO IR format -################################################### - -**Step 1**. Clone the GitHub repository and check out the commit: - -1. Clone the YOLACT repository: - - .. code-block:: sh - - git clone https://github.com/dbolya/yolact - - -2. Check out the necessary commit: - - .. code-block:: sh - - git checkout 57b8f2d95e62e2e649b382f516ab41f949b57239 - - -3. Set up the environment as described in ``README.md``. - -**Step 2**. Download a pre-trained model from the list attached in the ``Evaluation`` section of ``README.md`` document, for example ``yolact_base_54_800000.pth``. - -**Step 3**. Export the model to ONNX format. - -1. Apply the `YOLACT_onnx_export.patch` patch to the repository. Refer to the :ref:`Create a Patch File ` instructions if you do not have it: - - .. code-block:: sh - - git apply /path/to/patch/YOLACT_onnx_export.patch - - -2. Evaluate the YOLACT model to export it to ONNX format: - - .. code-block:: sh - - python3 eval.py \ - --trained_model=/path/to/yolact_base_54_800000.pth \ - --score_threshold=0.3 \ - --top_k=10 \ - --image=/path/to/image.jpg \ - --cuda=False - - -3. The script may fail, but you should get ``yolact.onnx`` file. - -**Step 4**. Convert the model to the IR: - -.. code-block:: sh - - mo --input_model /path/to/yolact.onnx - - -**Step 5**. Embed input preprocessing into the IR: - -To get performance gain by offloading to the OpenVINO application of mean/scale values and RGB->BGR conversion, use the following model conversion API parameters: - -* If the backbone of the model is Resnet50-FPN or Resnet101-FPN, use the following MO command line: - - .. code-block:: sh - - mo \ - --input_model /path/to/yolact.onnx \ - --reverse_input_channels \ - --mean_values "[123.68, 116.78, 103.94]" \ - --scale_values "[58.40, 57.12, 57.38]" - - -* If the backbone of the model is Darknet53-FPN, use the following MO command line: - - .. 
code-block:: sh - - mo \ - --input_model /path/to/yolact.onnx \ - --reverse_input_channels \ - --scale 255 - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-attention-ocr.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-attention-ocr.rst deleted file mode 100644 index dd419456ccbcd3..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-attention-ocr.rst +++ /dev/null @@ -1,60 +0,0 @@ -Converting a TensorFlow Attention OCR Model -=========================================== - - -.. meta:: - :description: Learn how to convert the Attention OCR - model from the TensorFlow Attention OCR repository to the - OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -This tutorial explains how to convert the Attention OCR (AOCR) model from the `TensorFlow Attention OCR repository `__ to the Intermediate Representation (IR). - -Extracting a Model from ``aocr`` Library -######################################## - -To get an AOCR model, download ``aocr`` Python library: - -.. code-block:: sh - - pip install git+https://github.com/emedvedev/attention-ocr.git@master#egg=aocr - -This library contains a pretrained model and allows training and running AOCR, using the command line. After installation of `aocr`, extract the model: - -.. code-block:: sh - - aocr export --format=frozengraph model/path/ - -Once extracted, the model can be found in ``model/path/`` folder. - -Converting the TensorFlow AOCR Model to IR -########################################## - -The original AOCR model includes the preprocessing data, which contains: - -* Decoding input data to binary format where input data is an image represented as a string. -* Resizing binary image to working resolution. - -The resized image is sent to the convolution neural network (CNN). Because model conversion API does not support image decoding, the preprocessing part of the model should be cut off, using the ``input`` command-line parameter. - -.. code-block:: sh - - mo \ - --input_model=model/path/frozen_graph.pb \ - --input="map/TensorArrayStack/TensorArrayGatherV3:0[1,32,86,1]" \ - --output "transpose_1,transpose_2" \ - --output_dir path/to/ir/ - - -Where: - -* ``map/TensorArrayStack/TensorArrayGatherV3:0[1 32 86 1]`` - name of node producing tensor after preprocessing. -* ``transpose_1`` - name of the node producing tensor with predicted characters. -* ``transpose_2`` - name of the node producing tensor with predicted characters probabilities. 
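The tutorial above cuts the image-decoding sub-graph off, so decoding and resizing to the working resolution become the application's responsibility. Below is a minimal, illustrative sketch of that inference-side pipeline using the OpenVINO Runtime Python API; the file names, the lack of extra normalization, and the assumption that the IR keeps the ``[1, 32, 86, 1]`` input layout given at conversion time are placeholders rather than part of the original guide.

.. code-block:: py
   :force:

   import cv2
   import numpy as np
   import openvino as ov

   core = ov.Core()
   compiled = core.compile_model("aocr.xml", "CPU")  # IR produced by the command above (placeholder path)

   # The preprocessing that was cut off now happens here: decode the image and
   # resize it to the working resolution expected by the model, [1, 32, 86, 1].
   image = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path
   image = cv2.resize(image, (86, 32)).astype(np.float32)   # cv2 takes (width, height), giving a 32x86 image
   blob = image.reshape(1, 32, 86, 1)

   results = compiled([blob])
   for output in compiled.outputs:
       print(output.get_any_name(), results[output].shape)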
- diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-bert.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-bert.rst deleted file mode 100644 index 197b6e13c4e27a..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-bert.rst +++ /dev/null @@ -1,170 +0,0 @@ -Converting a TensorFlow BERT Model -================================== - - -.. meta:: - :description: Learn how to convert a BERT model - from TensorFlow to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -Pretrained models for BERT (Bidirectional Encoder Representations from Transformers) are -`publicly available `__. - -.. _supported_models: - -Supported Models -################ - -The following models from the pretrained `BERT model list `__ are currently supported: - -* ``BERT-Base, Cased`` -* ``BERT-Base, Uncased`` -* ``BERT-Base, Multilingual Cased`` -* ``BERT-Base, Multilingual Uncased`` -* ``BERT-Base, Chinese`` -* ``BERT-Large, Cased`` -* ``BERT-Large, Uncased`` - -Downloading the Pretrained BERT Model -##################################### - -Download and unzip an archive with the `BERT-Base, Multilingual Uncased Model `__. - -After the archive is unzipped, the directory ``uncased_L-12_H-768_A-12`` is created and contains the following files: - -* ``bert_config.json`` -* ``bert_model.ckpt.data-00000-of-00001`` -* ``bert_model.ckpt.index`` -* ``bert_model.ckpt.meta`` -* ``vocab.txt`` - -Pretrained model meta-graph files are ``bert_model.ckpt.*``. - -Converting a TensorFlow BERT Model to IR -######################################### - -To generate the BERT Intermediate Representation (IR) of the model, run model conversion with the following parameters: - -.. code-block:: sh - - mo \ - --input_meta_graph uncased_L-12_H-768_A-12/bert_model.ckpt.meta \ - --output bert/pooler/dense/Tanh \ - --input Placeholder{i32},Placeholder_1{i32},Placeholder_2{i32} - - -Pretrained models are not suitable for batch reshaping out-of-the-box because of multiple hardcoded shapes in the model. - -Converting a Reshapable TensorFlow BERT Model to OpenVINO IR -============================================================= - -Follow these steps to make a pretrained TensorFlow BERT model reshapable over batch dimension: - -1. Download a pretrained BERT model you want to use from the `Supported Models list <#supported_models>`__. - -2. Clone google-research/bert git repository: - - .. code-block:: sh - - https://github.com/google-research/bert.git - -3. Go to the root directory of the cloned repository: - - .. code-block:: sh - - cd bert - -4. (Optional) Checkout to the commit that the conversion was tested on: - - .. code-block:: sh - - git checkout eedf5716c - -5. 
Download script to load GLUE data: - - * For UNIX-like systems, run the following command: - - .. code-block:: sh - - wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py - - * For Windows systems: - - Download the `Python script `__ to the current working directory. - -6. Download GLUE data by running: - - .. code-block:: sh - - python3 download_glue_data.py --tasks MRPC - -7. Open the file ``modeling.py`` in the text editor and delete lines 923-924. They should look like this: - - .. code-block:: py - :force: - - if not non_static_indexes: - return shape - -8. Open the file ``run_classifier.py`` and insert the following code after the line 645: - - .. code-block:: py - :force: - - import os, sys - import tensorflow as tf - from tensorflow.python.framework import graph_io - with tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph()) as sess: - (assignment_map, initialized_variable_names) = \ - modeling.get_assignment_map_from_checkpoint(tf.compat.v1.trainable_variables(), init_checkpoint) - tf.compat.v1.train.init_from_checkpoint(init_checkpoint, assignment_map) - sess.run(tf.compat.v1.global_variables_initializer()) - frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["bert/pooler/dense/Tanh"]) - graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False) - print('BERT frozen model path {}'.format(os.path.join(os.path.dirname(__file__), 'inference_graph.pb'))) - sys.exit(0) - - Lines before the inserted code should look like this: - - .. code-block:: py - :force: - - (total_loss, per_example_loss, logits, probabilities) = create_model( - bert_config, is_training, input_ids, input_mask, segment_ids, label_ids, - num_labels, use_one_hot_embeddings) - - -9. Set environment variables ``BERT_BASE_DIR``, ``BERT_REPO_DIR`` and run the script ``run_classifier.py`` to create ``inference_graph.pb`` file in the root of the cloned BERT repository. - - .. code-block:: sh - - export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12 - export BERT_REPO_DIR=/current/working/directory - - python3 run_classifier.py \ - --task_name=MRPC \ - --do_eval=true \ - --data_dir=$BERT_REPO_DIR/glue_data/MRPC \ - --vocab_file=$BERT_BASE_DIR/vocab.txt \ - --bert_config_file=$BERT_BASE_DIR/bert_config.json \ - --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ - --output_dir=./ - - Run model conversion with the following command line parameters to generate reshape-able BERT Intermediate Representation (IR): - - .. code-block:: sh - - mo \ - --input_model inference_graph.pb \ - --input "IteratorGetNext:0{i32}[1,128],IteratorGetNext:1{i32}[1,128],IteratorGetNext:4{i32}[1,128]" - -For other applicable parameters, refer to the :doc:`Convert Model from TensorFlow <../[legacy]-convert-tensorflow>` guide. - -For more information about reshape abilities, refer to the :doc:`Using Shape Inference <../../../../../../openvino-workflow/running-inference/changing-input-shape>` guide. 
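Once the reshape-able IR is generated as described above, the batch dimension can be changed at runtime instead of being fixed at conversion time. The sketch below shows what that looks like with the OpenVINO Runtime Python API; the IR file name and the batch size of 8 are arbitrary placeholders, and the reshape only succeeds if the modifications from the steps above were applied.

.. code-block:: py
   :force:

   import openvino as ov

   core = ov.Core()
   model = core.read_model("inference_graph.xml")  # placeholder path to the converted BERT IR

   # Change the batch dimension of every input from 1 to 8; the sequence length stays 128.
   new_shapes = {inp.get_any_name(): ov.PartialShape([8, 128]) for inp in model.inputs}
   model.reshape(new_shapes)

   compiled = core.compile_model(model, "CPU")
   print([list(inp.shape) for inp in compiled.inputs])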
- diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-crnn.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-crnn.rst deleted file mode 100644 index a94d72b4508f3c..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-crnn.rst +++ /dev/null @@ -1,86 +0,0 @@ -Converting a TensorFlow CRNN Model -================================== - - -.. meta:: - :description: Learn how to convert a CRNN model - from TensorFlow to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -This tutorial explains how to convert a CRNN model to OpenVINO™ Intermediate Representation (IR). - -There are several public versions of TensorFlow CRNN model implementation available on GitHub. This tutorial explains how to convert the model from -the `CRNN Tensorflow `__ repository to IR, and is validated with Python 3.7, TensorFlow 1.15.0, and protobuf 3.19.0. -If you have another implementation of CRNN model, it can be converted to OpenVINO IR in a similar way. You need to get inference graph and run model conversion of it. - -**To convert the model to IR:** - -**Step 1.** Clone this GitHub repository and check out the commit: - -1. Clone the repository: - - .. code-block:: sh - - git clone https://github.com/MaybeShewill-CV/CRNN_Tensorflow.git - -2. Go to the ``CRNN_Tensorflow`` directory of the cloned repository: - - .. code-block:: sh - - cd path/to/CRNN_Tensorflow - -3. Check out the necessary commit: - - .. code-block:: sh - - git checkout 64f1f1867bffaacfeacc7a80eebf5834a5726122 - - -**Step 2.** Train the model using the framework or the pretrained checkpoint provided in this repository. - - -**Step 3.** Create an inference graph: - -1. Add the ``CRNN_Tensorflow`` folder to ``PYTHONPATH``. - - * For Linux: - - .. code-block:: sh - - export PYTHONPATH="${PYTHONPATH}:/path/to/CRNN_Tensorflow/" - - - * For Windows, add ``/path/to/CRNN_Tensorflow/`` to the ``PYTHONPATH`` environment variable in settings. - -2. Edit the ``tools/demo_shadownet.py`` script. After ``saver.restore(sess=sess, save_path=weights_path)`` line, add the following code: - - .. code-block:: py - :force: - - from tensorflow.python.framework import graph_io - frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['shadow/LSTMLayers/transpose_time_major']) - graph_io.write_graph(frozen, '.', 'frozen_graph.pb', as_text=False) - -3. Run the demo with the following command: - - .. 
code-block:: sh - - python tools/demo_shadownet.py --image_path data/test_images/test_01.jpg --weights_path model/shadownet/shadownet_2017-10-17-11-47-46.ckpt-199999 - - - If you want to use your checkpoint, replace the path in the ``--weights_path`` parameter with a path to your checkpoint. - -4. In the ``CRNN_Tensorflow`` directory, you will find the inference CRNN graph ``frozen_graph.pb``. You can use this graph with OpenVINO to convert the model to IR and then run inference. - -**Step 4.** Convert the model to IR: - -.. code-block:: sh - - mo --input_model path/to/your/CRNN_Tensorflow/frozen_graph.pb - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-deep-speech.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-deep-speech.rst deleted file mode 100644 index e572b26324faf3..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-deep-speech.rst +++ /dev/null @@ -1,108 +0,0 @@ -Converting a TensorFlow DeepSpeech Model -======================================== - - -.. meta:: - :description: Learn how to convert a DeepSpeech model - from TensorFlow to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -`DeepSpeech project `__ provides an engine to train speech-to-text models. - -Downloading the Pretrained DeepSpeech Model -########################################### - -Create a directory where model and metagraph with pretrained weights will be stored: - -.. code-block:: sh - - mkdir deepspeech - cd deepspeech - -`Pre-trained English speech-to-text model `__ is publicly available. -To download the model, follow the instruction below: - -* For UNIX-like systems, run the following command: - - .. code-block:: sh - - wget -O - https://github.com/mozilla/DeepSpeech/archive/v0.8.2.tar.gz | tar xvfz - - wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.8.2/deepspeech-0.8.2-checkpoint.tar.gz | tar xvfz - - -* For Windows systems: - - 1. Download `the archive with the model `__. - 2. Download the `TensorFlow MetaGraph with pre-trained weights `__. - 3. Unpack it with a file archiver application. - -Freezing the Model into a "\*.pb File" -###################################### - -After unpacking the archives above, you have to freeze the model. This requires -TensorFlow version 1, which is not available under Python 3.8, so you need Python 3.7 or lower. -Before freezing, deploy a virtual environment and install the required packages: - -.. code-block:: sh - - virtualenv --python=python3.7 venv-deep-speech - source venv-deep-speech/bin/activate - cd DeepSpeech-0.8.2 - pip3 install -e . - -Freeze the model with the following command: - -.. 
code-block:: sh - - python3 DeepSpeech.py --checkpoint_dir ../deepspeech-0.8.2-checkpoint --export_dir ../ - -After that, you will get the pretrained frozen model file ``output_graph.pb`` in the directory ``deepspeech`` created at -the beginning. The model contains the preprocessing and main parts. The first preprocessing part performs conversion of input -spectrogram into a form useful for speech recognition (mel). This part of the model is not convertible into -the IR because it contains unsupported operations ``AudioSpectrogram`` and ``Mfcc``. - -The main and most computationally expensive part of the model converts the preprocessed audio into text. -There are two specificities with the supported part of the model. - -The first is that the model contains an input with sequence length. So the model can be converted with -a fixed input length shape, thus the model is not reshapable. -Refer to the :doc:`Using Shape Inference <../../../../../../openvino-workflow/running-inference/changing-input-shape>` guide. - -The second is that the frozen model still has two variables: ``previous_state_c`` and ``previous_state_h``, figure -with the frozen \*.pb model is below. It means that the model keeps training these variables at each inference. - -.. image:: ../../../../../../assets/images/DeepSpeech-0.8.2.png - -At the first inference, the variables are initialized with zero tensors. After execution, the results of the ``BlockLSTM`` -are assigned to cell state and hidden state, which are these two variables. - -Converting the Main Part of DeepSpeech Model into OpenVINO IR -############################################################# - -Model conversion API assumes that the output model is for inference only. That is why you should cut ``previous_state_c`` and ``previous_state_h`` variables off and resolve keeping cell and hidden states on the application level. - -There are certain limitations for the model conversion: - -* Time length (``time_len``) and sequence length (``seq_len``) are equal. -* Original model cannot be reshaped, so you should keep original shapes. - -To generate the IR, run model conversion with the following parameters: - -.. code-block:: sh - - mo \ - --input_model output_graph.pb \ - --input "input_lengths->[16],input_node[1,16,19,26],previous_state_h[1,2048],previous_state_c[1,2048]" \ - --output "cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/GatherNd_1,cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/GatherNd,logits" - - -Where: - -* ``input_lengths->[16]`` Replaces the input node with name "input_lengths" with a constant tensor of shape [1] with a single integer value of 16. This means that the model now can consume input sequences of length 16 only. -* ``input_node[1 16 19 26],previous_state_h[1 2048],previous_state_c[1 2048]`` replaces the variables with a placeholder. -* ``output ".../GatherNd_1,.../GatherNd,logits"`` output node names. 
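Since the conversion above cuts the ``previous_state_c`` and ``previous_state_h`` variables off, the application has to carry the LSTM state between inferences itself. The following rough sketch illustrates that loop with the OpenVINO Runtime Python API. The IR path is a placeholder, the random tensor stands in for real audio features, and it is assumed (not verified here) that the converted model keeps the TensorFlow tensor names (``input_node``, ``previous_state_h``, ``previous_state_c``, the two ``GatherNd`` outputs, and ``logits``) and that ``GatherNd_1``/``GatherNd`` carry the updated hidden and cell states.

.. code-block:: py
   :force:

   import numpy as np
   import openvino as ov

   core = ov.Core()
   compiled = core.compile_model("output_graph.xml", "CPU")  # placeholder path to the converted IR

   # ``input_lengths`` was replaced with a constant at conversion time, so only
   # the feature tensor and the two state tensors remain as inputs.
   state_h = np.zeros((1, 2048), dtype=np.float32)
   state_c = np.zeros((1, 2048), dtype=np.float32)
   features = np.random.rand(1, 16, 19, 26).astype(np.float32)  # stand-in for real preprocessed features

   for _ in range(3):
       results = compiled({
           "input_node": features,
           "previous_state_h": state_h,
           "previous_state_c": state_c,
       })
       # Assumption: these outputs hold the updated hidden/cell states to feed back.
       state_h = results["cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/GatherNd_1"]
       state_c = results["cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/GatherNd"]
       logits = results["logits"]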
- diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-efficient-det.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-efficient-det.rst deleted file mode 100644 index c894765a5dc604..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-efficient-det.rst +++ /dev/null @@ -1,90 +0,0 @@ -Converting TensorFlow EfficientDet Models -========================================= - - -.. meta:: - :description: Learn how to convert an EfficientDet model - from TensorFlow to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -This tutorial explains how to convert EfficientDet public object detection models to the Intermediate Representation (IR). - -.. _efficientdet-to-ir: - -Converting EfficientDet Model to the IR -####################################### - -There are several public versions of EfficientDet model implementation available on GitHub. This tutorial explains how to -convert models from the `repository `__ (commit 96e1fee) to the OpenVINO format. - -Download and extract the model checkpoint `efficientdet-d4.tar.gz `__ -referenced in the **"Pretrained EfficientDet Checkpoints"** section of the model repository: - -.. code-block:: sh - - wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/coco2/efficientdet-d4.tar.gz - tar zxvf efficientdet-d4.tar.gz - -Converting an EfficientDet TensorFlow Model to the IR -+++++++++++++++++++++++++++++++++++++++++++++++++++++ - -To generate the IR of the EfficientDet TensorFlow model, run: - -.. code-block:: sh - - mo \ - --input_meta_graph efficientdet-d4/model.meta \ - --input_shape [1,$IMAGE_SIZE,$IMAGE_SIZE,3] \ - --reverse_input_channels - - -Where ``$IMAGE_SIZE`` is the size that the input image of the original TensorFlow model will be resized to. Different -EfficientDet models were trained with different input image sizes. To determine the right one, refer to the ``efficientdet_model_param_dict`` -dictionary in the `hparams_config.py `__ file. -The attribute ``image_size`` specifies the shape to be defined for the model conversion. - -.. note:: - - The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the ``RGB<->BGR`` conversion specifying the command-line parameter: ``--reverse_input_channels``. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the :doc:`Converting a Model to Intermediate Representation (IR) <../../[legacy]-setting-input-shapes>` guide. - -OpenVINO toolkit provides samples that can be used to infer EfficientDet model. 
-For more information, refer to the `Open Model Zoo Demos `__. - -.. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format have are now - published on `Hugging Face `__. - - -Interpreting Results of the TensorFlow Model and the IR -####################################################### - -The TensorFlow model produces as output a list of 7-element tuples: ``[image_id, y_min, x_min, y_max, x_max, confidence, class_id]``, where: - -* ``image_id`` -- image batch index. -* ``y_min`` -- absolute ``y`` coordinate of the lower left corner of the detected object. -* ``x_min`` -- absolute ``x`` coordinate of the lower left corner of the detected object. -* ``y_max`` -- absolute ``y`` coordinate of the upper right corner of the detected object. -* ``x_max`` -- absolute ``x`` coordinate of the upper right corner of the detected object. -* ``confidence`` -- the confidence of the detected object. -* ``class_id`` -- the id of the detected object class counted from 1. - -The output of the IR is a list of 7-element tuples: ``[image_id, class_id, confidence, x_min, y_min, x_max, y_max]``, where: - -* ``image_id`` -- image batch index. -* ``class_id`` -- the id of the detected object class counted from 0. -* ``confidence`` -- the confidence of the detected object. -* ``x_min`` -- normalized ``x`` coordinate of the lower left corner of the detected object. -* ``y_min`` -- normalized ``y`` coordinate of the lower left corner of the detected object. -* ``x_max`` -- normalized ``x`` coordinate of the upper right corner of the detected object. -* ``y_max`` -- normalized ``y`` coordinate of the upper right corner of the detected object. - -The first element with ``image_id = -1`` means end of data. - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-face-net.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-face-net.rst deleted file mode 100644 index a528718349f717..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-face-net.rst +++ /dev/null @@ -1,42 +0,0 @@ -Converting TensorFlow FaceNet Models -==================================== - - -.. meta:: - :description: Learn how to convert a FaceNet model - from TensorFlow to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Supported Model Formats <../../../../../../openvino-workflow/model-preparation>` article. - -`Public pre-trained FaceNet models `__ contain both training -and inference part of graph. Switch between this two states is manageable with placeholder value. -Intermediate Representation (IR) models are intended for inference, which means that train part is redundant. 
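For the EfficientDet IR output layout described in the tutorial above (``[image_id, class_id, confidence, x_min, y_min, x_max, y_max]`` with normalized coordinates and ``image_id = -1`` as the end-of-data marker), a small post-processing helper might look as follows. This is an illustrative sketch only; the confidence threshold and the assumption that the raw output can be flattened to an ``N x 7`` array are not taken from the original article.

.. code-block:: py
   :force:

   import numpy as np

   def parse_detections(detections, image_width, image_height, threshold=0.5):
       """Convert raw EfficientDet IR detections into pixel-space boxes."""
       boxes = []
       for image_id, class_id, confidence, x_min, y_min, x_max, y_max in detections.reshape(-1, 7):
           if image_id == -1:          # end-of-data marker
               break
           if confidence < threshold:  # drop low-confidence detections
               continue
           boxes.append({
               "class_id": int(class_id),
               "confidence": float(confidence),
               "box": (int(x_min * image_width), int(y_min * image_height),
                       int(x_max * image_width), int(y_max * image_height)),
           })
       return boxes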
- -There are two inputs in this network: boolean ``phase_train`` which manages state of the graph (train/infer) and -``batch_size`` which is a part of batch joining pattern. - -.. image:: ../../../../../../assets/images/FaceNet.svg - -Converting a TensorFlow FaceNet Model to the IR -############################################### - -To generate a FaceNet OpenVINO model, feed a TensorFlow FaceNet model to model conversion API with the following parameters: - -.. code-block:: sh - - mo - --input_model path_to_model/model_name.pb \ - --freeze_placeholder_with_value "phase_train->False" - - -The batch joining pattern transforms to a placeholder with the model default shape if ``--input_shape`` or ``--batch``/``-b`` are not provided. Otherwise, the placeholder shape has custom parameters. - -* ``freeze_placeholder_with_value "phase_train->False"`` to switch graph to inference mode -* ``batch`*/*`-b`` is applicable to override original network batch -* ``input_shape`` is applicable with or without ``input`` -* other options are applicable - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst deleted file mode 100644 index b8d2c592ed931d..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst +++ /dev/null @@ -1,315 +0,0 @@ -Converting a TensorFlow GNMT Model -================================== - - -.. meta:: - :description: Learn how to convert a GNMT model - from TensorFlow to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -This tutorial explains how to convert Google Neural Machine Translation (GNMT) model to the Intermediate Representation (IR). - -There are several public versions of TensorFlow GNMT model implementation available on GitHub. This tutorial explains how to convert the GNMT model from the `TensorFlow Neural Machine Translation (NMT) repository `__ to the IR. - -Creating a Patch File -##################### - -Before converting the model, you need to create a patch file for the repository. The patch modifies the framework code by adding a special command-line argument to the framework options that enables inference graph dumping: - -1. Go to a writable directory and create a ``GNMT_inference.patch`` file. -2. Copy the following diff code to the file: - - .. code-block:: py - - diff --git a/nmt/inference.py b/nmt/inference.py - index 2cbef07..e185490 100644 - --- a/nmt/inference.py - +++ b/nmt/inference.py - @@ -17,9 +17,11 @@ - from __future__ import print_function - - import codecs - +import os - import time - - import tensorflow as tf - +from tensorflow.python.framework import graph_io - - from . import attention_model - from . 
import gnmt_model - @@ -105,6 +107,29 @@ def start_sess_and_load_model(infer_model, ckpt_path): - return sess, loaded_infer_model - - - +def inference_dump_graph(ckpt_path, path_to_dump, hparams, scope=None): - + model_creator = get_model_creator(hparams) - + infer_model = model_helper.create_infer_model(model_creator, hparams, scope) - + sess = tf.Session( - + graph=infer_model.graph, config=utils.get_config_proto()) - + with infer_model.graph.as_default(): - + loaded_infer_model = model_helper.load_model( - + infer_model.model, ckpt_path, sess, "infer") - + utils.print_out("Dumping inference graph to {}".format(path_to_dump)) - + loaded_infer_model.saver.save( - + sess, - + os.path.join(path_to_dump + 'inference_GNMT_graph') - + ) - + utils.print_out("Dumping done!") - + - + output_node_name = 'index_to_string_Lookup' - + utils.print_out("Freezing GNMT graph with output node {}...".format(output_node_name)) - + frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, - + [output_node_name]) - + graph_io.write_graph(frozen, '.', os.path.join(path_to_dump, 'frozen_GNMT_inference_graph.pb'), as_text=False) - + utils.print_out("Freezing done. Freezed model frozen_GNMT_inference_graph.pb saved to {}".format(path_to_dump)) - + - + - def inference(ckpt_path, - inference_input_file, - inference_output_file, - diff --git a/nmt/nmt.py b/nmt/nmt.py - index f5823d8..a733748 100644 - --- a/nmt/nmt.py - +++ b/nmt/nmt.py - @@ -310,6 +310,13 @@ def add_arguments(parser): - parser.add_argument("--num_intra_threads", type=int, default=0, - help="number of intra_op_parallelism_threads") - - + # Special argument for inference model dumping without inference - + parser.add_argument("--dump_inference_model", type="bool", nargs="?", - + const=True, default=False, - + help="Argument for dump inference graph for specified trained ckpt") - + - + parser.add_argument("--path_to_dump", type=str, default="", - + help="Path to dump inference graph.") - - def create_hparams(flags): - """Create training hparams.""" - @@ -396,6 +403,9 @@ def create_hparams(flags): - language_model=flags.language_model, - num_intra_threads=flags.num_intra_threads, - num_inter_threads=flags.num_inter_threads, - + - + dump_inference_model=flags.dump_inference_model, - + path_to_dump=flags.path_to_dump, - ) - - - @@ -613,7 +623,7 @@ def create_or_load_hparams( - return hparams - - - -def run_main(flags, default_hparams, train_fn, inference_fn, target_session=""): - +def run_main(flags, default_hparams, train_fn, inference_fn, inference_dump, target_session=""): - """Run main.""" - # Job - jobid = flags.jobid - @@ -653,8 +663,26 @@ def run_main(flags, default_hparams, train_fn, inference_fn, target_session=""): - out_dir, default_hparams, flags.hparams_path, - save_hparams=(jobid == 0)) - - - ## Train / Decode - - if flags.inference_input_file: - + # Dumping inference model - + if flags.dump_inference_model: - + # Inference indices - + hparams.inference_indices = None - + if flags.inference_list: - + (hparams.inference_indices) = ( - + [int(token) for token in flags.inference_list.split(",")]) - + - + # Ckpt - + ckpt = flags.ckpt - + if not ckpt: - + ckpt = tf.train.latest_checkpoint(out_dir) - + - + # Path to dump graph - + assert flags.path_to_dump != "", "Please, specify path_to_dump model." 
- + path_to_dump = flags.path_to_dump - + if not tf.gfile.Exists(path_to_dump): tf.gfile.MakeDirs(path_to_dump) - + - + inference_dump(ckpt, path_to_dump, hparams) - + elif flags.inference_input_file: - # Inference output directory - trans_file = flags.inference_output_file - assert trans_file - @@ -693,7 +721,8 @@ def main(unused_argv): - default_hparams = create_hparams(FLAGS) - train_fn = train.train - inference_fn = inference.inference - - run_main(FLAGS, default_hparams, train_fn, inference_fn) - + inference_dump = inference.inference_dump_graph - + run_main(FLAGS, default_hparams, train_fn, inference_fn, inference_dump) - - - if __name__ == "__main__": - - -3. Save and close the file. - -Converting a GNMT Model to the IR -################################# - -.. note:: Use TensorFlow version 1.13 or lower. - -**Step 1**. Clone the GitHub repository and check out the commit: - -1. Clone the NMT repository: - - .. code-block:: sh - - git clone https://github.com/tensorflow/nmt.git - -2. Check out the necessary commit: - - .. code-block:: sh - - git checkout b278487980832417ad8ac701c672b5c3dc7fa553 - - -**Step 2**. Get a trained model. You have two options: - -* Train the model with the GNMT ``wmt16_gnmt_4_layer.json`` or ``wmt16_gnmt_8_layer.json`` configuration file using the NMT framework. -* *Do not use the pre-trained checkpoints provided in the NMT repository, as they are outdated and can be incompatible with the current repository version.* - -This tutorial assumes the use of the trained GNMT model from ``wmt16_gnmt_4_layer.json`` config, German to English translation. - -**Step 3**. Create an inference graph: - -The OpenVINO assumes that a model is used for inference only. Hence, before converting the model into the IR, you need to transform the training graph into the inference graph. -For the GNMT model, the training graph and the inference graph have different decoders: the training graph uses a greedy search decoding algorithm, while the inference graph uses a beam search decoding algorithm. - -1. Apply the ``GNMT_inference.patch`` patch to the repository. `Create a Patch File <#Creating-a-Patch-File>`__ instructions if you do not have it: - - .. code-block:: sh - - git apply /path/to/patch/GNMT_inference.patch - - -2. Run the NMT framework to dump the inference model: - - .. code-block:: sh - - python -m nmt.nmt - --src=de - --tgt=en - --ckpt=/path/to/ckpt/translate.ckpt - --hparams_path=/path/to/repository/nmt/nmt/standard_hparams/wmt16_gnmt_4_layer.json - --vocab_prefix=/path/to/vocab/vocab.bpe.32000 - --out_dir="" - --dump_inference_model - --infer_mode beam_search - --path_to_dump /path/to/dump/model/ - - -If you use different checkpoints, use the corresponding values for the ``src``, ``tgt``, ``ckpt``, ``hparams_path``, and ``vocab_prefix`` parameters. -Inference checkpoint ``inference_GNMT_graph`` and frozen inference graph ``frozen_GNMT_inference_graph.pb`` will appear in the ``/path/to/dump/model/`` folder. - -To generate ``vocab.bpe.32000``, execute the ``nmt/scripts/wmt16_en_de.sh`` script. If you face an issue of a size mismatch between the checkpoint graph's embedding layer and vocabulary (both src and target), make sure you add the following code to the ``nmt.py`` file to the ``extend_hparams`` function after the line 508 (after initialization of the ``src_vocab_size`` and ``tgt_vocab_size`` variables): - -.. code-block:: py - :force: - - src_vocab_size -= 1 - tgt_vocab_size -= 1 - - -**Step 4**. Convert the model to the IR: - -.. 
code-block:: sh - - mo - --input_model /path/to/dump/model/frozen_GNMT_inference_graph.pb - --input "IteratorGetNext:1{i32}[1],IteratorGetNext:0{i32}[1,50],dynamic_seq2seq/hash_table_Lookup_1:0[1]->[2],dynamic_seq2seq/hash_table_Lookup:0[1]->[1]" - --output dynamic_seq2seq/decoder/decoder/GatherTree - --output_dir /path/to/output/IR/ - - -Input and output cutting with the ``--input`` and ``--output`` options is required since OpenVINO™ does not support ``IteratorGetNext`` and ``LookupTableFindV2`` operations. - -Input cutting: - -* ``IteratorGetNext`` operation iterates over a dataset. It is cut by output ports: port 0 contains data tensor with shape ``[batch_size, max_sequence_length]``, port 1 contains ``sequence_length`` for every batch with shape ``[batch_size]``. - -* ``LookupTableFindV2`` operations (``dynamic_seq2seq/hash_table_Lookup_1`` and ``dynamic_seq2seq/hash_table_Lookup`` nodes in the graph) are cut with constant values). - -Output cutting: - -* ``LookupTableFindV2`` operation is cut from the output and the ``dynamic_seq2seq/decoder/decoder/GatherTree`` node is treated as a new exit point. - -For more information about model cutting, refer to the :doc:`Cutting Off Parts of a Model <../../[legacy]-cutting-parts-of-a-model>` guide. - -Using a GNMT Model -################## - -.. note:: - - This step assumes you have converted a model to the Intermediate Representation. - -Inputs of the model: - -* ``IteratorGetNext/placeholder_out_port_0`` input with shape ``[batch_size, max_sequence_length]`` contains ``batch_size`` decoded input sentences. Every sentence is decoded the same way as indices of sentence elements in vocabulary and padded with index of ``eos`` (end of sentence symbol). If the length of the sentence is less than ``max_sequence_length``, remaining elements are filled with index of ``eos`` token. - -* ``IteratorGetNext/placeholder_out_port_1`` input with shape ``[batch_size]`` contains sequence lengths for every sentence from the first input. For example, if ``max_sequence_length = 50``, ``batch_size = 1`` and the sentence has only 30 elements, then the input tensor for ``IteratorGetNext/placeholder_out_port_1`` should be ``[30]``. - - -Outputs of the model: - -* ``dynamic_seq2seq/decoder/decoder/GatherTree`` tensor with shape ``[max_sequence_length * 2, batch, beam_size]``, - that contains ``beam_size`` best translations for every sentence from input (also decoded as indices of words in - vocabulary). - -.. note:: - The shape of this tensor in TensorFlow can be different: instead of ``max_sequence_length * 2``, it can be any value less than that, because OpenVINO does not support dynamic shapes of outputs, while TensorFlow can stop decoding iterations when ``eos`` symbol is generated. - -Running GNMT IR ---------------- - -1. With benchmark app: - - .. code-block:: sh - - benchmark_app -m -d CPU - - -2. With OpenVINO Runtime Python API: - - .. note:: - - Before running the example, insert a path to your GNMT ``.xml`` and ``.bin`` files into ``MODEL_PATH`` and ``WEIGHTS_PATH``, and fill ``input_data_tensor`` and ``seq_lengths`` tensors according to your input data. - - .. 
code-block:: py - :force: - - from openvino.inference_engine import IENetwork, IECore - - MODEL_PATH = '/path/to/IR/frozen_GNMT_inference_graph.xml' - WEIGHTS_PATH = '/path/to/IR/frozen_GNMT_inference_graph.bin' - - # Creating network - net = IENetwork( - model=MODEL_PATH, - weights=WEIGHTS_PATH) - - # Creating input data - input_data = {'IteratorGetNext/placeholder_out_port_0': input_data_tensor, - 'IteratorGetNext/placeholder_out_port_1': seq_lengths} - - # Creating plugin and loading extensions - ie = IECore() - ie.add_extension(extension_path="libcpu_extension.so", device_name="CPU") - - # Loading network - exec_net = ie.load_network(network=net, device_name="CPU") - - # Run inference - result_ie = exec_net.infer(input_data) - - -For more information about Python API, refer to the :doc:`OpenVINO Runtime Python API <../../../../../../api/ie_python_api/api>` guide. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-language-1b.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-language-1b.rst deleted file mode 100644 index 1b51809f9d1b6b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-language-1b.rst +++ /dev/null @@ -1,131 +0,0 @@ -Converting a TensorFlow Language Model on One Billion Word Benchmark -==================================================================== - - -.. meta:: - :description: Learn how to convert a TensorFlow Language - Model on One Billion Word Benchmark to the OpenVINO Intermediate - Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -Downloading a Pre-trained Language Model on One Billion Word Benchmark -###################################################################### - -TensorFlow provides a pretrained `Language Model on One Billion Word Benchmark `__. - -To download the model for IR conversion, follow the instructions: - -1. Create new directory to store the model: - - .. code-block:: sh - - mkdir lm_1b - -2. Go to the ``lm_1b`` directory: - - .. code-block:: sh - - cd lm_1b - -3. Download the model GraphDef file: - - .. code-block:: sh - - wget http://download.tensorflow.org/models/LM_LSTM_CNN/graph-2016-09-10.pbtxt - -4. Create new directory to store 12 checkpoint shared files: - - .. code-block:: sh - - mkdir ckpt - -5. Go to the ``ckpt`` directory: - - .. code-block:: sh - - cd ckpt - -6. Download 12 checkpoint shared files: - - .. 
code-block:: sh - - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-base - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-char-embedding - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-lstm - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax0 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax1 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax2 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax3 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax4 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax5 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax6 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax7 - wget http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax8 - - -Once you have downloaded the pretrained model files, you will have the ``lm_1b`` directory with the following hierarchy: - -.. code-block:: sh - - lm_1b/ - graph-2016-09-10.pbtxt - ckpt/ - ckpt-base - ckpt-char-embedding - ckpt-lstm - ckpt-softmax0 - ckpt-softmax1 - ckpt-softmax2 - ckpt-softmax3 - ckpt-softmax4 - ckpt-softmax5 - ckpt-softmax6 - ckpt-softmax7 - ckpt-softmax8 - - - -.. image:: ../../../../../../assets/images/lm_1b.svg - -The frozen model still has two variables: ``Variable`` and ``Variable_1``. -It means that the model keeps training those variables at each inference. - -At the first inference of this graph, the variables are initialized by initial values. -After executing the ``lstm`` nodes, results of execution are assigned to these two variables. - -With each inference of the ``lm_1b`` graph, ``lstm`` initial states data is taken from previous inference -from variables, and states of current inference of ``lstm`` is reassigned to the same variables. - -It helps the model to remember the context of the words that it takes as input. - -Converting a TensorFlow Language Model on One Billion Word Benchmark to IR -########################################################################## - -Model Optimizer assumes that output model is for inference only. -Therefore, you should cut those variables off and resolve keeping cell and hidden states on application level. - -There is a certain limitation for the model conversion: the original model cannot be reshaped, so you should keep original shapes. - -To generate the ``lm_1b`` Intermediate Representation (IR), provide TensorFlow ``lm_1b`` model to the -Model Optimizer with parameters: - -.. code-block:: sh - - mo - --input_model lm_1b/graph-2016-09-10.pbtxt \ - --input_checkpoint lm_1b/ckpt \ - --input_model_is_text \ - --input_shape [50],[50],[1,9216],[1,9216] \ - --output softmax_out,lstm/lstm_0/concat_2,lstm/lstm_1/concat_2 \ - --input char_embedding/EmbeddingLookupUnique/Unique:0,char_embedding/EmbeddingLookupUnique/Unique:1,Variable/read,Variable_1/read - -Where: - -* ``--input char_embedding/EmbeddingLookupUnique/Unique:0,char_embedding/EmbeddingLookupUnique/Unique:1,Variable/read,Variable_1/read`` and ``--input_shape [50],[50],[1,9216],[1,9216]`` replace the variables with a placeholder. 
-* ``--output softmax_out,lstm/lstm_0/concat_2,lstm/lstm_1/concat_2`` specifies output node name and names of LSTM cell states. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-ncf.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-ncf.rst deleted file mode 100644 index a8592e75d65b31..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-ncf.rst +++ /dev/null @@ -1,68 +0,0 @@ -Converting a TensorFlow Neural Collaborative Filtering Model -============================================================ - - -.. meta:: - :description: Learn how to convert a Neural Collaborative - Filtering Model from TensorFlow to the OpenVINO Intermediate - Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -This tutorial explains how to convert Neural Collaborative Filtering (NCF) model to the OpenVINO Intermediate Representation. - -`Public TensorFlow NCF model `__ does not contain pre-trained weights. To convert this model to the IR: - -1. Use `the instructions `__ from this repository to train the model. - -2. Freeze the inference graph you get in the previous step in ``model_dir``, following the instructions from the **Freezing Custom Models in Python** section of the :doc:`Converting a TensorFlow Model <../[legacy]-convert-tensorflow>` guide. - - Run the following commands: - - .. code-block:: py - :force: - - import tensorflow as tf - from tensorflow.python.framework import graph_io - - sess = tf.compat.v1.Session() - saver = tf.compat.v1.train.import_meta_graph("/path/to/model/model.meta") - saver.restore(sess, tf.train.latest_checkpoint('/path/to/model/')) - - frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, \ - ["rating/BiasAdd"]) - graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False) - - where ``rating/BiasAdd`` is an output node. - -3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.) - - .. image:: ../../../../../../assets/images/NCF_start.svg - - However, as the model conversion API does not support such data feeding, you should skip it. Cut - the edges incoming in ``ResourceGather`` port 1: - - .. code-block:: sh - - mo --input_model inference_graph.pb \ - --input 1:embedding/embedding_lookup,1:embedding_1/embedding_lookup, \ - 1:embedding_2/embedding_lookup,1:embedding_3/embedding_lookup \ - --input_shape [256],[256],[256],[256] \ - --output_dir - - In the ``input_shape`` parameter, 256 specifies the ``batch_size`` for your model. - -Alternatively, you can do steps 2 and 3 in one command line: - -.. 
code-block:: sh - - mo --input_meta_graph /path/to/model/model.meta \ - --input 1:embedding/embedding_lookup,1:embedding_1/embedding_lookup, \ - 1:embedding_2/embedding_lookup,1:embedding_3/embedding_lookup \ - --input_shape [256],[256],[256],[256] --output rating/BiasAdd \ - --output_dir - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-object-detection.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-object-detection.rst deleted file mode 100644 index ad321a4abb3cda..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-object-detection.rst +++ /dev/null @@ -1,184 +0,0 @@ -Converting TensorFlow Object Detection API Models -================================================= - - -.. meta:: - :description: Learn how to convert Object Detection - API Models from TensorFlow to the OpenVINO Intermediate - Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -* Starting with the 2022.1 release, model conversion API can convert the TensorFlow Object Detection API Faster and Mask RCNNs topologies differently. By default, model conversion adds operation "Proposal" to the generated IR. This operation needs an additional input to the model with name "image_info" which should be fed with several values describing the preprocessing applied to the input image (refer to the :doc:`Proposal <../../../../../openvino-ir-format/operation-sets/operation-specs/detection/proposal-4>` operation specification for more information). However, this input is redundant for the models trained and inferred with equal size images. Model conversion API can generate IR for such models and insert operation :doc:`DetectionOutput <../../../../../openvino-ir-format/operation-sets/operation-specs/detection/detectionoutput-1>` instead of ``Proposal``. The `DetectionOutput` operation does not require additional model input "image_info". Moreover, for some models the produced inference results are closer to the original TensorFlow model. In order to trigger new behavior, the attribute "operation_to_add" in the corresponding JSON transformation configuration file should be set to value "DetectionOutput" instead of default one "Proposal". -* Starting with the 2021.1 release, model conversion API converts the TensorFlow Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the OpenVINO Runtime using dedicated reshape API. Refer to the :doc:`Using Shape Inference <../../../../../../openvino-workflow/running-inference/changing-input-shape>` guide for more information on how to use this feature. It is possible to change the both spatial dimensions of the input image and batch size. 
-* To generate IRs for TF 1 SSD topologies, model conversion API creates a number of ``PriorBoxClustered`` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the OpenVINO Runtime using dedicated API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations preventing from changing topology input shape. - -Converting a Model -################## - -You can download TensorFlow Object Detection API models from the `TensorFlow 1 Detection Model Zoo `__ or `TensorFlow 2 Detection Model Zoo `__. - -.. note:: - - Before converting, make sure you have configured model conversion API. For configuration steps, refer to the :doc:`Convert a Model <../../../legacy-conversion-api>`. - -To convert a TensorFlow Object Detection API model, run the ``mo`` command with the following required parameters: - -* ``input_model `` - File with a pretrained model (binary or text .pb file after freezing) OR ``saved_model_dir `` for the TensorFlow 2 models -* ``transformations_config `` - A subgraph replacement configuration file with transformations description. For the models downloaded from the TensorFlow Object Detection API zoo, you can find the configuration files in the ``/openvino/tools/mo/front/tf`` directory. Use: - - * ``ssd_v2_support.json`` - for frozen SSD topologies from the models zoo version up to 1.13.X inclusively - * ``ssd_support_api_v.1.14.json`` - for SSD topologies trained using the TensorFlow Object Detection API version 1.14 up to 1.14.X inclusively - * ``ssd_support_api_v.1.15.json`` - for SSD topologies trained using the TensorFlow Object Detection API version 1.15 up to 2.0 - * ``ssd_support_api_v.2.0.json`` - for SSD topologies trained using the TensorFlow Object Detection API version 2.0 up to 2.3.X inclusively - * ``ssd_support_api_v.2.4.json`` - for SSD topologies trained using the TensorFlow Object Detection API version 2.4 or higher - * ``efficient_det_support_api_v.2.0.json`` - for EfficientDet topologies trained using the TensorFlow Object Detection API version 2.0 up to 2.3.X inclusively - * ``efficient_det_support_api_v.2.4.json`` - for EfficientDet topologies trained using the TensorFlow Object Detection API version 2.4 or higher - * ``faster_rcnn_support.json`` - for Faster R-CNN topologies from the TF 1.X models zoo trained with TensorFlow version up to 1.6.X inclusively - * ``faster_rcnn_support_api_v1.7.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 1.7.0 up to 1.9.X inclusively - * ``faster_rcnn_support_api_v1.10.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 1.10.0 up to 1.12.X inclusively - * ``faster_rcnn_support_api_v1.13.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 1.13.X - * ``faster_rcnn_support_api_v1.14.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 1.14.0 up to 1.14.X inclusively - * ``faster_rcnn_support_api_v1.15.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 1.15.0 up to 2.0 - * ``faster_rcnn_support_api_v2.0.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 2.0 up to 2.3.X inclusively - * ``faster_rcnn_support_api_v2.4.json`` - for Faster R-CNN topologies trained using the TensorFlow Object Detection API version 2.4 or 
higher - * ``mask_rcnn_support.json`` - for Mask R-CNN topologies from the TF 1.X models zoo trained with TensorFlow version 1.9.0 or lower. - * ``mask_rcnn_support_api_v1.7.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 1.7.0 up to 1.9.X inclusively - * ``mask_rcnn_support_api_v1.11.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 1.11.0 up to 1.12.X inclusively - * ``mask_rcnn_support_api_v1.13.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 1.13.0 up to 1.13.X inclusively - * ``mask_rcnn_support_api_v1.14.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 1.14.0 up to 1.14.X inclusively - * ``mask_rcnn_support_api_v1.15.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 1.15.0 up to 2.0 - * ``mask_rcnn_support_api_v2.0.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 2.0 up to 2.3.X inclusively - * ``mask_rcnn_support_api_v2.4.json`` - for Mask R-CNN topologies trained using the TensorFlow Object Detection API version 2.4 or higher - * ``rfcn_support.json`` - for RFCN topology from the models zoo trained with TensorFlow version up to 1.9.X inclusively - * ``rfcn_support_api_v1.10.json`` - for RFCN topology from the models zoo frozen with TensorFlow version 1.10.0 up to 1.12.X inclusively - * ``rfcn_support_api_v1.13.json`` - for RFCN topology from the models zoo frozen with TensorFlow version 1.13.X - * ``rfcn_support_api_v1.14.json`` - for RFCN topology from the models zoo frozen with TensorFlow version 1.14.0 or higher - -* ``tensorflow_object_detection_api_pipeline_config `` - A special configuration file that describes the topology hyper-parameters and structure of the TensorFlow Object Detection API model. For the models downloaded from the TensorFlow Object Detection API zoo, the configuration file is named ``pipeline.config``. If you plan to train a model yourself, you can find templates for these files in the `models repository `__. -* ``input_shape`` (optional) - A custom input image shape. For more information how the ``input_shape`` parameter is handled for the TensorFlow Object Detection API models, refer to the `Custom Input Shape <#Custom-Input-Shape>`__ guide. - -.. note:: - - The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the ``RGB<->BGR`` conversion specifying the command-line parameter: ``reverse_input_channels``. Otherwise, inference results may be incorrect. If you convert a TensorFlow Object Detection API model to use with the OpenVINO sample applications, you must specify the ``reverse_input_channels`` parameter. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the :doc:`Converting a Model to Intermediate Representation (IR) <../../[legacy]-setting-input-shapes>` guide. - -Additionally to the mandatory parameters listed above you can use optional conversion parameters if needed. A full list of parameters is available in the :doc:`Converting a TensorFlow Model <../[legacy]-convert-tensorflow>` guide. - -For example, if you downloaded the pre-trained `SSD InceptionV2 topology `__ and extracted archive to the directory ``/tmp/ssd_inception_v2_coco_2018_01_28``, the sample command line to convert the model looks as follows: - -.. 
code-block:: sh - - mo --input_model=/tmp/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --transformations_config front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /tmp/ssd_inception_v2_coco_2018_01_28/pipeline.config --reverse_input_channels - - -OpenVINO™ Toolkit Samples and Open Model Zoo Demos -################################################## - -OpenVINO comes with a number of samples to demonstrate use of OpenVINO Runtime API. Additionally, -Open Model Zoo provides set of demo applications to show implementation of close to real life applications, -based on deep learning in various tasks, including Image Classification, Visual Object Detection, Text Recognition, -Speech Recognition, Natural Language Processing and others. Refer to the links below for more details. - -* :doc:`OpenVINO Samples <../../../../../../learn-openvino/openvino-samples>` -* :doc:`Open Model Zoo Demos <../../../../model-zoo>` - -.. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - -Feeding Input Images to the Samples -################################### - -There are several important notes about feeding input images to the samples: - -1. OpenVINO samples stretch input image to the size of the input operation without preserving aspect ratio. This behavior is usually correct for most topologies (including SSDs), but incorrect for other models like Faster R-CNN, Mask R-CNN and R-FCN. These models usually use keeps aspect ratio resizer. The type of preprocessing is defined in the pipeline configuration file in the section ``image_resizer``. If keeping aspect ratio is used, then it is necessary to resize image before passing it to the sample and optionally pad the resized image with 0s (if the attribute "pad_to_max_dimension" in the pipeline.config is equal to "true"). - -2. TensorFlow implementation of image resize may be different from the one implemented in the sample. Even reading input image from compressed format (like ``.jpg``) could give different results in the sample and TensorFlow. If it is necessary to compare accuracy between the TensorFlow and the OpenVINO, it is recommended to pass pre-resized input image in a non-compressed format (like ``.bmp``). - -3. If you want to infer the model with the OpenVINO samples, convert the model specifying the ``reverse_input_channels`` command line parameter. The samples load images in BGR channels order, while TensorFlow models were trained with images in RGB order. When the ``reverse_input_channels`` command line parameter is specified, model conversion API performs first convolution or other channel dependent operation weights modification so the output will be like the image is passed with RGB channels order. - -4. Read carefully the messages printed by model conversion API. They contain important instructions on how to prepare input data before running the inference and how to interpret the output. - -Custom Input Shape -################## - -Model conversion handles the command line parameter ``input_shape`` for TensorFlow Object Detection API models in a special way depending on the image resizer type defined in the ``pipeline.config`` file. TensorFlow Object Detection API generates different ``Preprocessor`` sub-graph based on the image resizer type. Model conversion API supports two types of image resizer: - -* ``fixed_shape_resizer`` --- *Stretches* input image to the specific height and width. 
The ``pipeline.config`` snippet below shows a ``fixed_shape_resizer`` sample definition: - - .. code-block:: sh - - image_resizer { - fixed_shape_resizer { - height: 300 - width: 300 - } - } -
-* ``keep_aspect_ratio_resizer`` --- Resizes the input image *keeping aspect ratio* to satisfy the minimum and maximum size constraints. The ``pipeline.config`` snippet below shows a ``keep_aspect_ratio_resizer`` sample definition: - - .. code-block:: sh - - image_resizer { - keep_aspect_ratio_resizer { - min_dimension: 600 - max_dimension: 1024 - } - } -
-If the additional "pad_to_max_dimension" parameter is equal to "true", the resized image will be padded with 0s to a square image of size "max_dimension". -
-Fixed Shape Resizer Replacement -+++++++++++++++++++++++++++++++ -
-* If the ``input_shape`` command line parameter is not specified, model conversion generates an input operation with the height and width as defined in the ``pipeline.config``. -
-* If the ``input_shape [1, H, W, 3]`` command line parameter is specified, model conversion sets the input operation height to ``H`` and width to ``W`` and converts the model. However, the conversion may fail for the following reasons: - - * The model is not reshape-able, meaning that it is not possible to change the size of the model input image. For example, SSD FPN models have ``Reshape`` operations with hard-coded output shapes, but the input size to these ``Reshape`` instances depends on the input image size. In this case, model conversion API shows an error during the shape inference phase. Run model conversion with ``log_level DEBUG`` to see the inferred output shapes of the operations and find the mismatch. - * The custom input shape is too small. For example, if you specify ``input_shape [1,100,100,3]`` to convert an SSD Inception V2 model, one of the convolution or pooling nodes decreases the input tensor spatial dimensions to non-positive values. In this case, model conversion API shows an error message like this: '[ ERROR ] Shape [ 1 -1 -1 256] is not fully defined for output X of "node_name".' - -
-Keeping Aspect Ratio Resizer Replacement -++++++++++++++++++++++++++++++++++++++++ -
-* If the ``input_shape`` command line parameter is not specified, model conversion API generates an input operation with both height and width equal to the value of the ``min_dimension`` parameter in the ``keep_aspect_ratio_resizer``. -
-* If the ``input_shape [1, H, W, 3]`` command line parameter is specified, model conversion API scales the specified input image height ``H`` and width ``W`` to satisfy the ``min_dimension`` and ``max_dimension`` constraints defined in the ``keep_aspect_ratio_resizer``. The following function calculates the input operation height and width: - - .. code-block:: py - :force: - - def calculate_shape_keeping_aspect_ratio(H: int, W: int, min_dimension: int, max_dimension: int): - ratio_min = min_dimension / min(H, W) - ratio_max = max_dimension / max(H, W) - ratio = min(ratio_min, ratio_max) - return int(round(H * ratio)), int(round(W * ratio)) -
-The ``input_shape`` command line parameter should be specified only if the "pad_to_max_dimension" parameter does not exist or is set to "false" in the ``keep_aspect_ratio_resizer``. -
-Models with ``keep_aspect_ratio_resizer`` were trained to recognize objects in their real aspect ratio, in contrast with most of the classification topologies, which are trained to recognize objects stretched vertically and horizontally as well. By default, topologies are converted with ``keep_aspect_ratio_resizer`` to consume a square input image.
If a non-square image is provided as input, it is stretched without keeping the aspect ratio, which results in decreased object detection quality. -
-.. note:: - - It is highly recommended to specify the ``input_shape`` command line parameter for the models with ``keep_aspect_ratio_resizer``, if the input image dimensions are known in advance. -
-Model Conversion Process in Detail -################################## -
-This section is intended for users who want to understand how model conversion API performs Object Detection API model conversion in detail. The information in this section is also useful for users with complex models that are not converted with model conversion API out of the box. It is highly recommended to read the **Graph Transformation Extensions** section in the :doc:`[Legacy] Model Optimizer Extensibility <../../../legacy-model-optimizer-extensibility>` documentation first to understand the sub-graph replacement concepts used here. -
-It is also important to open the model in the `TensorBoard `__ to see the topology structure. Model conversion API can create an event file that can then be fed to the TensorBoard tool. Run model conversion, providing two command line parameters: -
-* ``input_model `` --- Path to the frozen model. -* ``tensorboard_logdir`` --- Path to the directory where TensorBoard looks for the event files. -
-Implementation of the transformations for Object Detection API models is located in the `file `__. Refer to the code in this file to understand the details of the conversion process. - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-retina-net.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-retina-net.rst deleted file mode 100644 index db2c6424367f58..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-retina-net.rst +++ /dev/null @@ -1,31 +0,0 @@ -Converting a TensorFlow RetinaNet Model -======================================= - -
-.. meta:: - :description: Learn how to convert a RetinaNet model - from TensorFlow to the OpenVINO Intermediate Representation. - -
-.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. -
-This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR). -
-The `public RetinaNet model `__ does not contain pretrained TensorFlow weights. -To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial `__. -
-After converting the model to TensorFlow format, run the following command: - -..
code-block:: sh - - mo --input "input_1[1,1333,1333,3]" --input_model retinanet_resnet50_coco_best_v2.1.0.pb --transformations_config front/tf/retinanet.json - - -Where ``transformations_config`` command-line parameter specifies the configuration json file containing model conversion hints for model conversion API. -The json file contains some parameters that need to be changed if you train the model yourself. It also contains information on how to match endpoints -to replace the subgraph nodes. After the model is converted to the OpenVINO IR format, the output nodes will be replaced with DetectionOutput layer. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-slim-library.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-slim-library.rst deleted file mode 100644 index 847d44fce813b1..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-slim-library.rst +++ /dev/null @@ -1,117 +0,0 @@ -Converting TensorFlow Slim Image Classification Model Library Models -==================================================================== - - -.. meta:: - :description: Learn how to convert a Slim Image - Classification model from TensorFlow to the OpenVINO - Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -`TensorFlow-Slim Image Classification Model Library `__ is a library to define, train and evaluate classification models in TensorFlow. The library contains Python scripts defining the classification topologies together with checkpoint files for several pre-trained classification topologies. To convert a TensorFlow-Slim library model, complete the following steps: - -1. Download the TensorFlow-Slim models `git repository `__. -2. Download the pre-trained model `checkpoint `__. -3. Export the inference graph. -4. Convert the model using model conversion API. - -The `Example of an Inception V1 Model Conversion <#example_of_an_inception_v1_model_conversion>`__ below illustrates the process of converting an Inception V1 Model. - -Example of an Inception V1 Model Conversion -########################################### - -This example demonstrates how to convert the model on Linux OSes, but it could be easily adopted for the Windows OSes. - -**Step 1**. Create a new directory to clone the TensorFlow-Slim git repository to: - -.. code-block:: sh - - mkdir tf_models - -.. code-block:: sh - - git clone https://github.com/tensorflow/models.git tf_models - - -**Step 2**. Download and unpack the `Inception V1 model checkpoint file `__: - -.. code-block:: sh - - wget http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz - -.. code-block:: sh - - tar xzvf inception_v1_2016_08_28.tar.gz - -**Step 3**. 
Export the inference graph --- the protobuf file (``.pb``) containing the architecture of the topology. This file *does not* contain the neural network weights and cannot be used for inference. - -.. code-block:: sh - - python3 tf_models/research/slim/export_inference_graph.py \ - --model_name inception_v1 \ - --output_file inception_v1_inference_graph.pb - - -Model conversion API comes with the summarize graph utility, which identifies graph input and output nodes. Run the utility to determine input/output nodes of the Inception V1 model: - -.. code-block:: sh - - python3 /openvino/tools/mo/utils/summarize_graph.py --input_model ./inception_v1_inference_graph.pb - -The output looks as follows: - -.. code-block:: sh - - 1 input(s) detected: - Name: input, type: float32, shape: (-1,224,224,3) - 1 output(s) detected: - InceptionV1/Logits/Predictions/Reshape_1 - -The tool finds one input node with name ``input``, type ``float32``, fixed image size ``(224,224,3)`` and undefined batch size ``-1``. The output node name is ``InceptionV1/Logits/Predictions/Reshape_1``. - -**Step 4**. Convert the model with the model conversion API: - -.. code-block:: sh - - mo --input_model ./inception_v1_inference_graph.pb --input_checkpoint ./inception_v1.ckpt -b 1 --mean_value [127.5,127.5,127.5] --scale 127.5 - - -The ``-b`` command line parameter is required because model conversion API cannot convert a model with undefined input size. - -For the information on why ``--mean_values`` and ``--scale`` command-line parameters are used, refer to the `Mean and Scale Values for TensorFlow-Slim Models <#Mean-and-Scale-Values-for-TensorFlow-Slim-Models>`__. - -Mean and Scale Values for TensorFlow-Slim Models -################################################# - -The TensorFlow-Slim Models were trained with normalized input data. There are several different normalization algorithms used in the Slim library. OpenVINO classification sample does not perform image pre-processing except resizing to the input layer size. It is necessary to pass mean and scale values to model conversion API so they are embedded into the generated IR in order to get correct classification results. - -The file `preprocessing_factory.py `__ contains a dictionary variable ``preprocessing_fn_map`` defining mapping between the model type and pre-processing function to be used. The function code should be analyzed to figure out the mean/scale values. - -The `inception_preprocessing.py `__ file defines the pre-processing function for the Inception models. The ``preprocess_for_eval`` function contains the following code: - -.. code-block:: py - :force: - - ... - import tensorflow as tf - if image.dtype != tf.float32: - image = tf.image.convert_image_dtype(image, dtype=tf.float32) - ... - image = tf.subtract(image, 0.5) - image = tf.multiply(image, 2.0) - return image - - -Firstly, the ``image`` is converted to data type `tf.float32` and the values in the tensor are scaled to the ``[0, 1]`` range using the `tf.image.convert_image_dtype `__ function. Then the ``0.5`` is subtracted from the image values and values multiplied by ``2.0``. The final image range of values is ``[-1, 1]``. - -OpenVINO classification sample reads an input image as a three-dimensional array of integer values from the range ``[0, 255]``. In order to scale them to ``[-1, 1]`` range, the mean value ``127.5`` for each image channel should be specified as well as a scale factor ``127.5``. - -Similarly, the mean/scale values can be determined for other Slim models. 
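If you want to double-check that this mean/scale pair really reproduces the Slim preprocessing, a quick numerical comparison helps. The snippet below is only an illustrative sketch (it uses NumPy instead of TensorFlow) and assumes an 8-bit RGB input read as values in the ``[0, 255]`` range:

.. code-block:: py

   import numpy as np

   # A fake 8-bit image, as the OpenVINO classification sample would read it.
   image_u8 = np.random.randint(0, 256, size=(224, 224, 3)).astype(np.float32)

   # Reference: the Slim preprocessing chain (scale to [0, 1], subtract 0.5, multiply by 2.0).
   slim_preprocessed = (image_u8 / 255.0 - 0.5) * 2.0

   # Equivalent: the mean/scale pair passed to model conversion (mean 127.5, scale 127.5).
   mo_preprocessed = (image_u8 - 127.5) / 127.5

   # Both paths should produce the same [-1, 1] tensor (up to floating-point error).
   assert np.allclose(slim_preprocessed, mo_preprocessed, atol=1e-5)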
- -The exact mean/scale values are defined in the table with list of supported TensorFlow-Slim models at the :doc:`Converting a TensorFlow Model <../[legacy]-convert-tensorflow>` guide. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-wide-and-deep-family.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-wide-and-deep-family.rst deleted file mode 100644 index d2f83fa12d8e67..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-wide-and-deep-family.rst +++ /dev/null @@ -1,166 +0,0 @@ -Converting TensorFlow Wide and Deep Family Models -================================================= - - -.. meta:: - :description: Learn how to convert Wide and Deep Family - models from TensorFlow to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -The Wide and Deep models is a combination of wide and deep parts for memorization and generalization of object features respectively. -These models can contain different types of object features such as numerical, categorical, sparse and sequential features. These feature types are specified -through Tensorflow tf.feature_column API. Table below presents what feature types are supported by the OpenVINO toolkit. - -.. list-table:: - :header-rows: 1 - - * - numeric - - (weighted) categorical - - categorical with hash - - bucketized - - sequential - - crossed - * - yes - - yes - - no - - yes - - yes - - no - - -.. note:: The categorical with hash and crossed features are currently unsupported since OpenVINO does not cover tensors of the `string` type and operations with them. - -Preparing an Example of Wide and Deep Model -########################################### - -**Step 1**. Clone the GitHub repository with TensorFlow models and move to the directory with an example of Wide and Deep model: - -.. code-block:: sh - - git clone https://github.com/tensorflow/models.git --branch r2.2.0; - cd official/r1/wide_deep - - -The Wide and Deep model is no longer in the master branch of the repository but is still available in the r2.2.0 branch. - - -**Step 2**. Train the model - -As the OpenVINO™ toolkit does not support the categorical with hash and crossed features, such feature types must be switched off in the model -by changing the ``build_model_columns()`` function in `census_dataset.py` as follows: - -.. 
code-block:: py - :force: - - def build_model_columns(): - """Builds a set of wide and deep feature columns.""" - # Continuous variable columns - age = tf.feature_column.numeric_column('age') - education_num = tf.feature_column.numeric_column('education_num') - capital_gain = tf.feature_column.numeric_column('capital_gain') - capital_loss = tf.feature_column.numeric_column('capital_loss') - hours_per_week = tf.feature_column.numeric_column('hours_per_week') - education = tf.feature_column.categorical_column_with_vocabulary_list( - 'education', [ - 'Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college', - 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school', - '5th-6th', '10th', '1st-4th', 'Preschool', '12th']) - marital_status = tf.feature_column.categorical_column_with_vocabulary_list( - 'marital_status', [ - 'Married-civ-spouse', 'Divorced', 'Married-spouse-absent', - 'Never-married', 'Separated', 'Married-AF-spouse', 'Widowed']) - relationship = tf.feature_column.categorical_column_with_vocabulary_list( - 'relationship', [ - 'Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried', - 'Other-relative']) - workclass = tf.feature_column.categorical_column_with_vocabulary_list( - 'workclass', [ - 'Self-emp-not-inc', 'Private', 'State-gov', 'Federal-gov', - 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked']) - # To show an example of hashing: - #occupation = tf.feature_column.categorical_column_with_hash_bucket( - # 'occupation', hash_bucket_size=_HASH_BUCKET_SIZE) - # Transformations. - age_buckets = tf.feature_column.bucketized_column( - age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) - # Wide columns and deep columns. - base_columns = [ - education, marital_status, relationship, workclass, - age_buckets, - ] - crossed_columns = [] - wide_columns = base_columns + crossed_columns - deep_columns = [ - age, - education_num, - capital_gain, - capital_loss, - hours_per_week, - tf.feature_column.indicator_column(workclass), - tf.feature_column.indicator_column(education), - tf.feature_column.indicator_column(marital_status), - tf.feature_column.indicator_column(relationship), - # To show an example of embedding - ] - return wide_columns, deep_columns - -After that, start training with the following command: - -.. code-block:: sh - - python census_main.py - - -Converting the Wide and Deep Model to IR -######################################## - -Use the following command line to convert the saved model file with the checkpoint: - -.. 
code-block:: sh - - mo - --input_checkpoint checkpoint --input_meta_graph model.ckpt.meta - --input "IteratorGetNext:0[2], - IteratorGetNext:1[2], - IteratorGetNext:2[2], - IteratorGetNext:4[2], - IteratorGetNext:7[2], - linear/linear_model/linear_model/linear_model/education/to_sparse_input/indices:0[10,2]{i64}, - linear/linear_model/linear_model/linear_model/education/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - linear/linear_model/linear_model/linear_model/education/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - linear/linear_model/linear_model/linear_model/marital_status/to_sparse_input/indices:0[10,2]{i64}, - linear/linear_model/linear_model/linear_model/marital_status/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - linear/linear_model/linear_model/linear_model/marital_status/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - linear/linear_model/linear_model/linear_model/relationship/to_sparse_input/indices:0[10,2]{i64}, - linear/linear_model/linear_model/linear_model/relationship/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - linear/linear_model/linear_model/linear_model/relationship/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - linear/linear_model/linear_model/linear_model/workclass/to_sparse_input/indices:0[10,2]{i64}, - linear/linear_model/linear_model/linear_model/workclass/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - linear/linear_model/linear_model/linear_model/workclass/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - dnn/input_from_feature_columns/input_layer/education_indicator/to_sparse_input/indices:0[10,2]{i64}, - dnn/input_from_feature_columns/input_layer/education_indicator/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - dnn/input_from_feature_columns/input_layer/education_indicator/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - dnn/input_from_feature_columns/input_layer/marital_status_indicator/to_sparse_input/indices:0[10,2]{i64}, - dnn/input_from_feature_columns/input_layer/marital_status_indicator/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - dnn/input_from_feature_columns/input_layer/marital_status_indicator/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - dnn/input_from_feature_columns/input_layer/relationship_indicator/to_sparse_input/indices:0[10,2]{i64}, - dnn/input_from_feature_columns/input_layer/relationship_indicator/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - dnn/input_from_feature_columns/input_layer/relationship_indicator/to_sparse_input/dense_shape:0[2]{i64}->[2,50], - dnn/input_from_feature_columns/input_layer/workclass_indicator/to_sparse_input/indices:0[10,2]{i64}, - dnn/input_from_feature_columns/input_layer/workclass_indicator/hash_table_Lookup/LookupTableFindV2:0[10]{i64}, - dnn/input_from_feature_columns/input_layer/workclass_indicator/to_sparse_input/dense_shape:0[2]{i64}->[2,50]" - --output head/predictions/probabilities - - -The model contains operations unsupported by the OpenVINO™ toolkit such as ``IteratorGetNext`` and ``LookupTableFindV2``, so the Model Optimizer must prune these nodes. -The pruning is specified through `--input` option. The prunings for ``IteratorGetNext:*`` nodes correspond to numeric features. -The pruning for each categorical feature consists of three prunings for the following nodes: ``*/to_sparse_input/indices:0``, ``*/hash_table_Lookup/LookupTableFindV2:0``, and ``*/to_sparse_input/dense_shape:0``. 
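After the conversion finishes, a quick sanity check is to load the generated IR and list its inputs: every node named in the ``--input`` list above should now appear as a model input with the requested shape. The snippet below is only a sketch; the ``model.xml`` path is a placeholder for wherever the command wrote its output.

.. code-block:: py

   import openvino

   core = openvino.Core()
   # Placeholder path - point it to the IR produced by the conversion command.
   model = core.read_model("model.xml")

   # Each pruned node from the --input list should now be exposed as a model input.
   for model_input in model.inputs:
       print(model_input.get_any_name(), model_input.get_partial_shape())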
- -The above command line generates an OpenVINO model for a batch of two objects, with the total number of actual categorical feature values equal to 10 and maximum size of a sparse categorical feature for one object equal to 50. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-xlnet.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-xlnet.rst deleted file mode 100644 index 853614de85feed..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-xlnet.rst +++ /dev/null @@ -1,208 +0,0 @@ -Converting a TensorFlow XLNet Model -=================================== - - -.. meta:: - :description: Learn how to convert an XLNet model from - TensorFlow to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -Pretrained models for XLNet (Bidirectional Encoder Representations from Transformers) are -`publicly available `__. - -Supported Models -################ - -The following models from the pretrained `XLNet model list `__ are currently supported: - -* `XLNet-Large, Cased `__ -* `XLNet-Base, Cased `__ - -Downloading the Pretrained Base XLNet Model -########################################### - -Download and unzip an archive with the `XLNet-Base, Cased `__. - -After the archive is unzipped, the directory ``cased_L-12_H-768_A-12`` is created and contains the following files: - -* TensorFlow checkpoint (``xlnet_model.ckpt``), containing the pretrained weights (which is actually 3 files) -* sentence piece model (``spiece.model``) used for (de)tokenization -* config file (``xlnet_config.json``), which specifies the hyperparameters of the model - -To get pb-file from the archive contents, you need to do the following. - -1. Run commands - - .. code-block:: sh - - cd ~ - mkdir XLNet-Base - cd XLNet-Base - git clone https://github.com/zihangdai/xlnet - wget https://storage.googleapis.com/xlnet/released_models/cased_L-12_H-768_A-12.zip - unzip cased_L-12_H-768_A-12.zip - mkdir try_save - - -2. Save and run the following Python script in `~/XLNet-Base/xlnet`: - - .. note:: The original model repository has been tested with TensorFlow 1.13.1 under Python2. - - .. 
code-block:: py - :force: - - from collections import namedtuple - - import tensorflow as tf - from tensorflow.python.framework import graph_io - - import model_utils - import xlnet - - LENGTHS = 50 - BATCH = 1 - OUTPUT_DIR = '~/XLNet-Base/try_save/' - INIT_CKPT_PATH = '~/XLNet-Base/xlnet_cased_L-12_H-768_A-12/xlnet_model.ckpt' - XLNET_CONFIG_PATH = '~/XLNet-Base/xlnet_cased_L-12_H-768_A-12/xlnet_config.json' - - FLags = namedtuple('FLags', 'use_tpu init_checkpoint') - FLAGS = FLags(use_tpu=False, init_checkpoint=INIT_CKPT_PATH) - - xlnet_config = xlnet.XLNetConfig(json_path=XLNET_CONFIG_PATH) - run_config = xlnet.RunConfig(is_training=False, use_tpu=False, use_bfloat16=False, dropout=0.1, dropatt=0.1,) - - - sentence_features_input_idx = tf.compat.v1.placeholder(tf.int32, shape=[LENGTHS, BATCH], name='input_ids') - sentence_features_segment_ids = tf.compat.v1.placeholder(tf.int32, shape=[LENGTHS, BATCH], name='seg_ids') - sentence_features_input_mask = tf.compat.v1.placeholder(tf.float32, shape=[LENGTHS, BATCH], name='input_mask') - - with tf.compat.v1.Session() as sess: - xlnet_model = xlnet.XLNetModel(xlnet_config=xlnet_config, run_config=run_config, - input_ids=sentence_features_input_idx, - seg_ids=sentence_features_segment_ids, - input_mask=sentence_features_input_mask) - - sess.run(tf.compat.v1.global_variables_initializer()) - model_utils.init_from_checkpoint(FLAGS, True) - - # Save the variables to disk. - saver = tf.compat.v1.train.Saver() - - # Saving checkpoint - save_path = saver.save(sess, OUTPUT_DIR + "model.ckpt") - - # Freezing model - outputs = ['model/transformer/dropout_2/Identity'] - graph_def_freezed = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), outputs) - - # Saving non-frozen and frozen model to pb - graph_io.write_graph(sess.graph.as_graph_def(), OUTPUT_DIR, 'model.pb', as_text=False) - graph_io.write_graph(graph_def_freezed,OUTPUT_DIR, 'model_frozen.pb', - as_text=False) - - # Write to tensorboard - with tf.compat.v1.summary.FileWriter(logdir=OUTPUT_DIR, graph_def=graph_def_freezed) as writer: - writer.flush() - -Downloading the Pretrained Large XLNet Model -############################################ - -Download and unzip an archive with the `XLNet-Base, Cased `__. - -After unzipping the archive, the directory ``cased_L-12_H-1024_A-16`` is created and contains the following files: - -* TensorFlow checkpoint (``xlnet_model.ckpt``) containing the pretrained weights (which is actually 3 files) -* sentence piece model (``spiece.model``) used for (de)tokenization -* config file (``xlnet_config.json``) which specifies the hyperparameters of the model - -To get ``pb-file`` from the archive contents, follow the instructions below: - -1. Run commands - - .. code-block:: sh - - cd ~ - mkdir XLNet-Large - cd XLNet-Large - git clone https://github.com/zihangdai/xlnet - wget https://storage.googleapis.com/xlnet/released_models/cased_L-24_H-1024_A-16.zip - unzip cased_L-24_H-1024_A-16.zip - mkdir try_save - - -2. Save and run the following Python script in ``~/XLNet-Large/xlnet``: - - .. 
code-block:: py - :force: - - from collections import namedtuple - - import tensorflow as tf - from tensorflow.python.framework import graph_io - - import model_utils - import xlnet - - LENGTHS = 50 - BATCH = 1 - OUTPUT_DIR = '~/XLNet-Large/try_save' - INIT_CKPT_PATH = '~/XLNet-Large/cased_L-24_H-1024_A-16/xlnet_model.ckpt' - XLNET_CONFIG_PATH = '~/XLNet-Large/cased_L-24_H-1024_A-16/xlnet_config.json' - - FLags = namedtuple('FLags', 'use_tpu init_checkpoint') - FLAGS = FLags(use_tpu=False, init_checkpoint=INIT_CKPT_PATH) - - xlnet_config = xlnet.XLNetConfig(json_path=XLNET_CONFIG_PATH) - run_config = xlnet.RunConfig(is_training=False, use_tpu=False, use_bfloat16=False, dropout=0.1, dropatt=0.1,) - - - sentence_features_input_idx = tf.compat.v1.placeholder(tf.int32, shape=[LENGTHS, BATCH], name='input_ids') - sentence_features_segment_ids = tf.compat.v1.placeholder(tf.int32, shape=[LENGTHS, BATCH], name='seg_ids') - sentence_features_input_mask = tf.compat.v1.placeholder(tf.float32, shape=[LENGTHS, BATCH], name='input_mask') - - with tf.compat.v1.Session() as sess: - xlnet_model = xlnet.XLNetModel(xlnet_config=xlnet_config, run_config=run_config, - input_ids=sentence_features_input_idx, - seg_ids=sentence_features_segment_ids, - input_mask=sentence_features_input_mask) - - sess.run(tf.compat.v1.global_variables_initializer()) - model_utils.init_from_checkpoint(FLAGS, True) - - # Save the variables to disk. - saver = tf.compat.v1.train.Saver() - - # Saving checkpoint - save_path = saver.save(sess, OUTPUT_DIR + "model.ckpt") - - # Freezing model - outputs = ['model/transformer/dropout_2/Identity'] - graph_def_freezed = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), outputs) - - # Saving non-frozen and frozen model to pb - graph_io.write_graph(sess.graph.as_graph_def(), OUTPUT_DIR, 'model.pb', as_text=False) - graph_io.write_graph(graph_def_freezed,OUTPUT_DIR, 'model_frozen.pb', - as_text=False) - - # Write to tensorboard - with tf.compat.v1.summary.FileWriter(logdir=OUTPUT_DIR, graph_def=graph_def_freezed) as writer: - writer.flush() - - -The script should save into ``~/XLNet-Large/xlnet``. - -Converting a frozen TensorFlow XLNet Model to IR -################################################# - -To generate the XLNet Intermediate Representation (IR) of the model, run model conversion with the following parameters: - -.. code-block:: sh - - mo --input_model path-to-model/model_frozen.pb \ - --input "input_mask[50,1],input_ids[50,1],seg_ids[50,1]" - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-yolo.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-yolo.rst deleted file mode 100644 index e7e8072b1bda05..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-yolo.rst +++ /dev/null @@ -1,322 +0,0 @@ -Converting TensorFlow YOLO Models -================================= - - -.. meta:: - :description: Learn how to convert YOLO models from - TensorFlow to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. 
It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - -This document explains how to convert real-time object detection YOLOv1, YOLOv2, YOLOv3 and YOLOv4 public models to the Intermediate Representation (IR). All YOLO models are originally implemented in the DarkNet framework and consist of two files: - -* The ``.cfg`` file with model configurations -* The ``.weights`` file with model weights - -Depending on a YOLO model version, the ``convert_model()`` method converts it differently: - -- YOLOv4 must be first converted from Keras to TensorFlow 2. -- YOLOv3 has several implementations. This tutorial uses a TensorFlow implementation of YOLOv3 model, which can be directly converted to an IR. -- YOLOv1 and YOLOv2 models must be first converted to TensorFlow using DarkFlow. - -Converting a YOLOv4 Model to IR -############################### - -This section explains how to convert the YOLOv4 Keras model from the `repository `__ to an IR. To convert the YOLOv4 model, follow the instructions below: - -1. Download YOLOv4 weights and associated with it cfg file: - - - for YOLOv4 ( `weights `__ / `config file `__ ) - - for YOLOv4-tiny ( `weights `__ / `config file `__ ) - -2. Clone the repository with the YOLOv4 model: - - .. code-block:: sh - - git clone https://github.com/david8862/keras-YOLOv3-model-set - - -3. Convert the model to the TensorFlow 2 format: - - - for YOLOv4: - - .. code-block:: sh - - python keras-YOLOv3-model-set/tools/model_converter/convert.py /yolov4.cfg /yolov4.weights - - - - for YOLOv4-tiny: - - .. code-block:: sh - - python keras-YOLOv3-model-set/tools/model_converter/convert.py /yolov4-tiny.cfg /yolov4-tiny.weights - - -4. Run model conversion from the TensorFlow 2 to an IR format: - - .. note:: - - Before you run the conversion, make sure you have installed all the model conversion API dependencies for TensorFlow 2. - - If you get errors, you may need to add the additional step to divide the input by 255: - - .. code-block:: sh - - --scale_values=image_input[255] - - - .. code-block:: sh - - mo --saved_model_dir yolov4 --output_dir models/IRs --input_shape [1,608,608,3] --model_name yolov4 - - -Converting YOLOv3 Model to the OpenVINO format -############################################## - -There are several public versions of TensorFlow YOLOv3 model implementation available on GitHub. This section explains how to convert YOLOv3 model from -the `repository `__ (commit ed60b90) to an IR , but the process is similar for other versions of TensorFlow YOLOv3 model. - -Overview of YOLOv3 Model Architecture -+++++++++++++++++++++++++++++++++++++ - -Originally, YOLOv3 model includes feature extractor called ``Darknet-53`` with three branches at the end that make detections at three different scales. These branches must end with the YOLO ``Region`` layer. - -``Region`` layer was first introduced in the DarkNet framework. Other frameworks, including TensorFlow, do not have the ``Region`` implemented as a single layer, so every author of public YOLOv3 model creates it using simple layers. This badly affects performance. 
For this reason, the main idea of YOLOv3 model conversion to IR is to cut off these custom ``Region`` -like parts of the model and complete the model with the ``Region`` layers where required. - -Dumping a YOLOv3 TensorFlow Model -+++++++++++++++++++++++++++++++++ - -To dump TensorFlow model out of `GitHub repository `__ (commit ed60b90), follow the instructions below: - -1. Clone the repository: - - .. code-block:: sh - - git clone https://github.com/mystic123/tensorflow-yolo-v3.git - cd tensorflow-yolo-v3 - - -2. (Optional) Checkout to the commit that the conversion was tested on: - - .. code-block:: sh - - git checkout ed60b90 - - -3. Download `coco.names `__ file from the DarkNet website **OR** use labels that fit your task. -4. Download the `yolov3.weights `__ (for the YOLOv3 model) or `yolov3-tiny.weights `__ (for the YOLOv3-tiny model) file **OR** use your pre-trained weights with the same structure. -5. Install PIL, which is used by the conversion script in the repo: - - .. code-block:: sh - - pip install pillow - - -6. Run a converter: - - .. note:: This converter works with TensorFlow 1.x and numpy 1.19 or lower. - - - - For YOLO-v3: - - .. code-block:: sh - - python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights - - - - For YOLOv3-tiny: - - .. code-block:: sh - - python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny - - - At this step, you may receive a warning like ``WARNING:tensorflow:Entity <...> could not be transformed and will be executed as-is.``. To work around this issue, switch to gast 0.2.2 with the following command: - - .. code-block:: sh - - pip3 install --user gast==0.2.2 - - -If you have YOLOv3 weights trained for an input image with the size different from 416 (320, 608 or your own), provide the ``--size`` key with the size of your image specified while running the converter. For example, run the following command for an image with size 608: - -.. code-block:: sh - - python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3_608.weights --size 608 - - -Converting a YOLOv3 TensorFlow Model to the OpenVINO format -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -To solve the problems explained in the `YOLOv3 architecture overview <#overview-of-yolov3-model-architecture>`__ section, use the ``yolo_v3.json`` or ``yolo_v3_tiny.json`` (depending on a model) configuration file with custom operations located in the ``/tools/model_optimizer/extensions/front/tf`` repository. - -It consists of several attributes: - -.. code-block:: sh - - [ - { - "id": "TFYOLOV3", - "match_kind": "general", - "custom_attributes": { - "classes": 80, - "anchors": [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326], - "coords": 4, - "num": 9, - "masks":[[6, 7, 8], [3, 4, 5], [0, 1, 2]], - "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"] - } - } - ] - - -where: - -- ``id`` and ``match_kind`` are parameters that you cannot change. -- ``custom_attributes`` is a parameter that stores all the YOLOv3 specific attributes: - - - ``classes``, ``coords``, ``num``, and ``masks`` are attributes that you should copy from the configuration file that was used for model training. If you used DarkNet officially shared weights, you can use ``yolov3.cfg`` or ``yolov3-tiny.cfg`` configuration file from `GitHub repository `__. 
Replace the default values in ``custom_attributes`` with the parameters that follow the ``[yolo]`` titles in the configuration file. - - ``anchors`` is an optional parameter that is not used while inference of the model, but it used in a demo to parse ``Region`` layer output - - ``entry_points`` is a node name list to cut off the model and append the ``Region`` layer with custom attributes specified above. - - -To generate an IR of the YOLOv3 TensorFlow model, run: - -.. code-block:: sh - - mo \ - --input_model /path/to/yolo_v3.pb \ - --transformations_config front/tf/yolo_v3.json \ - --batch 1 \ - --output_dir - - -To generate an IR of the YOLOv3-tiny TensorFlow model, run: - -.. code-block:: sh - - mo \ - --input_model /path/to/yolo_v3_tiny.pb \ - --transformations_config front/tf/yolo_v3_tiny.json \ - --batch 1 \ - --output_dir - - -where: - -* ``batch`` defines shape of model input. In the example, ``batch`` is equal to 1, but you can also specify other integers larger than 1. -* ``transformations_config`` adds missing ``Region`` layers to the model. In the IR, the ``Region`` layer has name ``RegionYolo``. - -.. note:: - - The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the ``RGB<->BGR`` conversion specifying the command-line parameter: ``reverse_input_channels``. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the :doc:`Converting a Model to Intermediate Representation (IR) <../../[legacy]-setting-input-shapes>` guide. - - -OpenVINO toolkit provides a demo that uses YOLOv3 model. Refer to the `Object Detection C++ Demo `__ for more information. - -Converting YOLOv1 and YOLOv2 Models to the IR -############################################# - -Before converting, choose a YOLOv1 or YOLOv2 model version that best suits your task. Download model configuration file and corresponding weight file: - -* From `DarkFlow repository `__ : configuration files are stored in the ``cfg`` directory, links to weight files are given in the ``README.md`` file. The files from this repository are adapted for conversion to TensorFlow using DarkFlow. -* From DarkNet website and repository: configuration files are stored in the ``cfg`` directory of the `repository `__, links to weight files are given on the `YOLOv1 `__ and `YOLOv2 `__ websites. - -To convert DarkNet YOLOv1 and YOLOv2 models to the OpenVINO format, follow these steps: - -1. `Install DarkFlow <#installing-darkflow>`__ -2. `Convert DarkNet YOLOv1 or YOLOv2 model to TensorFlow <#converting-a-darknet-yolov1-or-yolov2-model-to-tensorflow>`__ using DarkFlow -3. `Convert TensorFlow YOLOv1 or YOLOv2 model to IR <#converting-a-tensorflow-yolov1-or-yolov2-model-to-the-ir>`__ - - -Installing DarkFlow -+++++++++++++++++++++ - -You need DarkFlow to convert YOLOv1 and YOLOv2 models to TensorFlow. To install DarkFlow: - -1. Install DarkFlow `required dependencies `__. -2. Clone DarkFlow git repository: - - .. code-block:: sh - - git clone https://github.com/thtrieu/darkflow.git - - -3. Go to the root directory of the cloned repository: - - .. code-block:: sh - - cd darkflow - - -4. Install DarkFlow, using the instructions from the ``README.md`` file in the `DarkFlow repository `__. 
- - -Converting a DarkNet YOLOv1 or YOLOv2 Model to TensorFlow -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -
-To convert a YOLOv1 or YOLOv2 model to TensorFlow, go to the root directory of the cloned DarkFlow repository, place the previously downloaded \*.cfg and \*.weights files in the current directory and run the following command: -
-- For YOLOv1: - - .. code-block:: sh - - python3 flow --model yolov1.cfg --load yolov1.weights --savepb - -
-- For YOLOv2 with the VOC dataset, the ``--labels`` argument should be specified and additional changes in the original exporting script are required. In the `file `__ change line 121 from ``self.offset = 16`` to ``self.offset = 20``. Then run: - - .. code-block:: sh - - python3 flow --model yolov2-voc.cfg --load yolov2-voc.weights --labels voc-labels.txt --savepb - -
-VOC labels can be found at the following `link `__. -
-The general conversion command is: -
-.. code-block:: sh - - python3 flow --model /.cfg --load /.weights --labels --savepb - -
-For YOLOv1, the ``--labels`` argument can be skipped. If the model was successfully converted, you can find the ``.meta`` and ``.pb`` files -in the ``built_graph`` subdirectory of the cloned DarkFlow repository. -
-The ``.pb`` file is a TensorFlow representation of the YOLO model. -
-Converting a TensorFlow YOLOv1 or YOLOv2 Model to the IR -++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -
-The converted TensorFlow YOLO model is missing the ``Region`` layer and its parameters. The original YOLO ``Region`` layer parameters are stored in the configuration ``/.cfg`` file under the ``[region]`` title. -
-To recreate the original model structure, use the corresponding yolo ``.json`` configuration file with custom operations and ``Region`` layer parameters when converting the model to the IR. This file is located in the ``/tools/model_optimizer/extensions/front/tf`` directory. -
-If the chosen model has specific values of these parameters, create another configuration file with custom operations and use it for conversion. -
-To generate the IR of the YOLOv1 model, provide the TensorFlow YOLOv1 or YOLOv2 model to model conversion API with the following parameters: -
-.. code-block:: sh - - mo - --input_model /.pb \ - --batch 1 \ - --scale 255 \ - --transformations_config front/tf/.json - -
-where: -
-* ``batch`` defines the shape of the model input. In the example, ``batch`` is equal to 1, but you can also specify other integers larger than 1. -* ``scale`` specifies the scale factor that input values will be divided by. The model was trained with input values in the range ``[0,1]``. OpenVINO toolkit samples read input images as values in the ``[0,255]`` range, so the scale 255 must be applied. -* ``transformations_config`` adds missing ``Region`` layers to the model. In the IR, the ``Region`` layer has the name ``RegionYolo``. For other applicable parameters, refer to the :doc:`Convert Model from TensorFlow <../[legacy]-convert-tensorflow>` guide. -
-.. note:: - - The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the ``RGB<->BGR`` conversion specifying the command-line parameter: ``reverse_input_channels``. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the :doc:`Converting a Model to Intermediate Representation (IR) <../../[legacy]-setting-input-shapes>` guide.
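For completeness, the same conversion can also be run through the legacy Python API instead of the ``mo`` executable. The snippet below is only an illustrative sketch: it assumes that the legacy ``openvino.tools.mo.convert_model()`` accepts the same ``batch``, ``scale``, ``transformations_config``, and ``reverse_input_channels`` parameters as the ``mo`` command line shown above, and all file paths are placeholders.

.. code-block:: py

   import openvino
   from openvino.tools.mo import convert_model

   # Placeholder paths - substitute the frozen .pb file produced by DarkFlow and
   # the yolo .json configuration file matching your model version.
   frozen_model = "built_graph/yolov2-voc.pb"
   yolo_config = "front/tf/yolo_config.json"  # hypothetical name

   ov_model = convert_model(
       frozen_model,
       batch=1,
       scale=255,
       transformations_config=yolo_config,
       reverse_input_channels=True,
   )

   # The resulting ov.Model can be compiled directly or serialized to an IR.
   core = openvino.Core()
   compiled_model = core.compile_model(ov_model, "AUTO")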
- - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-onnx.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-onnx.rst deleted file mode 100644 index a864a037d488b7..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-onnx.rst +++ /dev/null @@ -1,70 +0,0 @@ -[LEGACY] Converting an ONNX Model -============================================= - -.. meta:: - :description: Learn how to convert a model from the - ONNX format to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting an ONNX Model <../../../../../openvino-workflow/model-preparation/convert-model-onnx>` article. - - -.. note:: ONNX models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <../../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions. - -Converting an ONNX Model -######################## - -The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format. - -.. tab-set:: - - .. tab-item:: Python - :sync: py - - To convert an ONNX model, run ``convert_model()`` method with the path to the ``.onnx`` file: - - .. code-block:: py - :force: - - import openvino - from openvino.tools.mo import convert_model - - core = openvino.Core() - ov_model = convert_model(".onnx") - compiled_model = core.compile_model(ov_model, "AUTO") - - .. important:: - - The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use. - - .. tab-item:: CLI - :sync: cli - - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. - - .. code-block:: sh - - mo --input_model .onnx - - -There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the :doc:`Converting a Model to Intermediate Representation (IR) <../[legacy]-setting-input-shapes>` guide. - -Supported ONNX Layers -##################### - -For the list of supported standard layers, refer to the :doc:`Supported Operations <../../../../../about-openvino/compatibility-and-support/supported-operations>` page. - -Additional Resources -#################### - -See the :doc:`Model Conversion Tutorials <[legacy]-conversion-tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. 
Here are some examples: - -* :doc:`Convert ONNX Faster R-CNN Model <[legacy]-conversion-tutorials/convert-onnx-faster-r-cnn>` -* :doc:`Convert ONNX GPT-2 Model <[legacy]-conversion-tutorials/convert-onnx-gpt-2>` -* :doc:`Convert ONNX Mask R-CNN Model <[legacy]-conversion-tutorials/convert-onnx-mask-r-cnn>` - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-paddle.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-paddle.rst deleted file mode 100644 index 041a14f93547b6..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-paddle.rst +++ /dev/null @@ -1,139 +0,0 @@ -[LEGACY] Converting a PaddlePaddle Model -====================================================== - - -.. meta:: - :description: Learn how to convert a model from the - PaddlePaddle format to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a PaddlePaddle Model <../../../../../openvino-workflow/model-preparation/convert-model-paddle>` article. - - -This page provides general instructions on how to convert a model from a PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on PaddlePaddle model format. - -.. note:: PaddlePaddle models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <../../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions. - -Converting PaddlePaddle Model Inference Format -############################################## - -PaddlePaddle inference model includes ``.pdmodel`` (storing model structure) and ``.pdiparams`` (storing model weight). For how to export PaddlePaddle inference model, please refer to the `Exporting PaddlePaddle Inference Model `__ Chinese guide. - - -To convert a PaddlePaddle model, use the ``mo`` script and specify the path to the input ``.pdmodel`` model file: - -.. code-block:: sh - - mo --input_model .pdmodel - -**For example**, this command converts a yolo v3 PaddlePaddle network to OpenVINO IR network: - -.. 
code-block:: sh - - mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1 - -Converting PaddlePaddle Model From Memory Using Python API -########################################################## - -Model conversion API supports passing the following PaddlePaddle models directly from memory: - -* ``paddle.hapi.model.Model`` -* ``paddle.fluid.dygraph.layers.Layer`` -* ``paddle.fluid.executor.Executor`` - -When you convert certain PaddlePaddle models, you may need to set the ``example_input`` or ``example_output`` parameters first. Below you will find examples that show how to convert aforementioned model formats using the parameters. - -* ``paddle.hapi.model.Model`` - - .. code-block:: py - :force: - - import paddle - from openvino.tools.mo import convert_model - - # create a paddle.hapi.model.Model format model - resnet50 = paddle.vision.models.resnet50() - x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x') - y = paddle.static.InputSpec([1,1000], 'float32', 'y') - - model = paddle.Model(resnet50, x, y) - - # convert to OpenVINO IR format - ov_model = convert_model(model) - - # optional: serialize OpenVINO IR to *.xml & *.bin - from openvino.runtime import serialize - serialize(ov_model, "ov_model.xml", "ov_model.bin") - -* ``paddle.fluid.dygraph.layers.Layer`` - - ``example_input`` is required while ``example_output`` is optional, and accept the following formats: - - ``list`` with tensor(``paddle.Tensor``) or InputSpec(``paddle.static.input.InputSpec``) - - .. code-block:: py - :force: - - import paddle - from openvino.tools.mo import convert_model - - # create a paddle.fluid.dygraph.layers.Layer format model - model = paddle.vision.models.resnet50() - x = paddle.rand([1,3,224,224]) - - # convert to OpenVINO IR format - ov_model = convert_model(model, example_input=[x]) - -* ``paddle.fluid.executor.Executor`` - - ``example_input`` and ``example_output`` are required, and accept the following formats: - - ``list`` or ``tuple`` with variable(``paddle.static.data``) - - .. code-block:: py - :force: - - import paddle - from openvino.tools.mo import convert_model - - paddle.enable_static() - - # create a paddle.fluid.executor.Executor format model - x = paddle.static.data(name="x", shape=[1,3,224]) - y = paddle.static.data(name="y", shape=[1,3,224]) - relu = paddle.nn.ReLU() - sigmoid = paddle.nn.Sigmoid() - y = sigmoid(relu(x)) - - exe = paddle.static.Executor(paddle.CPUPlace()) - exe.run(paddle.static.default_startup_program()) - - # convert to OpenVINO IR format - ov_model = convert_model(exe, example_input=[x], example_output=[y]) - - -.. important:: - - The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use. - - -Supported PaddlePaddle Layers -############################# - -For the list of supported standard layers, refer to the :doc:`Supported Operations <../../../../../about-openvino/compatibility-and-support/supported-operations>` page. - -Frequently Asked Questions (FAQ) -################################ - -The model conversion API displays explanatory messages for typographical errors, incorrectly used options, or other issues. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <../[legacy]-model-optimizer-faq>`, which provides instructions on how to resolve most issues. 
The FAQ also includes links to relevant sections in :doc:`Convert a Model <../../legacy-conversion-api>` to help you understand what went wrong. - -Additional Resources -#################### - -See the :doc:`Model Conversion Tutorials <[legacy]-conversion-tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models. - - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-pytorch.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-pytorch.rst deleted file mode 100644 index 2ab66a49cd3546..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-pytorch.rst +++ /dev/null @@ -1,111 +0,0 @@ -[LEGACY] Converting a PyTorch Model -============================================ - - -.. meta:: - :description: Learn how to convert a model from the - PyTorch format to the OpenVINO Intermediate Representation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a PyTorch Model <../../../../../openvino-workflow/model-preparation/convert-model-pytorch>` article. - -This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO IR format. - -The conversion is a required step to run inference using the OpenVINO API. -It is not required if you choose to work with OpenVINO under the PyTorch framework, -using its :doc:`torch.compile feature <../../../../../openvino-workflow/torch-compile>`. - -Converting a PyTorch model with PyTorch Frontend -############################################################### - -To convert a PyTorch model to the OpenVINO IR format, use the ``convert_model()`` method of the legacy model conversion (MO) Python API, like so: - - -.. code-block:: py - :force: - - import torchvision - import torch - from openvino.tools.mo import convert_model - - model = torchvision.models.resnet50(weights='DEFAULT') - ov_model = convert_model(model) - -The following PyTorch model formats are supported: - -* ``torch.nn.Module`` -* ``torch.jit.ScriptModule`` -* ``torch.jit.ScriptFunction`` - -Converting certain PyTorch models may require model tracing, which needs the ``example_input`` -parameter to be set, for example: - -.. code-block:: py - :force: - - import torchvision - import torch - from openvino.tools.mo import convert_model - - model = torchvision.models.resnet50(weights='DEFAULT') - ov_model = convert_model(model, example_input=torch.randn(1, 3, 100, 100)) - -``example_input`` accepts the following formats: - -* ``openvino.runtime.Tensor`` -* ``torch.Tensor`` -* ``np.ndarray`` -* ``list`` or ``tuple`` with tensors (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``) -* ``dictionary`` where the key is the input name and the value is the tensor (``openvino.runtime.Tensor`` / ``torch.Tensor`` / ``np.ndarray``) - -Sometimes ``convert_model`` will produce inputs of the model with dynamic rank or dynamic type. 
-Such model may not be supported by the hardware chosen for inference. To avoid this issue, -use the ``input`` argument of ``convert_model``. For more information, refer to :doc:`Convert Models Represented as Python Objects <../[legacy]-convert-models-as-python-objects>`. - -.. important:: - - The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use. - -Exporting a PyTorch Model to ONNX Format -######################################## - -It is also possible to export a PyTorch model to ONNX and then convert it to OpenVINO IR. To convert and deploy a PyTorch model this way, follow these steps: - -1. `Export a PyTorch model to ONNX <#exporting-a-pytorch-model-to-onnx-format>`__. -2. :doc:`Convert an ONNX model <[legacy]-convert-onnx>` to produce an optimized :doc:`Intermediate Representation <../../../../openvino-ir-format/operation-sets>` of the model based on the trained network topology, weights, and biases values. - -PyTorch models are defined in Python. To export them, use the ``torch.onnx.export()`` method. The code to -evaluate or test the model is usually provided with its code and can be used for its initialization and export. -The export to ONNX is crucial for this process, but it is covered by PyTorch framework, therefore, It will not be covered here in detail. -For more information, refer to the `Exporting PyTorch models to ONNX format `__ guide. - -To export a PyTorch model, you need to obtain the model as an instance of ``torch.nn.Module`` class and call the ``export`` function. - -.. code-block:: py - :force: - - import torch - - # Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps. - model = SomeModel() - # Evaluate the model to switch some operations from training mode to inference. - model.eval() - # Create dummy input for the model. It will be used to run the model inside export function. - dummy_input = torch.randn(1, 3, 224, 224) - # Call the export function - torch.onnx.export(model, (dummy_input, ), 'model.onnx') - - -Additional Resources -#################### - -See the :doc:`Model Conversion Tutorials <[legacy]-conversion-tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific PyTorch models. Here are some examples: - -* :doc:`Convert PyTorch BERT-NER Model <[legacy]-conversion-tutorials/convert-pytorch-bert-ner>` -* :doc:`Convert PyTorch RCAN Model <[legacy]-conversion-tutorials/convert-pytorch-rcan>` -* :doc:`Convert PyTorch YOLACT Model <[legacy]-conversion-tutorials/convert-pytorch-yolact>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-tensorflow-lite.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-tensorflow-lite.rst deleted file mode 100644 index 6d9256cdf09994..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-tensorflow-lite.rst +++ /dev/null @@ -1,37 +0,0 @@ -[LEGACY] Converting a TensorFlow Lite Model -===================================================== - - -.. meta:: - :description: Learn how to convert a model from a - TensorFlow Lite format to the OpenVINO Intermediate Representation. - -.. 
danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a TensorFlow Lite Model <../../../../../openvino-workflow/model-preparation/convert-model-tensorflow-lite>` article. - -To convert a TensorFlow Lite model, use the ``mo`` script and specify the path to the input ``.tflite`` model file: - -.. code-block:: sh - - mo --input_model .tflite - -TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <../../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions. - -.. important:: - - The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use. - -Supported TensorFlow Lite Layers -################################### - -For the list of supported standard layers, refer to the :doc:`Supported Operations <../../../../../about-openvino/compatibility-and-support/supported-operations>` page. - -Supported TensorFlow Lite Models -################################### - -More than eighty percent of public TensorFlow Lite models are supported from open sources `TensorFlow Hub `__ and `MediaPipe `__. -Unsupported models usually have custom TensorFlow Lite operations. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-tensorflow.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-tensorflow.rst deleted file mode 100644 index 2bcb6fde9b833b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-convert-tensorflow.rst +++ /dev/null @@ -1,359 +0,0 @@ -[LEGACY] Converting a TensorFlow Model -============================================ - -.. meta:: - :description: Learn how to convert a model from a - TensorFlow format to the OpenVINO Intermediate Representation. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a TensorFlow Model <../../../../../openvino-workflow/model-preparation/convert-model-tensorflow>` article. - - -.. note:: TensorFlow models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <../../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` for more details. 
Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions. - -The conversion instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X. - -Converting TensorFlow 1 Models -############################### - -Converting Frozen Model Format -+++++++++++++++++++++++++++++++ - -To convert a TensorFlow model, use the ``*mo*`` script to simply convert a model with a path to the input model *.pb* file: - -.. code-block:: sh - - mo --input_model .pb - - -Converting Non-Frozen Model Formats -+++++++++++++++++++++++++++++++++++ - -There are three ways to store non-frozen TensorFlow models and convert them by model conversion API: - -1. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``. -If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#freezing-custom-models-in-python>`__ section. -To convert the model with the inference graph in ``.pb`` format, run the `mo` script with a path to the checkpoint file: - -.. code-block:: sh - - mo --input_model .pb --input_checkpoint - -To convert the model with the inference graph in ``.pbtxt`` format, run the ``mo`` script with a path to the checkpoint file: - -.. code-block:: sh - - mo --input_model .pbtxt --input_checkpoint --input_model_is_text - - -2. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: ``model_name.meta``, ``model_name.index``, -``model_name.data-00000-of-00001`` (the numbers may vary), and ``checkpoint`` (optional). -To convert such TensorFlow model, run the `mo` script with a path to the MetaGraph ``.meta`` file: - -.. code-block:: sh - - mo --input_meta_graph .meta - - -3. **SavedModel format**. In this case, a model consists of a special directory with a ``.pb`` file -and several subfolders: ``variables``, ``assets``, and ``assets.extra``. For more information about the SavedModel directory, refer to the `README `__ file in the TensorFlow repository. -To convert such TensorFlow model, run the ``mo`` script with a path to the SavedModel directory: - -.. code-block:: sh - - mo --saved_model_dir - - -You can convert TensorFlow 1.x SavedModel format in the environment that has a 1.x or 2.x version of TensorFlow. However, TensorFlow 2.x SavedModel format strictly requires the 2.x version of TensorFlow. -If a model contains operations currently unsupported by OpenVINO, prune these operations by explicit specification of input nodes using the ``--input`` option. -To determine custom input nodes, display a graph of the model in TensorBoard. To generate TensorBoard logs of the graph, use the ``--tensorboard_logs`` option. -TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph of TensorFlow 2.x SavedModel format. - -Freezing Custom Models in Python -++++++++++++++++++++++++++++++++ - -When a network is defined in Python code, you have to create an inference graph file. Graphs are usually built in a form -that allows model training. That means all trainable parameters are represented as variables in the graph. -To be able to use such graph with model conversion API, it should be frozen and dumped to a file with the following code: - -.. 
code-block:: py - :force: - - import tensorflow as tf - from tensorflow.python.framework import graph_io - frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"]) - graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False) - -Where: - -* ``sess`` is the instance of the TensorFlow Session object where the network topology is defined. -* ``["name_of_the_output_node"]`` is the list of output node names in the graph; ``frozen`` graph will include only those nodes from the original ``sess.graph_def`` that are directly or indirectly used to compute given output nodes. The ``'name_of_the_output_node'`` is an example of a possible output node name. You should derive the names based on your own graph. -* ``./`` is the directory where the inference graph file should be generated. -* ``inference_graph.pb`` is the name of the generated inference graph file. -* ``as_text`` specifies whether the generated file should be in human readable text format or binary. - -Converting TensorFlow 2 Models -############################### - -To convert TensorFlow 2 models, ensure that `openvino-dev[tensorflow2]` is installed via `pip`. -TensorFlow 2.X officially supports two model formats: SavedModel and Keras H5 (or HDF5). -Below are the instructions on how to convert each of them. - -SavedModel Format -+++++++++++++++++ - -A model in the SavedModel format consists of a directory with a ``saved_model.pb`` file and two subfolders: ``variables`` and ``assets``. -To convert such a model, run the `mo` script with a path to the SavedModel directory: - -.. code-block:: sh - - mo --saved_model_dir - -TensorFlow 2 SavedModel format strictly requires the 2.x version of TensorFlow installed in the -environment for conversion to the Intermediate Representation (IR). - -If a model contains operations currently unsupported by OpenVINO™, -prune these operations by explicit specification of input nodes using the ``--input`` or ``--output`` -options. To determine custom input nodes, visualize a model graph in the TensorBoard. - -TensorFlow 2 SavedModel format has a specific graph structure due to eager execution. In case of -pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph. - -Since the 2023.0 release, direct pruning of models in SavedModel format is not supported. -It is essential to freeze the model before pruning. Use the following code snippet for model freezing: - -.. code-block:: py - :force: - - import tensorflow as tf - from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 - saved_model_dir = "./saved_model" - imported = tf.saved_model.load(saved_model_dir) - # retrieve the concrete function and freeze - concrete_func = imported.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY] - frozen_func = convert_variables_to_constants_v2(concrete_func, - lower_control_flow=False, - aggressive_inlining=True) - # retrieve GraphDef and save it into .pb format - graph_def = frozen_func.graph.as_graph_def(add_shapes=True) - tf.io.write_graph(graph_def, '.', 'model.pb', as_text=False) - -Keras H5 -++++++++ - -If you have a model in HDF5 format, load the model using TensorFlow 2 and serialize it to -SavedModel format. Here is an example of how to do it: - -.. 
code-block:: py - :force: - - import tensorflow as tf - model = tf.keras.models.load_model('model.h5') - tf.saved_model.save(model,'model') - - -The Keras H5 model with a custom layer has specifics to be converted into SavedModel format. -For example, the model with a custom layer ``CustomLayer`` from ``custom_layer.py`` is converted as follows: - -.. code-block:: py - :force: - - import tensorflow as tf - from custom_layer import CustomLayer - model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) - tf.saved_model.save(model,'model') - - -Then follow the above instructions for the SavedModel format. - -.. note:: - - Do not use other hacks to resave TensorFlow 2 models into TensorFlow 1 formats. - -Command-Line Interface (CLI) Examples Using TensorFlow-Specific Parameters -########################################################################## - -* Launching model conversion for Inception V1 frozen model when model file is a plain text protobuf: - - .. code-block:: sh - - mo --input_model inception_v1.pbtxt --input_model_is_text -b 1 - - -* Launching model conversion for Inception V1 frozen model and dump information about the graph to TensorBoard log dir ``/tmp/log_dir`` - - .. code-block:: sh - - mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir - - -* Launching model conversion for BERT model in the SavedModel format, with three inputs. Specify explicitly the input shapes where the batch size and the sequence length equal 2 and 30 respectively. - - .. code-block:: sh - - mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30] - -Conversion of TensorFlow models from memory using Python API -############################################################ - -Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory. - -* ``tf.keras.Model`` - - .. code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - model = tf.keras.applications.ResNet50(weights="imagenet") - ov_model = convert_model(model) - - -* ``tf.keras.layers.Layer``. Requires setting the "input_shape". - - .. code-block:: py - :force: - - import tensorflow_hub as hub - from openvino.tools.mo import convert_model - - model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5") - ov_model = convert_model(model, input_shape=[-1, 224, 224, 3]) - -* ``tf.Module``. Requires setting the "input_shape". - - .. code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - class MyModule(tf.Module): - def __init__(self, name=None): - super().__init__(name=name) - self.variable1 = tf.Variable(5.0, name="var1") - self.variable2 = tf.Variable(1.0, name="var2") - def __call__(self, x): - return self.variable1 * x + self.variable2 - - model = MyModule(name="simple_module") - ov_model = convert_model(model, input_shape=[-1]) - -* ``tf.compat.v1.Graph`` - - .. code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - with tf.compat.v1.Session() as sess: - inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1') - inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2') - output = tf.nn.relu(inp1 + inp2, name='Relu') - tf.compat.v1.global_variables_initializer() - model = sess.graph - - ov_model = convert_model(model) - -* ``tf.compat.v1.GraphDef`` - - .. 
code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - with tf.compat.v1.Session() as sess: - inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1') - inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2') - output = tf.nn.relu(inp1 + inp2, name='Relu') - tf.compat.v1.global_variables_initializer() - model = sess.graph_def - - ov_model = convert_model(model) - -* ``tf.function`` - - .. code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - @tf.function( - input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32), - tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32)]) - def func(x, y): - return tf.nn.sigmoid(tf.nn.relu(x + y)) - - ov_model = convert_model(func) - -* ``tf.compat.v1.session`` - - .. code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - with tf.compat.v1.Session() as sess: - inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1') - inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2') - output = tf.nn.relu(inp1 + inp2, name='Relu') - tf.compat.v1.global_variables_initializer() - - ov_model = convert_model(sess) - -* ``tf.train.checkpoint`` - - .. code-block:: py - :force: - - import tensorflow as tf - from openvino.tools.mo import convert_model - - model = tf.keras.Model(...) - checkpoint = tf.train.Checkpoint(model) - save_path = checkpoint.save(save_directory) - # ... - checkpoint.restore(save_path) - ov_model = convert_model(checkpoint) - -.. important:: - - The ``convert_model()`` method returns ``ov.Model`` that you can optimize, compile, or save to a file for subsequent use. - -Supported TensorFlow and TensorFlow 2 Keras Layers -################################################## - -For the list of supported standard layers, refer to the :doc:`Supported Operations <../../../../../about-openvino/compatibility-and-support/supported-operations>` page. - -Frequently Asked Questions (FAQ) -################################ - -The model conversion API provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the :doc:`Model Optimizer FAQ <../[legacy]-model-optimizer-faq>`. The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in :doc:`Convert a Model <../../legacy-conversion-api>` to help you understand what went wrong. - -Summary -####### - -In this document, you learned: - -* Basic information about how the model conversion API works with TensorFlow models. -* Which TensorFlow models are supported. -* How to freeze a TensorFlow model. -* How to convert a trained TensorFlow model using model conversion API with both framework-agnostic and TensorFlow-specific command-line parameters. - -Additional Resources -#################### - -See the :doc:`Model Conversion Tutorials <[legacy]-conversion-tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific TensorFlow models. 
Here are some examples: - -* :doc:`Convert TensorFlow EfficientDet Models <[legacy]-conversion-tutorials/convert-tensorflow-efficient-det>` -* :doc:`Convert TensorFlow FaceNet Models <[legacy]-conversion-tutorials/convert-tensorflow-face-net>` -* :doc:`Convert TensorFlow Object Detection API Models <[legacy]-conversion-tutorials/convert-tensorflow-object-detection>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-troubleshooting-reshape-errors.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-troubleshooting-reshape-errors.rst deleted file mode 100644 index 4d5c282a947d1b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-troubleshooting-reshape-errors.rst +++ /dev/null @@ -1,54 +0,0 @@ -[LEGACY] Troubleshooting Reshape Errors -======================================= - - -.. meta:: - :description: In OpenVINO™, you can use several methods to address the issues - of non-reshape-able models and shape collision, which prevent - normal shape propagation. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - -How To Avoid Shape Collision -############################ - -Operation semantics may impose restrictions on input shapes of the operation. -Shape collision during shape propagation may be a sign that new shape does not satisfy the restrictions. -Changing the model input shape may result in intermediate operations shape collision. For example, in the following: - -* The :doc:`Reshape <../../../openvino-ir-format/operation-sets/operation-specs/shape/reshape-1>` operation with a hard-coded output shape value, -* The :doc:`MatMul <../../../openvino-ir-format/operation-sets/operation-specs/matrix/matmul-1>` operation with the ``Const`` second input and this input cannot be resized by spatial dimensions due to operation semantics. - -Model structure and logic should not change significantly after model reshaping. - -* The Global Pooling operation is commonly used to reduce output feature map of classification models output. Having the input of the shape *[N, C, H, W]*, Global Pooling returns the output of the shape *[N, C, 1, 1]*. Model architects usually express Global Pooling with the help of the ``Pooling`` operation with the fixed kernel size *[H, W]*. During spatial reshape, having the input of the shape *[N, C, H1, W1]*, ``Pooling`` with the fixed kernel size *[H, W]* returns the output of the shape *[N, C, H2, W2]*, where *H2* and *W2* are commonly not equal to *1*. It breaks the classification model structure. For example, the public `Inception family models from TensorFlow `__ have this issue. - -* Changing the model input shape may significantly affect its accuracy. For example, Object Detection models from TensorFlow have resizing restrictions by design. To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the ``pipeline.config`` file. - -.. _how-to-fix-non-reshape-able-model: - -How To Fix Non-Reshape-able Model -################################# - -To fix some operators which prevent normal shape propagation: - -* see if the issue can be fixed via changing the values of some operators' input. 
For example, the most common problem of non-reshape-able models is a ``Reshape`` operator with a hard-coded output shape. You can cut-off the hard-coded second input of ``Reshape`` and fill it in with relaxed values. For the following example in the diagram below, the model conversion API command line should read: - - .. code-block:: sh - - mo --input_model path/to/model --input data[8,3,224,224],1:reshaped[2]->[0,-1]` - - - With ``1:reshaped[2]``, it is required to cut the second input (counting from zero, so ``1:`` means the second input) of the operation named ``reshaped`` and replace it with a ``Parameter`` with shape ``[2]``. - With ``->[0 -1]``, this new ``Parameter`` is replaced by a ``Constant`` operator which has the ``[0, -1]`` value. - Since the ``Reshape`` operator has ``0`` and ``-1`` as specific values, it allows propagating shapes freely without losing the intended meaning of ``Reshape``. For more information, see :doc:`the specification <../../../openvino-ir-format/operation-sets/operation-specs/shape/reshape-1>`. - - .. image:: ../../../../assets/images/batch_relaxation.png - -* transform the model conversion on the back phase. For more information, see the :doc:`How to Convert a Model <../legacy-model-optimizer-extensibility>`, -* transform OpenVINO Model during the runtime. For more information, see :doc:`OpenVINO Runtime Transformations <../../../openvino-extensibility/transformation-api>`, -* modify the original model with the help of the original framework. - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility.rst deleted file mode 100644 index 3d2365f45ffe3b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility.rst +++ /dev/null @@ -1,326 +0,0 @@ -Legacy Model Optimizer Extensibility -==================================== - - - -.. toctree:: - :maxdepth: 1 - :hidden: - - legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification - legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions - legacy-model-optimizer-extensibility/[legacy]-extending-model-optimizer-with-caffe-python-layers - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../openvino-extensibility/frontend-extensions>` article. - -This article describes Model Optimizer internals. Altering them may result in application instability, and in case of future changes to the API, lack of backward compatibility. - -.. note:: - If you want to add support for ONNX, TensorFlow Lite, PaddlePaddle or TensorFlow operations, or you are not familiar with other extension alternatives in OpenVINO, read :doc:`this guide <../../openvino-extensibility>` instead. - -.. _model-optimizer-extensibility: - -Model Optimizer extensibility mechanism enables support of new operations and custom transformations to generate the optimized intermediate representation (IR) as described :doc:`here <../../openvino-ir-format/operation-sets>`. 
-This mechanism is a core part of Model Optimizer, as a huge set of examples showing how to add custom logic to support your model. - -There are several cases when the customization is needed: - -* A model contains operation(s) not known for the Model Optimizer, but these operation(s) could be expressed as a combination of supported operations. In this case, a custom transformation should be implemented to replace unsupported operation(s) with supported ones. -* A model contains a sub-graph of operations that can be replaced with a smaller number of operations to get better performance. This example corresponds to so-called *fusing transformations* (e.g., replacing a sub-graph performing the calculation :math:`x/(1.0+e^{-(beta*x)})` with a single operation of type :doc:`Swish <../../openvino-ir-format/operation-sets/operation-specs/activation/swish-4>`. -* A model contains a custom framework operation (the operation that is not a part of an official operation set of the framework) that was developed using the framework extensibility mechanism. In this case, Model Optimizer should know how to handle the operation and generate a corresponding section in an IR for it. - -It is necessary to figure out how Model Optimizer represents a model in a memory and converts it to an IR before -going into details of the Model Optimizer extensibility mechanism. - -.. note:: - All paths in this article are provided relatively to the Model Optimizer installation directory if not stated otherwise. - -.. _mo_model_representation_in_memory: - -============================== -Model Representation in Memory -============================== - -The model can be represented as a directed graph, where nodes are operations and edges correspond to data passing from a -producer operation (node) to a consumer operation (node). - -Model Optimizer uses Python class ``mo.graph.graph.Graph`` instance to represent the computation graph in memory during -the model conversion. This class is inherited from the ``networkx.MultiDiGraph`` class of the standard ``networkx`` Python -library. It provides many convenient methods to traverse and modify the graph. Refer to the ``mo/graph/graph.py`` file for examples. - -Model Optimizer keeps all necessary information about the operation in node attributes. Model Optimizer uses the ``mo.graph.graph.Node`` class defined in the ``mo/graph/graph.py`` file, which is a wrapper on top of a ``networkx`` node attributes -dictionary, and provides many convenient methods to work with the node. For example, the node ``my_node`` attribute with a -name ``my_attr`` can be retrieved from the node with the following code ``my_node.my_attr``, which is equivalent to obtaining -attribute with name ``my_attr`` in the ``graph.node[my_node]`` dictionary. For the class implementation details, refer to the ``mo/graph/graph.py`` file. - -An operation may have several inputs and outputs. For example, operation :doc:`Split <../../openvino-ir-format/operation-sets/operation-specs/movement/split-1>` has -two inputs: data to split and axis to split along, and variable number of outputs depending on a value of attribute -``num_splits``. Each input data to the operation is passed to a specific operation **input port**. An operation produces -the output data from an **output port**. Input and output ports are numbered from 0 independently. 
Model Optimizer uses -classes ``mo.graph.port.Port`` and ``mo.graph.connection.Connection``, which are useful abstraction to perform graph -modifications like nodes connecting/re-connecting and graph traversing. These classes are widely used in the Model -Optimizer code so it is easy to find a lot of usage examples. - -There is no dedicated class corresponding to an edge, so low-level graph manipulation is needed to get access to -edge attributes if needed. Meanwhile, most manipulations with nodes connections should be done with help of the -``mo.graph.connection.Connection`` and ``mo.graph.port.Port`` classes. Thus, low-level graph manipulation is error prone and -is strongly not recommended. - -Further details and examples related to a model representation in memory are provided in the sections below, in a context -for a better explanation. For more information on how to use ports and connections, refer to the :doc:`Graph Traversal and Modification Using Ports and Connections ` article. - -.. _mo_model_conversion_pipeline: - -========================= -Model Conversion Pipeline -========================= - -A model conversion pipeline can be represented with the following diagram: - -.. image:: ../../../assets/images/MO_conversion_pipeline.svg - -Each conversion step is reviewed in details below. - -Model Loading -############# - -Model Optimizer gets a trained model file as an input. The model loader component of Model Optimizer reads a model file -using Python bindings provided with the framework and builds an in-memory representation of a computation graph. There -is a separate loader for each supported framework. These loaders are implemented in the -``extensions/load//loader.py`` files of Model Optimizer. - -.. note:: - Model Optimizer uses a special parser for Caffe models built on top of the ``caffe.proto`` file. In the case of a model loading failure, Model Optimizer throws an error and requests preparation of the parser that can read the model. For more information on how to prepare the custom Caffe parser, refer to the :ref:`question #1 ` in the :doc:`Model Optimizer FAQ `. - -The result of a model loading step is a ``Graph`` object, which can be depicted like in the following example: - -.. image:: ../../../assets/images/MO_graph_after_loader.svg - -Model Optimizer loader saves an operation instance framework description (usually it is a Protobuf message) into a node -attribute usually with a name ``pb`` for each operation of an input model. It is important that this is a -**framework-specific** description of an operation. This means that an operation (e.g. -:doc:`Convolution <../../openvino-ir-format/operation-sets/operation-specs/convolution/convolution-1>` may be represented differently in, for example, Caffe and -TensorFlow frameworks but performs the same calculations from a mathematical point of view. - -In the image above, the **Operation 2** has one input and two outputs. The tensor produced from the output **port 0** is -consumed with the **Operation 5** (the input **port 0**) and **Operation 3** (the input **port 1**). The tensor produced from the -output **port 1** is consumed with the **Operation 4** (the input **port 0**). - -Each edge has two attributes: ``in`` and ``out``. They contain the input port number of the consumer node and the output port -number of the producer node. These attributes describe the fact that nodes are operations consuming some input tensors -and producing some output tensors. 
From the perspective of Model Optimizer, nodes themselves are **black boxes** because -they do not contain required information about the operation they perform. - -Operations Attributes Extracting -################################ - -The next step is to parse framework-dependent operation representation saved in a node attribute and update the node -attributes with the operation specific attributes. There are three options to do this. - -1. The extractor extension approach (recommended way to extract attributes for an operation). Explained in details in the :doc:`Operation Extractor ` article. -2. The legacy approach with a built-in extractor. The ``mo/front//extractor.py`` file (for example, the one for Caffe) defines a dictionary with extractors for specific operation types. A key in the dictionary is a type of an operation to trigger the extracting function for and the value is the function. The function has one parameter – a node to extract attributes from. This is a legacy and non-extensible approach so it should be avoided. This mechanism will be removed in future versions of Model Optimizer. - -The extractors execution order is the following: - -* ``CustomLayersMapping.xml`` (for Caffe models only). -* Model Optimizer extension. -* Built-in Model Optimizer extractor. - -The result of operations attributes extracting step can be depicted like in the following example: - -.. image:: ../../../assets/images/MO_graph_after_extractors.svg - -The only difference in the graph from the previous step is that nodes contain dictionary with extracted attributes and -operation-specific attributes needed for Model Optimizer. However, from this step, Model Optimizer does not -need the original representation of the operation/model and just uses Model Optimizer representation (there are some -peculiar cases in which Model Optimizer still uses the ``pb`` attribute, covered in this -article partially). A detailed list of common node attributes and their values is provided in the -:doc:`Model Optimizer Operation ` article. - -Front Phase -########### - -For legacy reasons, you must specify shapes for all not fully-defined inputs of the model. In contrast, other -machine learning frameworks, like TensorFlow, let you create a model with undefined or partially defined input shapes. -As an example, undefined dimension is marked with an integer value ``-1`` in a TensorFlow model or has some string name -in an ONNX model. - -During the front phase, Model Optimizer knows shape of the model inputs and constants only and does not know shapes -(and even ranks) of the intermediate tensors. But information about shapes may not be needed to implement particular -transformation. For example, the transformation ``extensions/front/TopKNormalize.py`` removes an attribute ``k`` from a -``TopK`` node and adds an input constant with the value ``k``. The transformation is needed to convert a ``TopK`` operation. -It comes from frameworks, where a number of output elements is defined as an attribute of the operation to the -OpenVINO :doc:`TopK <../../openvino-ir-format/operation-sets/operation-specs/sort/top-k-3>` operation semantic, which requires this value to be a separate input. - -It is important to mention that sometimes it seems like transformation cannot be implemented during the front phase -because the actual values of inputs or shapes are needed. In fact, manipulations of shapes or values can be implemented -using operations that are added to the graph. 
Consider the -``extensions/front/onnx/flattenONNX_to_reshape.py`` transformation, which replaces an ONNX -`Flatten `__ operation with a sub-graph of operations performing -the following (when ``axis`` is not equal to 0 and 1): - -1. Calculate a shape of the ``Flatten`` input tensor, using the :doc:`ShapeOf <../../openvino-ir-format/operation-sets/operation-specs/shape/shape-of-3>` operation. -2. Get the first ``axis`` elements from the output of ``Shape`` operation and calculate their product, using the :doc:`ReduceProd <../../openvino-ir-format/operation-sets/operation-specs/reduction/reduce-prod-1>` operation. -3. Concatenate output of the ``ReduceProd`` and constant with the value of ``-1`` (for an explanation of this value refer to the :doc:`Reshape <../../openvino-ir-format/operation-sets/operation-specs/shape/reshape-1>` specification page). -4. Use the concatenated value as the second input to the ``Reshape`` operation. - -It is highly recommended to write shape-agnostic transformations to avoid model reshape-ability issues. For more information related to the reshaping of a model, refer to the :doc:`Using Shape Inference <../../../openvino-workflow/running-inference/changing-input-shape>` guide. - -More information on how to develop front phase transformations and dedicated API description is provided in the -:ref:`Front Phase Transformations `. - -.. _mo_partial_inference: - -Partial Inference -################# - -Model Optimizer performs a partial inference of a model during model conversion. This procedure includes output shapes -calculation of all operations in a model and constant folding (value calculation for constant sub-graphs). The constant -folding is needed for the shape inference because in some cases evaluation of constant sub-graph is needed to calculate -output shapes. For example, the output shape for the :doc:`Reshape <../../openvino-ir-format/operation-sets/operation-specs/shape/reshape-1>` operation may be -defined as a mathematical expression using the :doc:`ShapeOf <../../openvino-ir-format/operation-sets/operation-specs/shape/shape-of-3>` operation output. - -.. note:: - Model Optimizer does not fold sub-graphs starting from the :doc:`ShapeOf <../../openvino-ir-format/operation-sets/operation-specs/shape/shape-of-3>` operation by default because this leads to a model non-reshape-ability (the command-line parameter ``--static_shape`` can override this behavior). For more information related to reshaping of a model, refer to the :doc:`Using Shape Inference <../../../openvino-workflow/running-inference/changing-input-shape>` guide. - -Model Optimizer calculates output shapes for all operations in a model to write them to Intermediate Representation files. - -.. note:: - This is a legacy requirement. Starting with IR version 10, OpenVINO Runtime needs to know shapes of the :doc:`Const <../../openvino-ir-format/operation-sets/operation-specs/infrastructure/constant-1>` and the :doc:`Parameter <../../openvino-ir-format/operation-sets/operation-specs/infrastructure/parameter-1>` operations only. The OpenVINO Runtime calculates output shapes for all operations in a model, using shapes of :doc:`Parameter <../../openvino-ir-format/operation-sets/operation-specs/infrastructure/parameter-1>` and :doc:`Const <../../openvino-ir-format/operation-sets/operation-specs/infrastructure/constant-1>` operations defined with respective operation attributes. - -Model Optimizer inserts **data** nodes to the computation graph before starting the partial inference phase. 
The data node -corresponds to the specific tensor produced with the operation. Each data node contains two attributes: ``shape``, -containing the shape of the tensor, and ``value``, which may contain the actual value of the tensor. The value for a ``value`` -attribute is equal to ``None`` if this tensor value cannot be calculated. This happens in two cases: when a tensor value -depends on the values passed to the :doc:`Parameter <../../openvino-ir-format/operation-sets/operation-specs/infrastructure/parameter-1>` operation of a model, or when -Model Optimizer does not have a value propagation implementation for the operation. - -Before running partial inference, the graph can be depicted like in the following example: - -.. image:: ../../../assets/images/MO_graph_before_partial_inference.svg - -The difference between this graph structure and the graph during the front phase is not only in the data nodes, but also in the -edge attributes. Note that an ``out`` attribute is specified for edges **from operation** nodes only, while an ``in`` -attribute is specified for edges **from data** nodes only. This corresponds to the fact that a tensor (data node) is -produced from a specific output port of an operation and is consumed with a specific input port of an operation. Also, -a unique data node is created for each output port of an operation. The node may be used as an input node for several -operation nodes, similarly to the data node **data2_0**, which is consumed with the input **port 1** of the **Operation 3** and -input **port 0** of the **Operation 5**. - -Now, consider how Model Optimizer performs shape and value propagation. Model Optimizer performs a topological sort of the graph -nodes. An error message is thrown if a graph contains a cycle. Then, shape inference functions are called for -each node in the graph, according to the topological order. Each node of the graph must have an attribute called ``infer`` -with a shape inference function, which is a function with one parameter – an instance of the ``Node`` class. The ``infer`` -attribute is usually set in the operation extractor or when a node is added in some transformation using the Model -Optimizer operation class inherited from the ``mo.ops.Op`` class. For more information on how to specify a shape inference function, -refer to the :doc:`Model Optimizer Operation ` and :doc:`Operation Extractor ` articles. - -A shape inference function should calculate an operation (node) output shape(s) based on input shape(s) and operation -(node) attribute(s) and update ``shape`` and optionally ``value`` attributes of the corresponding data node(s). A simplified -example of the shape infer function for the :doc:`Reshape <../../openvino-ir-format/operation-sets/operation-specs/shape/reshape-1>` operation (the full version is -available in the ``mo/ops/reshape.py`` file): - -.. code-block:: py - :force: - - @staticmethod - def infer(node: Node): - name = node.soft_get('name', node.id) - - input_shape = node.in_port(0).data.get_shape() # get the input tensor shape - new_shape = node.in_port(1).data.get_value() # get the value defining the output tensor shape. This tensor may - # have special values like 0 and -1 - - output_shape = ... 
# calculate output shape without special values like 0 and -1 - - if node.in_port(0).data.get_value() is not None: # if the input value is defined then calculate output value; - # shape will be updated automatically with the value shape - node.out_port(0).data.set_value(node.in_port(0).data.get_value().reshape(output_shape)) - else: # in the opposite case calculate the output shape only - node.out_port(0).data.set_shape(output_shape) - -Methods ``in_port()`` and ``out_port()`` of the ``Node`` class are used to get and set data node attributes. For more information on -how to use them, refer to the :doc:`Graph Traversal and Modification Using Ports and Connections ` article. - -.. note:: - A shape inference function should perform output shape calculation in the original model layout. For example, OpenVINO™ supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as well. Model Optimizer shape inference function calculates output shapes for NHWC Convolutions in NHWC layout, and only during the layout change phase is the shape converted to NCHW. - -.. note:: - There is a legacy approach to read data node attributes, like ``input_shape = op_node.in_node(0).shape``, and to modify data node attributes, like ``op_node.out_node(0).shape = some_value``. This approach is still used in the Model Optimizer code but is not recommended. Instead, use the approach described in the :ref:`Ports `. - -Middle Phase -############ - -The middle phase starts after partial inference. At this phase, a graph contains data nodes, and output shapes of all -operations in the graph have been calculated. Any transformation implemented at this stage must update the ``shape`` -attribute for all newly added operations. It is highly recommended to use the API described in the -:doc:`Graph Traversal and Modification Using Ports and Connections ` because modification of a graph using this API causes automatic re-inference of affected nodes as well as creation of the necessary data nodes. - -More information on how to develop middle transformations and dedicated API description is provided in the -:ref:`Middle Phase Transformations `. - -NHWC to NCHW Layout Change -########################## - -There are several middle transformations responsible for changing the model layout from NHWC to NCHW. These transformations are triggered by default for TensorFlow models, as TensorFlow supports Convolution operations in the NHWC layout. - -This layout change is disabled automatically if the model does not have operations that OpenVINO™ needs to execute in the NCHW layout, for example, Convolutions in NHWC layout. - -For more details on how it works, refer to the source code of the transformations mentioned in the below summary of the process: - -1. Model Optimizer changes output shapes of most operations producing 4D and 5D (four dimensional and five dimensional) tensors as if they were in NHWC layout to NCHW layout: ``nchw_shape = np.array(nhwc_shape)[[0, 3, 1, 2]]`` for 4D and ``nchw_shape = np.array(nhwc_shape)[[0, 4, 1, 2, 3]]`` for 5D. This permutation does not happen for some operations with specific conditions identified during a model conversion. -2. Model Optimizer inserts :doc:`Gather <../../openvino-ir-format/operation-sets/operation-specs/movement/gather-1>` operations into the sub-graph related to shape calculation, in order to perform the shape calculation in the correct layout. -3. 
Model Optimizer inserts :doc:`Transpose <../../openvino-ir-format/operation-sets/operation-specs/movement/transpose-1>` operations for some operations with specific conditions, identified during a model conversion, to produce correct inference results. - -The main transformations responsible for a layout change are: - -* ``extensions/middle/ApplyPermutations.py`` -* ``extensions/middle/InsertLayoutPropagationTransposes.py`` -* ``extensions/middle/MarkSubgraphsWithCorrectLayout.py`` -* ``extensions/middle/ApplyNHWCtoNCHWpermutation.py`` -* ``extensions/middle/LayoutChangeForConstantShapePaths.py`` - -Back Phase -########## - -The back phase starts after the layout change to NCHW. This phase contains mostly the following transformations: - -1. Transformations that should work with a graph in the NCHW layout and thus cannot be implemented in the middle phase. -2. Transformations that replace nodes corresponding to internal Model Optimizer operations with nodes corresponding to the :doc:`opset <../../openvino-ir-format/operation-sets/available-opsets>` operations. -3. Transformations that normalize operations inputs according to the specification. -4. Final optimization transformations. - -A graph structure during the back phase is the same as during the middle phase. There is no difference in writing middle -and back transformations. - -More information on how to develop back transformations and dedicated API description is provided in the -:ref:`Back Phase Transformations `. - -Intermediate Representation Emitting -#################################### - -The last phase of a model conversion is the Intermediate Representation emitting. Model Optimizer performs the following -steps: - -1. Iterates over all operation nodes in the graph and checks that all nodes have the ``type`` attribute set. This attribute defines the operation type and is used in the OpenVINO to instantiate proper operation from the :doc:`opset <../../openvino-ir-format/operation-sets/available-opsets>` specified in the ``version`` attribute of the node. If a node does not have attribute ``type`` or its value is equal to ``None``, Model Optimizer exits with an error. -2. Performs type inference of graph operations similar to the shape inference. Inferred data types are saved to a port attributes in the IR. -3. Performs topological sort of the graph and changes ``id`` attribute of all operation nodes to be sequential integer values starting from 0. -4. Saves all Constants values to the ``.bin`` file. Constants with the same value are shared among different operations. -5. Generates an ``.xml`` file defining a graph structure. The information about operation inputs and outputs are prepared uniformly for all operations regardless of their type. A list of attributes to be saved to the ``.xml`` file is defined with the ``backend_attrs()`` or ``supported_attrs()`` of the ``Op`` class used for a graph node instantiation. For more information on how the operation attributes are saved to XML, refer to the function ``prepare_emit_ir()`` in the ``mo/pipeline/common.py`` file and :doc:`Model Optimizer Operation ` article. 
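The following is a minimal, illustrative sketch of the idea behind steps 1 and 3 of the IR emitting procedure above, not Model Optimizer's actual implementation. It checks that every operation node carries a ``type`` attribute and then assigns sequential integer ``id`` values following a topological sort, using plain ``networkx`` (the library the ``Graph`` class is built on). The three-node graph and its node names are hypothetical.

.. code-block:: py
   :force:

   import networkx as nx

   # Hypothetical three-node graph standing in for a converted model.
   graph = nx.MultiDiGraph()
   graph.add_node('input', type='Parameter')
   graph.add_node('relu', type='ReLU')
   graph.add_node('output', type='Result')
   graph.add_edge('input', 'relu')
   graph.add_edge('relu', 'output')

   # Step 1: every operation node must have a non-empty 'type' attribute.
   for name, attrs in graph.nodes(data=True):
       if attrs.get('type') is None:
           raise RuntimeError(f'Operation "{name}" has no type attribute')

   # Step 3: topological sort, then sequential integer ids starting from 0.
   for new_id, name in enumerate(nx.topological_sort(graph)):
       graph.nodes[name]['id'] = new_id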
- -==================== -Additional Resources -==================== - -* :doc:`Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ <../../openvino-ir-format/operation-sets>` -* :doc:`Converting a Model to Intermediate Representation (IR) ` -* :doc:`OpenVINO Model Representation <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation>` -* :doc:`OpenVINO™ Extensibility Mechanism <../../openvino-extensibility>` -* :doc:`Graph Traversal and Modification Using Ports and Connections ` -* :doc:`Model Optimizer Extensions ` -* :doc:`Extending Model Optimizer with Caffe Python Layers ` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-extending-model-optimizer-with-caffe-python-layers.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-extending-model-optimizer-with-caffe-python-layers.rst deleted file mode 100644 index 4277f68139845b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-extending-model-optimizer-with-caffe-python-layers.rst +++ /dev/null @@ -1,110 +0,0 @@ -[LEGACY] Extending Model Optimizer with Caffe Python Layers -============================================================ - -.. meta:: - :description: Learn how to extract operator attributes in Model Optimizer to - support a custom Caffe operation written only in Python. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../../openvino-extensibility/frontend-extensions>` article. - -This article provides instructions on how to support a custom Caffe operation written only in Python. For example, the -`Faster-R-CNN model `__ implemented in -Caffe contains a custom proposal layer written in Python. The layer is described in the -`Faster-R-CNN prototxt `__ in the following way: - -.. code-block:: sh - - layer { - name: 'proposal' - type: 'Python' - bottom: 'rpn_cls_prob_reshape' - bottom: 'rpn_bbox_pred' - bottom: 'im_info' - top: 'rois' - python_param { - module: 'rpn.proposal_layer' - layer: 'ProposalLayer' - param_str: "'feat_stride': 16" - } - } - - -This article describes only a procedure on how to extract operator attributes in Model Optimizer. The rest of the -operation enabling pipeline and information on how to support other Caffe operations (written in C++) is described in -the :doc:`Customize Model Optimizer <../legacy-model-optimizer-extensibility>` guide. - -======================================== -Writing Extractor for Caffe Python Layer -======================================== - -Custom Caffe Python layers have an attribute ``type`` (defining the type of the operation) equal to ``Python`` and two -mandatory attributes ``module`` and ``layer`` in the ``python_param`` dictionary. The ``module`` defines the Python module name -with the layer implementation, while ``layer`` value is an operation type defined by a user. 
In order to extract -attributes for such an operation it is necessary to implement extractor class inherited from the -``CaffePythonFrontExtractorOp`` class instead of ``FrontExtractorOp`` class, used for standard framework layers. The ``op`` -class attribute value should be set to the ``module + "." + layer`` value so the extractor is triggered for this kind of -operation. - -Below is a simplified example of the extractor for the custom operation Proposal from the mentioned Faster-R-CNN model. -The full code with additional checks can be found `here `__. - -The sample code uses operation ``ProposalOp`` which corresponds to ``Proposal`` operation described in the :doc:`Available Operations Sets <../../../openvino-ir-format/operation-sets/available-opsets>` -page. For a detailed explanation of the extractor, refer to the source code below. - -.. code-block:: py - :force: - - from openvino.tools.mo.ops.proposal import ProposalOp - from openvino.tools.mo.front.extractor import CaffePythonFrontExtractorOp - - - class ProposalPythonFrontExtractor(CaffePythonFrontExtractorOp): - op = 'rpn.proposal_layer.ProposalLayer' # module + "." + layer - enabled = True # extractor is enabled - - @staticmethod - def extract_proposal_params(node, defaults): - param = node.pb.python_param # get the protobuf message representation of the layer attributes - # parse attributes from the layer protobuf message to a Python dictionary - attrs = CaffePythonFrontExtractorOp.parse_param_str(param.param_str) - update_attrs = defaults - - # the operation expects ratio and scale values to be called "ratio" and "scale" while Caffe uses different names - if 'ratios' in attrs: - attrs['ratio'] = attrs['ratios'] - del attrs['ratios'] - if 'scales' in attrs: - attrs['scale'] = attrs['scales'] - del attrs['scales'] - - update_attrs.update(attrs) - ProposalOp.update_node_stat(node, update_attrs) # update the node attributes - - @classmethod - def extract(cls, node): - # define default values for the Proposal layer attributes - defaults = { - 'feat_stride': 16, - 'base_size': 16, - 'min_size': 16, - 'ratio': [0.5, 1, 2], - 'scale': [8, 16, 32], - 'pre_nms_topn': 6000, - 'post_nms_topn': 300, - 'nms_thresh': 0.7 - } - cls.extract_proposal_params(node, defaults) - return cls.enabled - -==================== -Additional Resources -==================== - -* :doc:`Model Optimizer Extensibility <../legacy-model-optimizer-extensibility>` -* :doc:`Graph Traversal and Modification Using Ports and Connections <[legacy]-graph-traversal-and-modification>` -* :doc:`Model Optimizer Extensions <[legacy]-model-optimizer-extensions>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification.rst deleted file mode 100644 index 55b55a77335f2b..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification.rst +++ /dev/null @@ -1,186 +0,0 @@ -[LEGACY] Graph Traversal and Modification -=========================================== - -.. meta:: - :description: Learn about deprecated APIs and the Port and Connection classes - in Model Optimizer used for graph traversal and transformation. - -.. 
danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../../openvino-extensibility/frontend-extensions>` article. - -There are three APIs for graph traversal and transformation used in Model Optimizer: - -1. The API provided with the ``networkx`` Python library for the ``networkx.MultiDiGraph`` class, which is the base class for -the ``mo.graph.graph.Graph`` object. For example, the following methods belong to this API level: - -* ``graph.add_edges_from([list])``, -* ``graph.add_node(x, attrs)``, -* ``graph.out_edges(node_id)`` -* other methods where ``graph`` is an instance of the ``networkx.MultiDiGraph`` class. - -**This is the lowest-level API. Avoid using it in the Model Optimizer transformations**. For more details, refer to the :ref:`Model Representation in Memory ` section. - -2. The API built around the ``mo.graph.graph.Node`` class. The ``Node`` class is the primary class to work with graph nodes -and their attributes. Examples of such methods and functions are: - -* ``node.in_node(y)``, -* ``node.out_node(x)``, -* ``node.get_outputs()``, -* ``node.insert_node_after(n1, y)``, -* ``create_edge(n1, n2)`` - -**Some "Node" class methods are not recommended for use, and some functions defined in ``mo.graph.graph`` have been deprecated**. For more details, refer to the ``mo/graph/graph.py`` file. - -3. The high-level API called Model Optimizer Graph API, which uses ``mo.graph.graph.Graph``, ``mo.graph.port.Port`` and -``mo.graph.connection.Connection`` classes. For example, the following methods belong to this API level: - -* ``node.in_port(x)``, -* ``node.out_port(y)``, -* ``port.get_connection()``, -* ``connection.get_source()``, -* ``connection.set_destination(dest_port)`` - -**This is the recommended API for implementing Model Optimizer transformations and operations**. - -The main benefit of using the Model Optimizer Graph API is that it hides some internal implementation details (the fact that -the graph contains data nodes), provides an API to perform safe and predictable graph manipulations, and adds operation -semantics to the graph. This is achieved by introducing the concepts of ports and connections. - -.. note:: - This article is dedicated to the Model Optimizer Graph API only and does not cover the other two non-recommended APIs. - -.. _mo_intro_ports: - -===== -Ports -===== - -The operation semantics describe how many inputs and outputs an operation has. For example, -:doc:`Parameter <../../../openvino-ir-format/operation-sets/operation-specs/infrastructure/parameter-1>` and :doc:`Const <../../../openvino-ir-format/operation-sets/operation-specs/infrastructure/constant-1>` operations have no -inputs and have one output, the :doc:`ReLU <../../../openvino-ir-format/operation-sets/operation-specs/activation/relu-1>` operation has one input and one output, and the -:doc:`Split <../../../openvino-ir-format/operation-sets/operation-specs/movement/split-1>` operation has two inputs and a variable number of outputs, depending on the value of the -``num_splits`` attribute. 
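As a small illustration of these semantics, the sketch below prints how many input and output ports a node exposes, using the ``in_ports()`` and ``out_ports()`` helpers of the ``Node`` class that are described further in this section. It is only an assumed helper for this article: ``node`` stands for any ``mo.graph.graph.Node`` instance taken from a graph built by the legacy Model Optimizer, and the function name is illustrative.

.. code-block:: py

   def describe_ports(node):
       # "node" is assumed to be an mo.graph.graph.Node instance.
       # in_ports() / out_ports() return dictionaries mapping a port index to a Port object.
       name = node.soft_get('name', node.id)
       print('{}: {} input port(s), {} output port(s)'.format(
           name, len(node.in_ports()), len(node.out_ports())))

For a ReLU node, this would report one input and one output port; for a Const node, no input ports and one output port.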
- -Each operation node in the graph (an instance of the ``Node`` class) has 0 or more input and output ports (instances of -the ``mo.graph.port.Port`` class). The ``Port`` object has several attributes: - -* ``node`` - the instance of the ``Node`` object the port belongs to. -* ``idx`` - the port number. Input and output ports are numbered independently, starting from ``0``. Thus, - :doc:`ReLU <../../../openvino-ir-format/operation-sets/operation-specs/activation/relu-1>` operation has one input port (with index ``0``) and one output port (with index ``0``). -* ``type`` - the type of the port. Could be equal to either ``"in"`` or ``"out"``. -* ``data`` - the object that should be used to get attributes of the corresponding data node. This object has methods ``get_shape()`` / ``set_shape()`` and ``get_value()`` / ``set_value()`` to get/set shape/value of the corresponding data node. For example, ``in_port.data.get_shape()`` returns an input shape of a tensor connected to input port ``in_port`` (``in_port.type == 'in'``), ``out_port.data.get_value()`` returns a value of a tensor produced from output port ``out_port`` (``out_port.type == 'out'``). - -.. note:: - Functions ``get_shape()`` and ``get_value()`` return ``None`` until the partial inference phase. For more information about model conversion phases, refer to the :ref:`Model Conversion Pipeline `. For information about partial inference phase, see the :ref:`Partial Inference `. - -There are several methods of the ``Node`` class to get the instance of a corresponding port: - -* ``in_port(x)`` and ``out_port(x)`` to get the input/output port with number ``x``. -* ``in_ports()`` and ``out_ports()`` to get a dictionary, where key is a port number and the value is the corresponding input/output port. - -Attributes ``in_ports_count`` and ``out_ports_count`` of the ``Op`` class instance define default number of input and output -ports to be created for the ``Node``. However, additional input/output ports can be added using methods -``add_input_port()`` and ``add_output_port()``. Port also can be removed, using the ``delete_input_port()`` and -``delete_output_port()`` methods. - -The ``Port`` class is just an abstraction that works with edges incoming/outgoing to/from a specific ``Node`` instance. For -example, output port with ``idx = 1`` corresponds to the outgoing edge of a node with an attribute ``out = 1``, the input -port with ``idx = 2`` corresponds to the incoming edge of a node with an attribute ``in = 2``. - -Consider the example of a graph part with 4 operation nodes "Op1", "Op2", "Op3", and "Op4" and a number of data nodes -depicted with light green boxes. - -.. image:: ../../../../assets/images/MO_ports_example_1.svg - :scale: 80 % - :align: center - -Operation nodes have input ports (yellow squares) and output ports (light purple squares). Input port may not be -connected. For example, the input **port 2** of node **Op1** does not have incoming edge, while output port always has an -associated data node (after the partial inference when the data nodes are added to the graph), which may have no -consumers. - -Ports can be used to traverse a graph. The method ``get_source()`` of an input port returns an output port producing the -tensor consumed by the input port. It is important that the method works the same during front, middle and back phases of a -model conversion even though the graph structure changes (there are no data nodes in the graph during the front phase). 
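Before moving on to the traversal example below, the following sketch shows how the ``data`` accessor described above is typically used inside a shape inference function. It is a hypothetical helper, not part of Model Optimizer, and it assumes that ``node`` has one input and one output port and that the operation reduces the last axis of its input:

.. code-block:: py

   import numpy as np

   def infer_reduce_last_axis(node):
       # Read the input shape through the input port (available after partial inference).
       input_shape = node.in_port(0).data.get_shape()
       # The output shape is the input shape without the last (reduced) axis.
       node.out_port(0).data.set_shape(input_shape[:-1])
       # If the input value is already known, propagate the reduced value as well.
       input_value = node.in_port(0).data.get_value()   # None until the value is calculated
       if input_value is not None:
           node.out_port(0).data.set_value(np.mean(input_value, axis=-1))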
- -Let's assume that there are 4 instances of ``Node`` object ``op1, op2, op3``, and ``op4`` corresponding to nodes **Op1**, **Op2**, -**Op3**, and **Op4**, respectively. The result of ``op2.in_port(0).get_source()`` and ``op4.in_port(1).get_source()`` is the -same object ``op1.out_port(1)`` of type ``Port``. - -The method ``get_destination()`` of an output port returns the input port of the node consuming this tensor. If there are -multiple consumers of this tensor, the error is raised. The method ``get_destinations()`` of an output port returns a -list of input ports consuming the tensor. - -The method ``disconnect()`` removes a node incoming edge corresponding to the specific input port. The method removes -several edges if it is applied during the front phase for a node output port connected with multiple nodes. - -The method ``port.connect(another_port)`` connects output port ``port`` and input port ``another_port``. The method handles -situations when the graph contains data nodes (middle and back phases) and does not create an edge between two nodes -but also automatically creates data node or reuses existing data node. If the method is used during the front phase and -data nodes do not exist, the method creates edge and properly sets ``in`` and ``out`` edge attributes. - -For example, applying the following two methods to the graph above will result in the graph depicted below: - -.. code-block:: py - :force: - - op4.in_port(1).disconnect() - op3.out_port(0).connect(op4.in_port(1)) - -.. image:: ../../../../assets/images/MO_ports_example_2.svg - :scale: 80 % - :align: center - -.. note:: - For a full list of available methods, refer to the ``Node`` class implementation in the ``mo/graph/graph.py`` and ``Port`` class implementation in the ``mo/graph/port.py`` files. - -=========== -Connections -=========== - -Connection is a concept introduced to easily and reliably perform graph modifications. Connection corresponds to a -link between a source output port with one or more destination input ports or a link between a destination input port -and source output port producing data. So each port is connected with one or more ports with help of a connection. -Model Optimizer uses the ``mo.graph.connection.Connection`` class to represent a connection. - -There is only one ``get_connection()`` method of the ``Port`` class to get the instance of the corresponding ``Connection`` -object. If the port is not connected, the returned value is ``None``. - -For example, the ``op3.out_port(0).get_connection()`` method returns a ``Connection`` object encapsulating edges from node -**Op3** to data node **data_3_0** and two edges from data node **data_3_0** to two ports of the node **Op4**. - -The ``Connection`` class provides methods to get source and destination(s) ports the connection corresponds to: - -* ``connection.get_source()`` - returns an output ``Port`` object producing the tensor. -* ``connection.get_destinations()`` - returns a list of input ``Port`` consuming the data. -* ``connection.get_destination()`` - returns a single input ``Port`` consuming the data. If there are multiple consumers, the exception is raised. - -The ``Connection`` class provides methods to modify a graph by changing a source or destination(s) of a connection. For -example, the function call ``op3.out_port(0).get_connection().set_source(op1.out_port(0))`` changes source port of edges -consuming data from port ``op3.out_port(0)`` to ``op1.out_port(0)``. 
The transformed graph from the sample above is depicted -below: - -.. image:: ../../../../assets/images/MO_connection_example_1.svg - :scale: 80 % - :align: center - -Another example is the ``connection.set_destination(dest_port)`` method. It disconnects ``dest_port`` and all input ports to which -the connection is currently connected and connects the connection source port to ``dest_port``. - -Note that connection works seamlessly during front, middle, and back phases and hides the fact that the graph structure is -different. - -.. note:: - For a full list of available methods, refer to the ``Connection`` class implementation in the ``mo/graph/connection.py`` file. - -==================== -Additional Resources -==================== - -* :doc:`Model Optimizer Extensibility <../legacy-model-optimizer-extensibility>` -* :doc:`Model Optimizer Extensions <[legacy]-model-optimizer-extensions>` -* :doc:`Extending Model Optimizer with Caffe Python Layers <[legacy]-extending-model-optimizer-with-caffe-python-layers>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions.rst deleted file mode 100644 index db252965cb84e9..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions.rst +++ /dev/null @@ -1,60 +0,0 @@ -[LEGACY] Model Optimizer Extensions -===================================== - -.. meta:: - :description: Learn about deprecated extensions, which enable injecting logic - to the model conversion pipeline without changing the Model - Optimizer core code. - -.. toctree:: - :maxdepth: 1 - :hidden: - - [legacy]-model-optimizer-extensions/[legacy]-model-optimizer-operation - [legacy]-model-optimizer-extensions/[legacy]-optimizer-extractor - [legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../../openvino-extensibility/frontend-extensions>` article. - -Model Optimizer extensions enable you to inject some logic to the model conversion pipeline without changing the Model -Optimizer core code. There are three types of the Model Optimizer extensions: - -1. :doc:`Model Optimizer operation <[legacy]-model-optimizer-extensions/[legacy]-model-optimizer-operation>`. -2. A :doc:`framework operation extractor <[legacy]-model-optimizer-extensions/[legacy]-optimizer-extractor>`. -3. A :doc:`model transformation <[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions>`, which can be executed during front, middle or back phase of the model conversion. - -An extension is just a plain text file with a Python code. The file should contain a class (or classes) inherited from -one of extension base classes. Extension files should be saved to a directory with the following structure: - -.. 
code-block:: sh - - .// - ops/ - custom operations - front/ - framework independent front transformations - / - front transformations for models only and extractors for operations - / - front transformations for models only and extractors for operations - ... - middle/ - middle transformations - back/ - back transformations - -Model Optimizer uses the same layout internally to keep built-in extensions. The only exception is that the -``mo/ops/`` directory is also used as a source of the Model Optimizer operations due to historical reasons. - -.. note:: - The name of a root directory with extensions should not be equal to "extensions" because it will result in a name conflict with the built-in Model Optimizer extensions. - -.. note:: - Model Optimizer itself is built by using these extensions, so there is a huge number of examples of their usage in the Model Optimizer code. - -==================== -Additional Resources -==================== - -* :doc:`Model Optimizer Extensibility <../legacy-model-optimizer-extensibility>` -* :doc:`Graph Traversal and Modification Using Ports and Connections <[legacy]-graph-traversal-and-modification>` -* :doc:`Extending Model Optimizer with Caffe Python Layers <[legacy]-extending-model-optimizer-with-caffe-python-layers>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst deleted file mode 100644 index 95f722ee063443..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst +++ /dev/null @@ -1,605 +0,0 @@ -[LEGACY] Graph Transformation Extensions -========================================== - -.. meta:: - :description: Learn about various base classes for front, middle and back phase - transformations applied during model conversion with Model Optimizer. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../../../openvino-extensibility/frontend-extensions>` article. - -Model Optimizer provides various base classes to implement :ref:`Front Phase Transformations `, -:ref:`Middle Phase Transformations `, and :ref:`Back Phase Transformations `. -All classes have the following common class attributes and methods: - -1. The ``enabled`` attribute specifies whether the transformation is enabled or not. The value can be changed during runtime to enable or disable execution of the transformation during a model conversion. Default value is ``True``. -2. The ``id`` attribute specifies a unique transformation string identifier. This transformation identifier can be used to enable (disable) the transformation by setting environment variable ``MO_ENABLED_TRANSFORMS`` (``MO_DISABLED_TRANSFORMS``) with a comma separated list of ``ids``. 
The environment variables override the value of the ``enabled`` attribute of the transformation. Instead of using ``id`` attribute value you can add fully defined class name to ``MO_ENABLED_TRANSFORMS`` (``MO_DISABLED_TRANSFORMS``) variable, ``extensions.back.NonmalizeToNormalizeL2.NormalizeToNormalizeL2`` for example. It is an optional attribute. -3. The ``run_not_recursively`` attribute specifies whether the transformation should be executed in the sub-graphs, for example, body of the :doc:`TensorIterator <../../../../openvino-ir-format/operation-sets/operation-specs/infrastructure/tensor-iterator-1>` and the :doc:`Loop <../../../../openvino-ir-format/operation-sets/operation-specs/infrastructure/loop-5>`. Default value is ``True``. -4. The ``force_clean_up`` attribute specifies whether the graph clean up should be executed after the transformation. The graph cleanup removes nodes of the graph not reachable from the model inputs. Default value is ``False``. -5. The ``force_shape_inference`` attribute specifies whether the nodes marked with ``need_shape_inference`` attribute equal to ``True`` should be re-inferred after the transformation. Model Optimizer sets this attribute automatically for nodes, input(s) of which were changed during the transformation, or you can set this attribute manually in the transformation for the specific nodes. Default value is ``False``. -6. Attribute ``graph_condition`` specifies a list of functions with one parameter -- ``Graph`` object. The transformation is executed if and only if all functions return ``True``. If the attribute is not set, no check is performed. -7. Method ``run_before()`` returns a list of transformation classes which this transformation should be executed before. -8. Method ``run_after()`` returns a list of transformation classes which this transformation should be executed after. - -.. note:: - Some of the transformation types have specific class attributes and methods, which are explained in the corresponding sections of this document. - -Model Optimizer builds a graph of dependencies between registered transformations and executes them in the topological -order. To execute the transformation during a proper model conversion phase, Model Optimizer defines several -anchor transformations that do nothing. All transformations are ordered with respect to these anchor transformations. -The diagram below shows anchor transformations, some of built-in transformations and dependencies between them: - -.. image:: ../../../../../assets/images/MO_transformations_graph.svg - -User-defined transformations are executed after the corresponding ``Start`` and before the corresponding ``Finish`` anchor -transformations by default (if ``run_before()`` and ``run_after()`` methods have not been overridden). - -.. note:: - The ``PreMiddleStart`` and ``PostMiddleStart`` anchors were introduced due to historical reasons to refactor the Model Optimizer pipeline, which initially had a hardcoded order of transformations. - -.. _mo_front_phase_transformations: - -=========================== -Front Phase Transformations -=========================== - -There are several types of a front phase transformation: - -1. :ref:`Pattern-Defined Front Phase Transformations ` triggered for each sub-graph of the original graph isomorphic to the specified pattern. -2. :ref:`Specific Operation Front Phase Transformations ` triggered for the node with a specific ``op`` attribute value. -3. :ref:`Generic Front Phase Transformations `. -4. 
Manually enabled transformation, defined with a JSON configuration file (for TensorFlow, ONNX, and PaddlePaddle models), specified using the ``--transformations_config`` command-line parameter: - - 1. :ref:`Node Name Pattern Front Phase Transformations `. - 2. :ref:`Front Phase Transformations Using Start and End Points `. - 3. :ref:`Generic Front Phase Transformations Enabled with Transformations Configuration File `. - -.. _pattern_defined_front_phase_transformations: - -Pattern-Defined Front Phase Transformations -########################################### - -This type of transformation is implemented using ``mo.front.common.replacement.FrontReplacementSubgraph`` and -``mo.front.common.replacement.FrontReplacementPattern`` as base classes and works as follows: - -1. Define a sub-graph to be matched, using a list of nodes with attributes and edges connecting them (edges may also have attributes). -2. Model Optimizer searches for all sub-graphs of the original graph, isomorphic to the specified sub-graph (pattern). -3. Model Optimizer executes the defined function performing graph transformation for each instance of a matched sub-graph. You can override different functions in the base transformation class so the Model Optimizer works differently: - - 1. The ``replace_sub_graph(self, graph, match)`` override the method. In this case Model Optimizer only executes the overridden function, pass the ``graph`` object and a dictionary describing the matched sub-graph. You are required to write the transformation and connect the newly created nodes to the rest of the graph. - 2. The ``generate_sub_graph(self, graph, match)`` override the method. This case is not recommended for use because it is the most complicated approach. It can be effectively replaced with one of two previous approaches. - -The sub-graph pattern is defined in the ``pattern()`` function. This function should return a dictionary with two keys: -``nodes`` and ``edges``: - -* The value for the ``nodes`` key is a list of tuples with two elements. - - * The first element is an alias name for a node that will be used to define edges between nodes and in the transformation function. - * The second element is a dictionary with attributes. The key is a name of an attribute that should exist in the node. The value for the attribute can be some specific value to match or a function that gets a single parameter - the attribute value from the node. The function should return the result of attribute comparison with a dedicated value. - -* The value for the ``edges`` key is a list of tuples with two or three elements. - - * The first element is the alias name of the node producing a tensor. - * The second element is the alias name of the node consuming the tensor. - * The third element (optional) is the dictionary with expected edge attributes. This dictionary usually contains attributes like ``in`` and ``out``, defining input and output ports. - -Consider the example of a front transformation implemented in the ``extensions/front/Mish_fusion.py`` file performing -fusing of the sub-graph defining the :doc:`Mish <../../../../openvino-ir-format/operation-sets/operation-specs/activation/mish-4>` activation function into a single -operation: - -.. 
code-block:: py - :force: - - from openvino.tools.mo.front.Softplus_fusion import SoftplusFusion - from openvino.tools.mo.ops.activation_ops import Mish - from openvino.tools.mo.front.common.replacement import FrontReplacementSubgraph - from openvino.tools.mo.front.subgraph_matcher import SubgraphMatch - from openvino.tools.mo.graph.graph import Graph, rename_nodes - - - class MishFusion(FrontReplacementSubgraph): - """ - The transformation looks for the pattern with Softplus defining the Mish function: Mish(x) = x * tanh(SoftPlus(x)). - """ - enabled = True # Transformation is enabled. - - def run_after(self): # Run this transformation after "SoftplusFusion" transformation. - return [SoftplusFusion] - - def pattern(self): # Define pattern according to formulae x * tanh(SoftPlus(x)). - return dict( - nodes=[ - ('mul', dict(op='Mul')), - ('tanh', dict(op='Tanh')), - ('softplus', dict(op='SoftPlus')), - ], - edges=[ - ('softplus', 'tanh'), - ('tanh', 'mul'), - ]) - - def replace_sub_graph(self, graph: Graph, match: [dict, SubgraphMatch]): # Entry point for the transformation. - mul = match['mul'] # Get the Node corresponding to matched "mul" node. - mul_name = mul.soft_get('name', mul.id) - softplus = match['softplus'] # Get the Node corresponding to the matched "softplus" node. - - # Determine the input port of Mul which gets the 'input' node output. - input_port_idx = int(mul.in_port(0).get_connection().get_source().node.soft_get('op') == 'Tanh') - - # Check that the same tensor is provided as input to Mul and SoftPlus. - if mul.in_port(input_port_idx).get_source() != softplus.in_port(0).get_source(): - return - - mish = Mish(graph, {}).create_node() # Create Mish operation. - mish.in_port(0).connect(mul.in_port(input_port_idx).get_source()) # Connect input to the Mish. - mul.out_port(0).get_connection().set_source(mish.out_port(0)) # Reconnect outgoing edge from "mul" to Mish. - - # Rename the created Mish operation to have the name of the "mul" node, which produced the value equal to the - # Mish output. - rename_nodes([(mul, mul_name + '/TBR'), (mish, mul_name)]) - -.. _specific_operation_front_phase_transformations: - -Specific Operation Front Phase Transformations -############################################## - -This type of transformation is implemented using ``mo.front.common.replacement.FrontReplacementOp`` as base class and -works as follows: - -1. Define an operation type to trigger the transformation. -2. Model Optimizer searches for all nodes in the graph with the attribute ``op`` equal to the specified value. -3. Model Optimizer executes the defined function performing graph transformation for each instance of a matched node. You can override different functions in the base transformation class and Model Optimizer works differently: - - 1. The ``replace_sub_graph(self, graph, match)`` override method. In this case, Model Optimizer only executes the overridden function. Pass the ``graph`` object and a dictionary with a single key ``op`` with the matched node as value. You are required to write the transformation and connect the newly created nodes to the rest of the graph. - 2. The ``replace_op(self, graph, node)`` override method. In this case, Model Optimizer executes the overridden function. Pass the ``graph`` object and the matched node as ``node`` parameter. If the function returns an ``id`` of some node, then the ``Node`` with this ``id`` is connected to the consumers of the matched node. After applying the transformation, the matched node is removed from the graph. 
- -The ``FrontReplacementOp`` class provides a simpler mechanism to match a single operation with specific value of the ``op`` -(write the ``op`` attribute in the class instead of defining a ``pattern()`` function) attribute and perform the -transformation. - -Consider an example transformation from the ``extensions/front/Pack.py`` file, which replaces ``Pack`` operation from -the TensorFlow: - -.. code-block:: py - :force: - - from openvino.tools.mo.front.common.partial_infer.utils import int64_array - from openvino.tools.mo.front.common.replacement import FrontReplacementOp - from openvino.tools.mo.front.tf.graph_utils import create_op_with_const_inputs - from openvino.tools.mo.graph.graph import Node, Graph, rename_nodes - from openvino.tools.mo.ops.concat import Concat - from openvino.tools.mo.ops.unsqueeze import Unsqueeze - - - class Pack(FrontReplacementOp): - op = "Pack" # Trigger transformation for all nodes in the graph with the op = "Pack" attribute - enabled = True # Transformation is enabled. - - def replace_op(self, graph: Graph, node: Node): # Entry point for the transformation. - # Create a Concat operation with a number of inputs equal to a number of inputs to Pack. - out_node = Concat(graph, {'axis': node.axis, 'in_ports_count': len(node.in_ports())}).create_node() - pack_name = node.soft_get('name', node.id) - - for ind in node.in_ports(): - # Add dimension of size 1 to all inputs of the Pack operation and add them as Concat inputs. - unsqueeze_node = create_op_with_const_inputs(graph, Unsqueeze, {1: int64_array([node.axis])}, - {'name': node.soft_get('name', node.id) + '/Unsqueeze'}) - node.in_port(ind).get_connection().set_destination(unsqueeze_node.in_port(0)) - unsqueeze_node.out_port(0).connect(out_node.in_port(ind)) - - # Rename the created Concat operation to have the name of the "pack" node, which produced the value equal to the - # Concat output. - rename_nodes([(node, pack_name + '/TBR'), (out_node, pack_name)]) - return [out_node.id] # Reconnect the Pack operation consumers to get input from Concat instead. - - -.. _generic_front_phase_transformations: - -Generic Front Phase Transformations -################################### - -Model Optimizer provides a mechanism to implement generic front phase transformation. This type of transformation is -implemented using ``mo.front.common.replacement.FrontReplacementSubgraph`` or -``mo.front.common.replacement.FrontReplacementPattern`` as base classes. Make sure the transformation is enabled before trying to execute it. -Then, Model Optimizer executes the ``find_and_replace_pattern(self, graph)`` method and -provides a ``Graph`` object as an input. - -Consider the example of a generic front transformation from the ``extensions/front/SqueezeNormalize.py`` file performing -normalization of the :doc:`Squeeze <../../../../openvino-ir-format/operation-sets/operation-specs/shape/squeeze-1>` operation. Older version of the operation had a list of -axes to squeeze as an attribute, but now it is a separate input. For backward compatibility, the Model Optimizer -operation supports both semantics. Before IR generation, however, the operation should be normalized according to the -specification. - -.. 
code-block:: py - :force: - - import logging as log - - from openvino.tools.mo.front.common.partial_infer.utils import int64_array - from openvino.tools.mo.front.common.replacement import FrontReplacementPattern - from openvino.tools.mo.graph.graph import Graph - from openvino.tools.mo.ops.const import Const - from openvino.tools.mo.utils.error import Error - - - class SqueezeNormalize(FrontReplacementPattern): - """ - Normalizes inputs of the Squeeze layers. The layers should have two inputs: the input with data and input with the - dimensions to squeeze. If the second input is omitted then all dimensions of size 1 should be removed. - """ - enabled = True # The transformation is enabled. - - def find_and_replace_pattern(self, graph: Graph): # The function is called unconditionally. - for squeeze_node in graph.get_op_nodes(op='Squeeze'): # Iterate over all nodes with op='Squeeze'. - # If the operation has only 1 input node and no 'squeeze_dims' Node attribute, then convert the attribute to - # the operation input. - if len(squeeze_node.in_nodes()) == 1 and squeeze_node.has_valid('squeeze_dims'): - dims_node = Const(graph, {'name': squeeze_node.id + '/Dims', - 'value': int64_array(squeeze_node.squeeze_dims)}).create_node() - squeeze_node.in_port(1).connect(dims_node.out_port(0)) - del squeeze_node['squeeze_dims'] - # If two inputs already exist, that means the operation is already normalized. - elif len(squeeze_node.in_nodes()) == 2: - log.debug('The Squeeze node "{}" is already normalized'.format(squeeze_node.name)) - # In all other cases, raise an error. - else: - raise Error('The Squeeze layer "{}" should either have 2 inputs or one input and an "squeeze_dims" ' - 'attribute'.format(squeeze_node.soft_get('name'))) - -For the details on implementation and how these front phase transformations work, refer to the ``mo/front/common/replacement.py`` -file. - -.. _node_name_pattern_front_phase_transformations: - -Node Name Pattern Front Phase Transformations -############################################# - -TensorFlow uses a mechanism of scope to group related operation nodes. It is a good practice to put nodes performing -particular task into the same scope. This approach divides a graph into logical blocks that are easier to review in the -TensorBoard. The scope, in fact, just defines a common name prefix for the nodes belonging to it. - -For example, Inception topologies contain several types of so-called **Inception blocks**. Some of them are equal to each -other, but located in different places of the network. For example, Inception V4 from the -`TensorFlow-Slim image classification model library `__ has -``Mixed_5b``, ``Mixed_5c`` and ``Mixed_5d`` inception blocks with exactly the same nodes, with the same set of attributes. - -Consider a situation when these Inception blocks are implemented extremely efficiently using a single Inference -Engine operation called ``InceptionBlock`` and these blocks in the model need to be replaced with instances of this operation. -Model Optimizer provides mechanism to trigger the transformation for a sub-graph of operations defined by the node name -regular expressions (scope). In this particular case, some of the patterns are: ``.*InceptionV4/Mixed_5b``, -``.*InceptionV4/Mixed_5c`` and ``.*InceptionV4/Mixed_5d``. Each pattern starts with ``.*``, because the ``InceptionV4`` prefix -is added to all nodes names during a model freeze. 
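Since the scope patterns are ordinary regular expressions matched against node names, their effect can be sanity-checked with plain Python. The node name below is only an assumed example of how a node of a frozen Inception V4 model may be named:

.. code-block:: py

   import re

   # Hypothetical frozen node name; the "InceptionV4" prefix is added during the model freeze.
   node_name = 'InceptionV4/InceptionV4/Mixed_5b/Branch_0/Conv2d_0a_1x1/Conv2D'

   pattern = re.compile(r'.*InceptionV4/Mixed_5b')
   print(bool(pattern.match(node_name)))   # True: the node belongs to the Mixed_5b scope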
- -This type of transformation is implemented using ``mo.front.tf.replacement.FrontReplacementFromConfigFileSubGraph`` as a -base class and works as follows: - -1. Prepare a JSON configuration file template defining node names patterns. -2. Run Model Optimizer with the ``--tensorflow_custom_operations_config_update`` command-line parameter, and Model Optimizer adds information about input and output nodes of the specified sub-graphs. -3. Model Optimizer executes the defined transformation **only** when you specify the path to the configuration file updated in step 2 using the ``--transformations_config`` command-line parameter. - -Consider the following possible configuration file template for the Inception Block transformation: - -.. code-block:: json - - [ - { - "custom_attributes": { - "attr1_key": "attr1_value", - "attr2_key": 123456 - }, - "id": "InceptionBlockTransformation", - "instances": [ - ".*InceptionV4/Mixed_5b", - ".*InceptionV4/Mixed_5c", - ".*InceptionV4/Mixed_5d" - ], - "match_kind": "scope" - } - ] - -The configuration file contains a list of dictionaries. Each dictionary defines one transformation. Each transformation -is defined with several parameters: - -* ``id`` - **(Mandatory)** — is a unique identifier of the transformation. It is used in the Python code that implements the transformation to link the class and the transformation description from the configuration file. -* ``match_kind`` - **(Mandatory)** — is a string that specifies the matching algorithm. For the node name pattern case, the value should be equal to ``scope``. Another possible values are described in the dedicated sections below. -* ``instances`` - **(Mandatory)** — specifies instances of the sub-graph to be matched. It contains a list of node names prefixes patterns for the match kind of the ``scope`` type. -* ``custom_attributes`` - **(Optional)** — is a dictionary with attributes that can be used in the transformation code. - -After running Model Optimizer with additional ``--tensorflow_custom_operations_config_update`` parameter pointing to -the template configuration file, the content of the file should be updated with two new sections ``inputs`` and ``outputs``. -The file content after the update is as follows: - -.. code-block:: json - - [ - { - "id": "InceptionBlockTransformation", - "custom_attributes": { - "attr1_key": "attr1_value", - "attr2_key": 123456 - }, - "instances": [ - ".*InceptionV4/Mixed_5b", - ".*InceptionV4/Mixed_5c", - ".*InceptionV4/Mixed_5d" - ], - "match_kind": "scope", - "inputs": [ - [ - { - "node": "Branch_2/Conv2d_0a_1x1/Conv2D$", - "port": 0 - }, - { - "node": "Branch_3/AvgPool_0a_3x3/AvgPool$", - "port": 0 - }, - { - "node": "Branch_1/Conv2d_0a_1x1/Conv2D$", - "port": 0 - }, - { - "node": "Branch_0/Conv2d_0a_1x1/Conv2D$", - "port": 0 - } - ] - ], - "outputs": [ - { - "node": "concat$", - "port": 0 - } - ] - } - ] - -The value for ``inputs`` key is a list of lists describing input tensors of the sub-graph. Each element of the top-level -list corresponds to one unique input tensor of the sub-graph. Each internal list describes a list of nodes consuming -this tensor and port numbers, where the tensor is consumed. Model Optimizer generates regular expressions for the input -nodes names to uniquely identify them in each instance of the sub-graph, defined by the ``instances``. Denote these nodes -as input nodes of the sub-graph. 
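The structure of the updated configuration file can be inspected with a few lines of plain Python, which may help when reviewing or reordering the ``inputs`` entries described above. The file name in this sketch is an assumption:

.. code-block:: py

   import json

   # "inception_block.json" is a hypothetical name for the updated configuration file.
   with open('inception_block.json') as f:
       descriptions = json.load(f)

   for description in descriptions:
       print(description['id'], description['match_kind'])
       # Each element of "inputs" describes one input tensor of the sub-graph and lists the
       # node name patterns and port numbers consuming it.
       for tensor_consumers in description.get('inputs', []):
           print('  input consumed by:', [(c['node'], c['port']) for c in tensor_consumers])
       for output in description.get('outputs', []):
           print('  output produced by:', (output['node'], output['port']))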
- -In the InceptionV4 topology, the ``InceptionV4/Mixed_5b`` block has four input tensors from outside of the sub-graph, -but all of them are produced by the ``InceptionV4/Mixed_5a/concat`` node. Therefore, the top-level list of the ``inputs`` -contains one list corresponding to this tensor. Four input nodes of the sub-graph consume the tensor produced by -``InceptionV4/Mixed_5a/concat`` node. In this case, all four input nodes consume input tensor into "port 0". - -The order of items in the internal list describing nodes does not matter, but the order of elements in the top-level -list is important. This order defines how Model Optimizer attaches input tensors to a new generated -node if the sub-graph is replaced with a single node. The ``i``-th input node of the sub-graph is obtained using -``match.single_input_node(i)`` call in the sub-graph transformation code. More information about API is given below. If it is -necessary to change the order of input tensors, the configuration file can be edited in the text editor. - -The value for the ``outputs`` key is a list describing nodes of the sub-graph producing tensor, that goes outside of the -sub-graph or does not have child nodes. Denote these nodes as output nodes of the sub-graph. The order of elements in -the list is important. The ``i``-th element of the list describes the ``i``-th output tensor of the sub-graph, which could be -obtained using ``match.output_node(i)`` call. The order of elements can be manually changed in the configuration file. -Model Optimizer uses this order to connect output edges if the sub-graph is replaced with a single node. - -For more examples of this type of transformation, refer to the :doc:`Converting TensorFlow Object Detection API Models <../../legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-object-detection>` guide. - -.. _start_end_points_front_phase_transformations: - -Front Phase Transformations Using Start and End Points -###################################################### - -This type of transformation is implemented using ``mo.front.tf.replacement.FrontReplacementFromConfigFileSubGraph`` as a -base class and works as follows: - -1. Prepare a JSON configuration file that defines the sub-graph to match, using two lists of node names: "start" and "end" nodes. -2. Model Optimizer executes the defined transformation **only** when you specify the path to the configuration file using the ``--transformations_config`` command-line parameter . Model Optimizer performs the following steps to match the sub-graph: - - 1. Starts a graph traversal from every start node following the direction of the graph edges. The search stops in an end node or in the case of a node without consumers. All visited nodes are added to the matched sub-graph. - 2. Starts another graph traversal from each non-start node of the sub-graph, i.e. every node except nodes from the "start" list. In this step, the edges are traversed in the opposite edge direction. All newly visited nodes are added to the matched sub-graph. This step is needed to add nodes required for calculation values of internal nodes of the matched sub-graph. - 3. Checks that all "end" nodes were reached from "start" nodes. If not, it exits with an error. - 4. Checks that there are no :doc:`Parameter <../../../../openvino-ir-format/operation-sets/operation-specs/infrastructure/parameter-1>` operations among added nodes. If they exist, the sub-graph depends on the inputs of the model. 
Such configuration is considered incorrect so Model Optimizer exits with an error. - -This algorithm finds all nodes "between" start and end nodes and nodes needed for calculation of non-input nodes of the -matched sub-graph. - -The example of a JSON configuration file for a transformation with start and end points is -``extensions/front/tf/ssd_support_api_v1.15.json``: - -.. code-block:: json - - [ - { - "custom_attributes": { - "code_type": "caffe.PriorBoxParameter.CENTER_SIZE", - "pad_mode": "caffe.ResizeParameter.CONSTANT", - "resize_mode": "caffe.ResizeParameter.WARP", - "clip_before_nms": false, - "clip_after_nms": true - }, - "id": "ObjectDetectionAPISSDPostprocessorReplacement", - "include_inputs_to_sub_graph": true, - "include_outputs_to_sub_graph": true, - "instances": { - "end_points": [ - "detection_boxes", - "detection_scores", - "num_detections" - ], - "start_points": [ - "Postprocessor/Shape", - "Postprocessor/scale_logits", - "Postprocessor/Tile", - "Postprocessor/Reshape_1", - "Postprocessor/Cast_1" - ] - }, - "match_kind": "points" - } - ] - -The format of the file is similar to the one provided as an example in the -:ref:`Node Name Pattern Front Phase Transformations ` section. The difference is in -the value of the ``match_kind`` parameter, which should be equal to the ``points`` and the format of the ``instances`` parameter, -which should be a dictionary with two keys ``start_points`` and ``end_points``, defining start and end node names -respectively. - -.. note:: - The ``include_inputs_to_sub_graph`` and ``include_outputs_to_sub_graph`` parameters are redundant and should be always equal to ``true``. - -.. note:: - This sub-graph match algorithm has a limitation that each start node must have only one input. Therefore, it is not possible to specify, for example, the :doc:`Convolution <../../../../openvino-ir-format/operation-sets/operation-specs/convolution/convolution-1>` node as input because it has two inputs: data tensor and tensor with weights. - -For other examples of transformations with points, refer to the -:doc:`Converting TensorFlow Object Detection API Models <../../legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-object-detection>` guide. - -.. _generic_transformations_config_front_phase_transformations: - -Generic Front Phase Transformations Enabled with Transformations Configuration File -################################################################################### - -This type of transformation works similarly to the :ref:`Generic Front Phase Transformations ` -but require a JSON configuration file to enable it similarly to -:ref:`Node Name Pattern Front Phase Transformations ` and -:ref:`Front Phase Transformations Using Start and End Points `. - -The base class for this type of transformation is -``mo.front.common.replacement.FrontReplacementFromConfigFileGeneral``. Model Optimizer executes the -``transform_graph(self, graph, replacement_descriptions)`` method and provides the ``Graph`` object and dictionary with values -parsed from the `custom_attributes` attribute of the provided JSON configuration file. - -The example of the configuration file for this type of transformation is ``extensions/front/tf/yolo_v1_tiny.json``: - -.. code-block:: json - - [ - { - "id": "TFYOLO", - "match_kind": "general", - "custom_attributes": { - "classes": 20, - "coords": 4, - "num": 2, - "do_softmax": 0 - } - } - ] - -and the corresponding transformation file is ``./extensions/front/YOLO.py``: - -.. 
code-block:: py - :force: - - from openvino.tools.mo.front.no_op_eraser import NoOpEraser - from openvino.tools.mo.front.standalone_const_eraser import StandaloneConstEraser - from openvino.tools.mo.ops.regionyolo import RegionYoloOp - from openvino.tools.mo.front.tf.replacement import FrontReplacementFromConfigFileGeneral - from openvino.tools.mo.graph.graph import Node, Graph - from openvino.tools.mo.ops.result import Result - from openvino.tools.mo.utils.error import Error - - - class YoloRegionAddon(FrontReplacementFromConfigFileGeneral): - """ - Replaces all Result nodes in graph with YoloRegion->Result nodes chain. - YoloRegion node attributes are taken from configuration file - """ - replacement_id = 'TFYOLO' # The identifier matching the "id" attribute in the JSON file. - - def run_after(self): - return [NoOpEraser, StandaloneConstEraser] - - def transform_graph(self, graph: Graph, replacement_descriptions): - op_outputs = [n for n, d in graph.nodes(data=True) if 'op' in d and d['op'] == 'Result'] - for op_output in op_outputs: - last_node = Node(graph, op_output).in_node(0) - op_params = dict(name=last_node.id + '/YoloRegion', axis=1, end_axis=-1) - op_params.update(replacement_descriptions) - region_layer = RegionYoloOp(graph, op_params) - region_layer_node = region_layer.create_node([last_node]) - # In here, 'axis' from 'dim_attrs' can be removed to avoid permutation from axis = 1 to axis = 2. - region_layer_node.dim_attrs.remove('axis') - Result(graph).create_node([region_layer_node]) - graph.remove_node(op_output) - -The configuration file has only three parameters: the ``id`` identifier of the transformation, ``match_kind`` (which should be equal -to ``general``) and the ``custom_attributes`` dictionary with custom attributes accessible in the transformation. - -.. _mo_middle_phase_transformations: - -============================ -Middle Phase Transformations -============================ - -There are two types of middle phase transformations: - -1. :ref:`Pattern-Defined Middle Phase Transformations ` triggered for each sub-graph of the original graph, isomorphic to the specified pattern. -2. :ref:`Generic Middle Phase Transformations `. - -.. _pattern_defined_middle_phase_transformations: - -Pattern-Defined Middle Phase Transformations -############################################ - -This type of transformation is implemented using ``mo.middle.replacement.MiddleReplacementPattern`` as a base class and -works similarly to the :ref:`Pattern-Defined Front Phase Transformations `. There are two differences: - -1. The transformation entry function name is ``replace_pattern(self, graph, match)``. -2. The pattern defining the graph should contain data nodes because the structure of the graph is different between front and middle phases. For more information about the graph structure changes, refer to the :ref:`Partial Inference `. - -For the example of a pattern-defined middle transformation, refer to the ``extensions/middle/L2NormToNorm.py`` file. - -.. _generic_middle_phase_transformations: - -Generic Middle Phase Transformations -#################################### - -Model Optimizer provides a mechanism to implement generic middle phase transformations. This type of transformation is -implemented using ``mo.middle.replacement.MiddleReplacementPattern`` as a base class and works similarly to the -:ref:`Generic Front Phase Transformations `. The only difference is that the -transformation entry function name is ``find_and_replace_pattern(self, graph: Graph)``. 
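For illustration, a minimal generic middle phase transformation could look like the sketch below. It assumes the legacy ``openvino.tools.mo`` package layout used by the other examples in this article; the class name and its logic (printing the output shapes of Convolution nodes) are purely illustrative and are not part of Model Optimizer:

.. code-block:: py
   :force:

   from openvino.tools.mo.graph.graph import Graph
   from openvino.tools.mo.middle.replacement import MiddleReplacementPattern


   class PrintConvolutionShapes(MiddleReplacementPattern):
       """Illustrative transformation printing the output shape of every Convolution node."""
       enabled = True  # The transformation is enabled.

       def find_and_replace_pattern(self, graph: Graph):  # Entry point for a generic middle transformation.
           for node in graph.get_op_nodes(op='Convolution'):
               # During the middle phase, data nodes exist, so output shapes are available through ports.
               print(node.soft_get('name', node.id), node.out_port(0).data.get_shape())

As with the transformations above, the enabling and ordering of such a class are controlled through the ``enabled`` attribute and the ``run_before()`` / ``run_after()`` methods described at the beginning of this article.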
- -For the example of this transformation, refer to the ``extensions/middle/CheckForCycle.py`` file. - -.. _mo_back_phase_transformations: - -========================== -Back Phase Transformations -========================== - -There are two types of back phase transformations: - -1. :ref:`Pattern-Defined Back Phase Transformations ` triggered for each sub-graph of the original graph, isomorphic to the specified pattern. -2. :ref:`Generic Back Phase Transformations `. - -.. note:: - The graph layout during the back phase is always NCHW. However, during the front and middle phases it could be NHWC if the original model was using it. For more details, refer to :ref:`Model Conversion Pipeline `. - -.. _pattern_defined_back_phase_transformations: - -Pattern-Defined Back Phase Transformations -########################################## - -This type of transformation is implemented using ``mo.back.replacement.MiddleReplacementPattern`` as a base class and -works the same way as :ref:`Pattern-Defined Middle Phase Transformations `. - -For the example of a pattern-defined back transformation, refer to the ``extensions/back/ShufflenetReLUReorder.py`` file. - -.. _generic_back_phase_transformations: - -Generic Back Phase Transformations -################################## - -Model Optimizer provides mechanism to implement generic back phase transformations. This type of transformation is -implemented using ``mo.back.replacement.BackReplacementPattern`` as a base class and works the same way as -:ref:`Generic Middle Phase Transformations `. - -For the example of this transformation, refer to the ``extensions/back/GatherNormalizer.py`` file. - -==================== -Additional Resources -==================== - -* :doc:`Model Optimizer Extensibility <../../legacy-model-optimizer-extensibility>` -* :doc:`Graph Traversal and Modification Using Ports and Connections <../../legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification>` -* :doc:`Model Optimizer Extensions <../[legacy]-model-optimizer-extensions>` -* :doc:`Extending Model Optimizer with Caffe Python Layers <../[legacy]-extending-model-optimizer-with-caffe-python-layers>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-model-optimizer-operation.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-model-optimizer-operation.rst deleted file mode 100644 index 61c43f72dfade9..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-model-optimizer-operation.rst +++ /dev/null @@ -1,110 +0,0 @@ -[LEGACY] Model Optimizer Operation -=================================== - -.. meta:: - :description: Learn about the Op class, that contains operation attributes, - which are set to a node of the graph created during model - conversion with Model Optimizer. - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. 
The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../../../openvino-extensibility/frontend-extensions>` article.
-
-Model Optimizer defines a ``mo.ops.Op`` class (``Op`` is used later in this document for brevity), which is a base class
-for an operation used in the Model Optimizer. The instance of the ``Op`` class serves several purposes:
-
-1. Stores the operation attributes.
-2. Stores the operation shape/value and type inference functions.
-3. Defines operation attributes to be saved to the corresponding IR section.
-4. Contains convenient methods to create a graph node from an ``Op`` object instance and connect it with the existing graph.
-5. Used in the extractors to store parsed attributes and operation specific attributes in the dedicated graph node.
-
-It is important to mention that there is no connection between the instance of the ``Op`` class and the ``Node`` object
-created from it. The ``Op`` class is just a container for attributes describing the operation. Model Optimizer uses the ``Op``
-class during a model conversion to create a node of the graph with attributes copied from the ``Op`` class instance. Graph
-manipulations are performed with graph ``Nodes`` and their attributes and do not involve ``Ops``.
-
-There are a number of common attributes used in the operations. Below is the list of these attributes with descriptions.
-
-* ``id`` — **(Mandatory)** — unique identifier of a node in a graph. Generated automatically, equal to the number of nodes in the graph plus 1 if not specified.
-* ``name`` — **(Mandatory)** — name of the operation. Generated automatically, equal to the ``id`` if not specified.
-* ``type`` — **(Mandatory)** — type of the operation according to the :doc:`opset specification <../../../../openvino-ir-format/operation-sets/available-opsets>`. For the internal Model Optimizer operations, this attribute should be set to ``None``. The model conversion fails if an operation with ``type`` equal to ``None`` comes to the IR emitting phase.
-* ``version`` — **(Mandatory)** — the operation set (opset) name the operation belongs to. If not specified, Model Optimizer sets it equal to ``experimental``. For more information about operation sets, refer to the :doc:`OpenVINO Model Representation <../../../../../openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation>` section.
-* ``op`` — Model Optimizer type of the operation. In many cases, the value of ``type`` is equal to the value of ``op``. However, when Model Optimizer cannot instantiate the opset operation during model loading, it creates an instance of an internal operation. Thus, the attribute ``op`` is used as a type of this internal operation. Later in the pipeline, the node created from an internal operation will be replaced during front, middle or back phase with node(s) created from the opset.
-* ``infer`` — the attribute defines a function calculating output tensor(s) shape and optional value(s). The attribute may be set to ``None`` for the internal Model Optimizer operations used during the front phase only. For more information about the shape inference function, refer to the :ref:`Partial Inference `.
-* ``type_infer`` — the attribute defines a function calculating output tensor(s) data type. If the attribute is not defined, the default function is used. The function checks if the ``data_type`` node attribute is set and then propagates this type to the output tensor from the **port 0**.
Otherwise, it propagates the data type of the tensor coming into the input **port 0** to the output tensor from the **port 0**. -* ``in_ports_count`` — default number of input ports to be created for the operation. Additional ports can be created or redundant ports can be removed using dedicated ``Node`` class API methods. -* ``out_ports_count`` — default number of output ports to be created for the operation. Additional ports can be created or redundant ports can be removed using dedicated ``Node`` class API methods. - -Below is an example of the Model Optimizer class for the :doc:`SoftMax <../../../../openvino-ir-format/operation-sets/operation-specs/activation/softmax-1>` operation from -the ``mo/ops/softmax.py`` file with the comments in code. - -.. code-block:: py - - class Softmax(Op): - # The class attribute defines a name of the operation so the operation class can be obtained using the - # "Op.get_op_class_by_name()" static method - op = 'SoftMax' - - # The operation works as an extractor by default. This is a legacy behavior, currently not recommended for use, - # thus "enabled" class attribute is set to False. The recommended approach is to use dedicated extractor extension. - enabled = False - - def __init__(self, graph: Graph, attrs: dict): - super().__init__(graph, { # The constructor of the base class Op is called with additional default attributes. - 'type': __class__.op, # The operation is from the opset so the type is set to 'SoftMax'. - 'op': __class__.op, # Internal Model Optimizer operation has the same type. - 'version': 'opset1', # The operation corresponds to opset1. - 'infer': Softmax.infer, # Shape inference function is defined below. - 'axis': 1, # Default value for the "axis" attribute of the operation SoftMax. - 'in_ports_count': 1, # The operation has one input. - 'out_ports_count': 1, # The operation produces one output. - }, attrs) - - # The method returns operation specific attributes list. This method is important when implementing - # extractor inherited from CaffePythonFrontExtractorOp class to extract attribute for Caffe Python operation. - # However, it is currently used interchangeably with the "backend_attrs()" method. If the "backend_attrs()" is not used, - # then the "supported_attrs()" is used instead. In this particular case, the operation has just one attribute "axis". - def supported_attrs(self): - return ['axis'] - - @staticmethod - def infer(node: Node): - "some code calculating output shape and values" - -There is a dedicated method called ``backend_attrs()`` defining a list of attributes to be saved to the IR. Consider an -example from the ``mo/ops/pooling.py`` file: - -.. code-block:: py - - def backend_attrs(self): - return [ - ('strides', lambda node: ','.join(map(str, node['stride'][node.spatial_dims]))), - ('kernel', lambda node: ','.join(map(str, node['window'][node.spatial_dims]))), - - ('pads_begin', lambda node: ','.join(map(str, get_backend_pad(node.pad, node.spatial_dims, 0)))), - ('pads_end', lambda node: ','.join(map(str, get_backend_pad(node.pad, node.spatial_dims, 1)))), - - ('pool-method', 'pool_method'), - ('exclude-pad', 'exclude_pad'), - - 'rounding_type', - 'auto_pad', - ] - -The ``backend_attrs()`` function returns a list of records. A record can be of one of the following formats: -1. A string defining the attribute to be saved to the IR. If the value of the attribute is ``None``, the attribute is not saved. Examples of this case are ``rounding_type`` and ``auto_pad``. -2. 
A tuple, where the first element is a string defining the name of the attribute as it will appear in the IR and the second element is a function to produce the value for this attribute. The function gets an instance of the ``Node`` as the only parameter and returns a string with the value to be saved to the IR. Examples of this case are ``strides``, ``kernel``, ``pads_begin`` and ``pads_end``. -3. A tuple, where the first element is a string defining the name of the attribute as it will appear in the IR and the second element is the name of the ``Node`` attribute to get the value from. Examples of this case are ``pool-method`` and ``exclude-pad``. - -==================== -Additional Resources -==================== - -* :doc:`Model Optimizer Extensibility <../../legacy-model-optimizer-extensibility>` -* :doc:`Graph Traversal and Modification Using Ports and Connections <../../legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification>` -* :doc:`Model Optimizer Extensions <../[legacy]-model-optimizer-extensions>` -* :doc:`Extending Model Optimizer with Caffe Python Layers <../[legacy]-extending-model-optimizer-with-caffe-python-layers>` - diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-optimizer-extractor.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-optimizer-extractor.rst deleted file mode 100644 index 5de7ae93f86a7c..00000000000000 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-optimizer-extractor.rst +++ /dev/null @@ -1,113 +0,0 @@ -[LEGACY] Operation Extractor -============================= - -.. meta:: - :description: Learn about a deprecated generic extension in Model Optimizer, - which provides the operation extractor usable for all model - frameworks. - - -.. danger:: - - The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - - This guide describes a deprecated TensorFlow conversion method. The guide on the new and recommended method, using a new frontend, can be found in the :doc:`Frontend Extensions <../../../../openvino-extensibility/frontend-extensions>` article. - -Model Optimizer runs specific extractor for each operation in the model during the model loading. - -There are several types of Model Optimizer extractor extensions: - -1. The generic one, which is described in this article. -2. The special extractor for Caffe models with Python layers. This kind of extractor is described in the :doc:`Extending Model Optimizer with Caffe Python Layers <../[legacy]-extending-model-optimizer-with-caffe-python-layers>` guide. - -Generic extension provides a generic mechanism for the operation extractor applicable for all frameworks. Model Optimizer provides the ``mo.front.extractor.FrontExtractorOp`` class as a base class to implement the extractor. It has the ``extract`` class method, which gets the only parameter ``Node``, which corresponds to the graph node to extract data from. The operation description in the original framework format is stored in the attribute ``pb`` of the node. 
The extractor goal is to parse this attribute and save necessary attributes to the corresponding node of the graph. Consider the extractor for the ``Const`` TensorFlow operation (refer to the ``extensions/front/tf/const_ext.py`` file): - -.. code-block:: py - :force: - - from openvino.tools.mo.front.extractor import FrontExtractorOp - from openvino.tools.mo.front.tf.extractors.utils import tf_dtype_extractor, tf_tensor_shape, tf_tensor_content - from openvino.tools.mo.ops.const import Const - - - class ConstExtractor(FrontExtractorOp): - # The "op" class attribute defines a type of the operation in the framework (in this case it is a TensorFlow), - # for which the extractor should be triggered. - op = 'Const' - enabled = True # The flag that indicates that this extractor is enabled. - - @classmethod - def extract(cls, node): # The entry point of the extractor. - # The `node.pb` attribute stores the TensorFlow representation of the operation, which is a Protobuf message of the - # specific format. In particular, the message contains the attribute called "value" containing the description of - # the constant. The string "pb.attr["value"].tensor" is just a Python binding for Protobuf message parsing. - pb_tensor = node.pb.attr["value"].tensor - # Get the shape of the tensor from the protobuf message, using the helper function "tf_tensor_shape". - shape = tf_tensor_shape(pb_tensor.tensor_shape) - # Create a dictionary with necessary attributes. - attrs = { - 'shape': shape, - # Get the tensor value, using "tf_tensor_content" helper function. - 'value': tf_tensor_content(pb_tensor.dtype, shape, pb_tensor), - # Get the tensor data type, using "tf_dtype_extractor" helper function. - 'data_type': tf_dtype_extractor(pb_tensor.dtype), - } - # Update the node attributes, using default attributes from the "Const" operation and attributes saved to the - # "attrs" dictionary. - Const.update_node_stat(node, attrs) - return cls.enabled - -Consider another example with an extractor of the ``Constant`` ONNX operation (refer to the ``extensions/front/onnx/const_ext.py`` file): - -.. code-block:: py - :force: - - from onnx import numpy_helper - from onnx.numpy_helper import to_array - - from openvino.tools.mo.front.extractor import FrontExtractorOp - from openvino.tools.mo.front.onnx.extractors.utils import onnx_attr - from openvino.tools.mo.ops.const import Const - - - class ConstantExtractor(FrontExtractorOp): - op = 'Constant' - enabled = True - - @classmethod - def extract(cls, node): - # Use "onnx_attr" helper method, which parses the Protobuf representation of the operation saved in the "node". - # Gets the value of the attribute with name "value" as "TensorProto" type (specified with a keyword "t"). - pb_value = onnx_attr(node, 'value', 't') - # Use "numpy_helper.to_array()" ONNX helper method to convert "TensorProto" object to a numpy array. - value = numpy_helper.to_array(pb_value) - - attrs = { - 'data_type': value.dtype, - 'value': value, - } - # Update the node attributes, using default attributes from the "Const" operation and attributes saved to the - # "attrs" dictionary. - Const.update_node_stat(node, attrs) - return cls.enabled - -The extractors for operations from different frameworks work similarly. The only difference is in the helper methods used to parse operation attributes encoded with a framework-specific representation. - -A common practice is to use ``update_node_stat()`` method of the dedicated ``Op`` class to update the node attributes. This method does the following: - -1. 
Sets values for common attributes like ``op``, ``type``, ``infer``, ``in_ports_count``, ``out_ports_count``, ``version`` to values specific to the dedicated operation (``Const`` operation in this case). -2. Uses ``supported_attrs()`` and ``backend_attrs()`` methods, defined in the ``Op`` class to update specific node attribute ``IE``. The IR emitter uses the value stored in the ``IE`` attribute to pre-process attribute values and save them to IR. -3. Optionally sets additional attributes provided to the ``update_node_stat()`` function as a second parameter. Usually these attributes are parsed from the particular instance of the operation. - -.. note:: - Model Optimizer uses numpy arrays to store values and numpy arrays of ``np.int64`` type to store shapes in the graph. - -==================== -Additional Resources -==================== - -* :doc:`Model Optimizer Extensibility <../../legacy-model-optimizer-extensibility>` -* :doc:`Graph Traversal and Modification Using Ports and Connections <../../legacy-model-optimizer-extensibility/[legacy]-graph-traversal-and-modification>` -* :doc:`Model Optimizer Extensions <../[legacy]-model-optimizer-extensions>` -* :doc:`Extending Model Optimizer with Caffe Python Layers <../[legacy]-extending-model-optimizer-with-caffe-python-layers>` - diff --git a/docs/articles_en/documentation/openvino-ecosystem.rst b/docs/articles_en/documentation/openvino-ecosystem.rst index 6735192e95f674..dea2065f6e3f4a 100644 --- a/docs/articles_en/documentation/openvino-ecosystem.rst +++ b/docs/articles_en/documentation/openvino-ecosystem.rst @@ -107,15 +107,6 @@ development process, empowering teams to produce custom AI models at scale. :bdg-link-success:`User Guide ` OpenVINO Tokenizers add text processing operations to OpenVINO. -|hr| - - -| **OpenVINO's Open Model Zoo** -| :bdg-link-dark:`Github ` - :bdg-link-success:`User Guide ` - -Open Model Zoo includes optimized deep learning models and a set of demos to -expedite development of high-performance deep learning inference applications. OpenVINO-based AI projects ########################## diff --git a/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst b/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst index 3959ebefb09a4a..043f05a90e2342 100644 --- a/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst +++ b/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst @@ -735,7 +735,7 @@ How to Use the OpenVINO™ Security Add-on This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable :ref:`set up steps ` and :ref:`installation steps ` before beginning this section. -This document uses the `face-detection-retail-0004 `__ model as an example. +This document uses a face-detection model as an example. The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User. @@ -793,15 +793,8 @@ Step 2: Create a key store and add a certificate to it Step 3: Create the model ------------------------ -This example uses ``curl`` to download the ``face-detection-retail-004`` model from the OpenVINO Model Zoo. If you are behind a firewall, check and set your proxy settings. - -Download a model from the Model Zoo: - -.. 
code-block:: sh - - curl --create-dirs https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_../legacy-features/model-zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_../legacy-features/model-zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/face-detection-retail-0004.xml -o model/face-detection-retail-0004.bin - -The model is downloaded to the ``OVSA_DEV_ARTEFACTS/model`` directory +Download a `model `__ in OpenVINO IR format to +the ``OVSA_DEV_ARTEFACTS/model`` directory. Step 4: Define access control for the model and create a master license for it ------------------------------------------------------------------------------- @@ -811,9 +804,9 @@ Define and enable the model access control and master license: .. code-block:: sh uuid=$(uuidgen) - /opt/ovsa/bin/ovsatool controlAccess -i model/face-detection-retail-0004.xml model/face-detection-retail-0004.bin -n "face detection" -d "face detection retail" -v 0004 -p face_detection_model.dat -m face_detection_model.masterlic -k isv_keystore -g $uuid + /opt/ovsa/bin/ovsatool controlAccess -i model/.xml model/.bin -n "name of the model" -d "detailed name of the model" -p .dat -m .masterlic -k isv_keystore -g $uuid -The Intermediate Representation files for the ``face-detection-retail-0004`` model are encrypted as ``face_detection_model.dat`` and a master license is generated as ``face_detection_model.masterlic`` +The Intermediate Representation files for the model are encrypted as ``.dat`` and a master license is generated as ``.masterlic`` Step 5: Create a Runtime Reference TCB -------------------------------------- @@ -824,7 +817,7 @@ Generate the reference TCB for the runtime .. code-block:: sh - /opt/ovsa/bin/ovsaruntime gen-tcb-signature -n "Face Detect @ Runtime VM" -v "1.0" -f face_detect_runtime_vm.tcb -k isv_keystore + /opt/ovsa/bin/ovsaruntime gen-tcb-signature -n "Face Detect @ Runtime VM" -v "1.0" -f model_inference_runtime_vm.tcb -k isv_keystore Step 6: Publish the access controlled Model and Runtime Reference TCB @@ -856,7 +849,7 @@ Step 7: Receive a User Request .. code-block:: sh cd $OVSA_DEV_ARTEFACTS - /opt/ovsa/bin/ovsatool sale -m face_detection_model.masterlic -k isv_keystore -l 30daylicense.config -t face_detect_runtime_vm.tcb -p custkeystore.csr.crt -c face_detection_model.lic + /opt/ovsa/bin/ovsatool sale -m .masterlic -k isv_keystore -l 30daylicense.config -t detect_runtime_vm.tcb -p custkeystore.csr.crt -c .lic 4. Update the license server database with the license. @@ -864,13 +857,13 @@ Step 7: Receive a User Request .. code-block:: sh cd /opt/ovsa/DB - python3 ovsa_store_customer_lic_cert_db.py ovsa.db $OVSA_DEV_ARTEFACTS/face_detection_model.lic $OVSA_DEV_ARTEFACTS/custkeystore.csr.crt + python3 ovsa_store_customer_lic_cert_db.py ovsa.db $OVSA_DEV_ARTEFACTS/.lic $OVSA_DEV_ARTEFACTS/custkeystore.csr.crt 5. Provide these files to the User: - * ``face_detection_model.dat`` - * ``face_detection_model.lic`` + * ``.dat`` + * ``.lic`` Model User Instructions +++++++++++++++++++++++ @@ -930,14 +923,14 @@ Step 4: Receive and load the access controlled model into the OpenVINO™ Model 1. Receive the model as files named: - * face_detection_model.dat - * face_detection_model.lic + * .dat + * .lic .. code-block:: sh cd $OVSA_RUNTIME_ARTEFACTS - scp username@://OVSA/artefacts/face_detection_model.dat . - scp username@://OVSA/artefacts/face_detection_model.lic . 
+ scp username@://OVSA/artefacts/.dat . + scp username@://OVSA/artefacts/.lic . 2. Prepare the environment: @@ -954,8 +947,8 @@ Step 4: Receive and load the access controlled model into the OpenVINO™ Model .. code-block:: sh cd $OVSA_RUNTIME_ARTEFACTS/../ovms - cp $OVSA_RUNTIME_ARTEFACTS/face_detection_model.dat model/fd/1/. - cp $OVSA_RUNTIME_ARTEFACTS/face_detection_model.lic model/fd/1/. + cp $OVSA_RUNTIME_ARTEFACTS/.dat model/fd/1/. + cp $OVSA_RUNTIME_ARTEFACTS/.lic model/fd/1/. cp $OVSA_RUNTIME_ARTEFACTS/custkeystore model/fd/1/. 4. Rename and edit ``sample.json`` to include the names of the access controlled model artefacts you received from the Model Developer. The file looks like this: @@ -976,7 +969,7 @@ Step 4: Receive and load the access controlled model into the OpenVINO™ Model "config":{ "name":"controlled-access-model", "base_path":"/sampleloader/model/fd", - "custom_loader_options": {"loader_name": "ovsa", "keystore": "custkeystore", "controlled_access_file": "face_detection_model"} + "custom_loader_options": {"loader_name": "ovsa", "keystore": "custkeystore", "controlled_access_file": ""} } } ] @@ -1010,7 +1003,7 @@ Step 6: Prepare to run Inference pip3 install futures==3.1.1 pip3 install tensorflow-serving-api==1.14.0 -3. Copy the ``face_detection.py`` from the example_client in ``/opt/ovsa/example_client`` +3. Copy the ``detection.py`` from the example_client in ``/opt/ovsa/example_client`` .. code-block:: sh @@ -1027,11 +1020,11 @@ Step 6: Prepare to run Inference Step 7: Run Inference --------------------- -Run the ``face_detection.py`` script: +Run the ``detection.py`` script: .. code-block:: sh - python3 face_detection.py --grpc_port 3335 --batch_size 1 --width 300 --height 300 --input_images_dir images --output_dir results --tls --server_cert /var/OVSA/Modelserver/server.pem --client_cert /var/OVSA/Modelserver/client.pem --client_key /var/OVSA/Modelserver/client.key --model_name controlled-access-model + python3 detection.py --grpc_port 3335 --batch_size 1 --width 300 --height 300 --input_images_dir images --output_dir results --tls --server_cert /var/OVSA/Modelserver/server.pem --client_cert /var/OVSA/Modelserver/client.pem --client_key /var/OVSA/Modelserver/client.key --model_name controlled-access-model Summary diff --git a/docs/articles_en/documentation/openvino-extensibility.rst b/docs/articles_en/documentation/openvino-extensibility.rst index 216135009b1806..80fe342b31a6c2 100644 --- a/docs/articles_en/documentation/openvino-extensibility.rst +++ b/docs/articles_en/documentation/openvino-extensibility.rst @@ -32,7 +32,7 @@ Custom operations, which are not included in the list, are not recognized by Ope 1. A new or rarely used regular framework operation is not supported in OpenVINO yet. 2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities. -Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations. This allows plugging in your own implementation for them. OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for Model Optimizer and OpenVINO Runtime. +Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations. This allows plugging in your own implementation for them. 
OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for model conversion API and OpenVINO Runtime. Defining a new custom operation basically consists of two parts: @@ -56,21 +56,9 @@ Mapping from Framework Operation Mapping of custom operation is implemented differently, depending on model format used for import. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), TensorFlow Lite, PaddlePaddle or -TensorFlow formats, then one of the classes from :doc:`Frontend Extension API ` -should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer -or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. -Python API is also available for runtime model import. +TensorFlow formats, then you should use one of the classes from :doc:`Frontend Extension API `, +the application of which is described below. -If you are implementing extensions for new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` -option in Model Optimizer for model conversion, then the extensions should be: - -1. Implemented in C++ only. - -2. Compiled as a separate shared library (see details on how to do this further in this guide). - -Model Optimizer does not support new frontend extensions written in Python API. - -Remaining part of this guide describes application of Frontend Extension API for new frontends. Registering Extensions ###################### @@ -104,7 +92,7 @@ Extensions can be loaded from a code with the ``ov::Core::add_extension`` metho :fragment: [add_extension] -The ``Identity`` is a custom operation class defined in :doc:`Custom Operation Guide `. This is sufficient to enable reading OpenVINO IR which uses the ``Identity`` extension operation emitted by Model Optimizer. In order to load original model directly to the runtime, add a mapping extension: +The ``Identity`` is a custom operation class defined in :doc:`Custom Operation Guide `. This is sufficient to enable reading OpenVINO IR which uses the ``Identity`` extension operation. In order to load original model directly to the runtime, add a mapping extension: .. tab-set:: @@ -133,11 +121,11 @@ Create a Library with Extensions An extension library should be created in the following cases: -* Conversion of a model with custom operations in Model Optimizer. +* Conversion of a model with custom operations in model conversion API * Loading a model with custom operations in a Python application. This applies to both framework model and OpenVINO IR. * Loading models with custom operations in tools that support loading extensions from a library, for example the ``benchmark_app``. -To create an extension library, for example, to load the extensions into Model Optimizer, perform the following: +To create an extension library, perform the following: 1. Create an entry point for extension library. OpenVINO provides the ``OPENVINO_CREATE_EXTENSIONS()`` macro, which allows to define an entry point to a library with OpenVINO Extensions. This macro should have a vector of all OpenVINO Extensions as an argument. 
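As a rough illustration of the "Python application" case listed above, the compiled extension library can be loaded into the runtime before the model is read. This is only a sketch; the file names below are placeholders, not artifacts shipped with OpenVINO:

.. code-block:: py

   import openvino as ov

   core = ov.Core()
   # Load the compiled extension library so that custom operations can be resolved.
   core.add_extension("libcustom_extensions.so")

   model = core.read_model("model_with_custom_ops.xml")
   compiled_model = core.compile_model(model, "CPU")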
diff --git a/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst b/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst index 92914223ac123c..9717c6c8ac4e33 100644 --- a/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst +++ b/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst @@ -40,8 +40,8 @@ There are two options for using the custom operation configuration file: :fragment: [part0] -All OpenVINO samples, except the trivial ``hello_classification``, and most Open -Model Zoo demos feature a dedicated command-line option ``-c`` to load custom kernels. +All OpenVINO samples, except the trivial ``hello_classification``, +feature a dedicated command-line option ``-c`` to load custom kernels. For example, to load custom operations for the classification sample, run the command below: .. code-block:: cpp @@ -49,11 +49,6 @@ For example, to load custom operations for the classification sample, run the co $ ./classification_sample -m /bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU -c /custom_layer_example.xml -.. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - .. _config-file-format: @@ -393,3 +388,7 @@ execution ends. For more information, refer to the `printf Function `__. +Additional Resources +#################### + +* Models in the OpenVINO IR format published on `Hugging Face `__. diff --git a/docs/articles_en/documentation/openvino-extensibility/frontend-extensions.rst b/docs/articles_en/documentation/openvino-extensibility/frontend-extensions.rst index 115f149657821c..08b7c6f6b98018 100644 --- a/docs/articles_en/documentation/openvino-extensibility/frontend-extensions.rst +++ b/docs/articles_en/documentation/openvino-extensibility/frontend-extensions.rst @@ -14,9 +14,6 @@ Refer to :doc:`Introduction to OpenVINO Extension <../openvino-extensibility>` t understand the entire flow. This API is applicable to new frontends only, which exist for ONNX, TensorFlow Lite, PaddlePaddle, and TensorFlow. -If a different model format is used, follow legacy -:doc:`Model Optimizer Extensions <../legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility>` -guide. .. note:: diff --git a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations.rst b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations.rst index 9451fabd6219d8..4b64b2177af361 100644 --- a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations.rst +++ b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations.rst @@ -312,17 +312,11 @@ This step is optional. It modifies the transformation function to a device-speci Result model overview ##################### -Let's explore quantized `TensorFlow implementation of ResNet-50 `__ model. Use `Model Downloader `__ tool to download the ``fp16`` model from `OpenVINO™ Toolkit - Open Model Zoo repository `__: - -.. code-block:: sh - - omz_downloader --name resnet-50-tf --precisions FP16-INT8 - -After that you should quantize model by the `Model Quantizer `__ tool. - -.. 
code-block:: sh
-
-   omz_quantizer --model_dir public/resnet-50-tf --dataset_dir  --precisions=FP16-INT8
+Let's explore the resnet-50-tf model, quantized to ``fp16``, which is a TensorFlow
+implementation of `ResNet-50 `__
+- an image classification model pre-trained on the ImageNet dataset. It was originally
+redistributed in the "Saved model" format and converted to a frozen graph using the
+"tf.graph_util" module.


 Inference
@@ -346,7 +340,7 @@ Result model depends on different factors:


 Information about layer precision is stored in the performance counters that are
-available from the OpenVINO Runtime API. For example, the part of performance counters table for quantized `TensorFlow implementation of ResNet-50 `__ model inference on CPU Plugin looks as follows:
+available from the OpenVINO Runtime API. For example, part of the performance counters table for the resnet-50-tf model inferred on CPU Plugin looks as follows:


 .. list-table::
    :header-rows: 1
diff --git a/docs/articles_en/documentation/openvino-security.rst b/docs/articles_en/documentation/openvino-security.rst
index 99cf13161bf243..b3436b8b6f4914 100644
--- a/docs/articles_en/documentation/openvino-security.rst
+++ b/docs/articles_en/documentation/openvino-security.rst
@@ -69,6 +69,6 @@ Additional Resources
 ####################

 - Intel® Distribution of OpenVINO™ toolkit `home page `__.
-- :doc:`Convert a Model `.
+- :doc:`Convert a Model <../openvino-workflow/model-preparation/convert-model-to-ir>`.
 - :doc:`OpenVINO™ Runtime User Guide <../openvino-workflow/running-inference>`.
 - For more information on Sample Applications, see the :doc:`OpenVINO Samples Overview <../learn-openvino/openvino-samples>`
diff --git a/docs/articles_en/get-started.rst b/docs/articles_en/get-started.rst
index 28a39d3c0a4e84..9b46cc416605f3 100644
--- a/docs/articles_en/get-started.rst
+++ b/docs/articles_en/get-started.rst
@@ -62,14 +62,14 @@ OpenVINO provides a wide array of examples and documentation showing how to work
 OpenVINO Basics
 +++++++++++++++

-Learn the basics of working with models and inference in OpenVINO. Begin with “Hello World” Interactive Tutorials that show how to prepare models, run inference, and retrieve results using the OpenVINO API. Then, explore other examples from the Open Model Zoo and OpenVINO Code Samples that can be adapted for your own application.
+Learn the basics of working with models and inference in OpenVINO. Begin with “Hello World” Interactive Tutorials that show how to prepare models, run inference, and retrieve results using the OpenVINO API. Then, explore OpenVINO Code Samples that can be adapted for your own application.

 .. _interactive-learn-openvino/interactive-tutorials-python:

 Interactive Tutorials - Jupyter Notebooks
 -----------------------------------------

-Start with :doc:`interactive Python ` that show the basics of model inferencing, the OpenVINO API, how to convert models to OpenVINO format, and more.
+Start with :doc:`interactive Python ` tutorials that show the basics of model inference, the OpenVINO API, how to convert models to OpenVINO format, and more.
* `Hello Image Classification `__ - Load an image classification model in OpenVINO and use it to apply a label to an image * `OpenVINO Runtime API Tutorial `__ - Learn the basic Python API for working with models in OpenVINO diff --git a/docs/articles_en/get-started/install-openvino.rst b/docs/articles_en/get-started/install-openvino.rst index be00804faa01d2..9ada7592a91773 100644 --- a/docs/articles_en/get-started/install-openvino.rst +++ b/docs/articles_en/get-started/install-openvino.rst @@ -38,20 +38,7 @@ All currently supported versions are: :doc:`Install OpenVINO GenAI Flavor <../learn-openvino/llm_inference_guide/genai-guide>` and :doc:`Run LLMs with OpenVINO GenAI Flavor <../learn-openvino/llm_inference_guide/genai-guide>`. -.. dropdown:: Deprecation of OpenVINO™ Development Tools Package - - The OpenVINO™ Development Tools package has been deprecated and removed from the default - installation options. For new projects, the OpenVINO runtime package now includes - all necessary components. - - The OpenVINO Development Tools is still available for older versions of OpenVINO, - as well as the current one, from the GitHub repository and PyPI. :doc:`Learn more <../documentation/legacy-features/install-dev-tools>`. - .. dropdown:: Building OpenVINO from Source OpenVINO Toolkit source files are available on GitHub as open source. If you want to build your own version of OpenVINO for your platform, follow the `OpenVINO Build Instructions `__. - - - - diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst index 20965f2f22d095..77b23ca9b2d6a4 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst @@ -277,4 +277,4 @@ Additional Resources * Converting models for use with OpenVINO™: :doc:`Convert a Model <../../../openvino-workflow/model-preparation>` * Writing your own OpenVINO™ applications: :doc:`OpenVINO™ Runtime User Guide <../../../openvino-workflow/running-inference>` * Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview <../../../learn-openvino/openvino-samples>` -* Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models <../../../documentation/legacy-features/model-zoo>` +* Pre-trained deep learning models on `Hugging Face `__. 
diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst index e4bff378106122..b02d7f4f1984fc 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst @@ -190,4 +190,4 @@ Additional Resources * :doc:`Convert models for use with OpenVINO™ <../../../openvino-workflow/model-preparation/convert-model-to-ir>` * :doc:`Write your own OpenVINO™ applications <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` * Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview <../../../learn-openvino/openvino-samples>` -* Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models <../../../documentation/legacy-features/model-zoo>` +* Pre-trained deep learning models on `Hugging Face `__ diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst index 9db280ec81472e..bdcd89d6b195b1 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst @@ -213,4 +213,4 @@ Additional Resources * :doc:`Convert models for use with OpenVINO™ <../../../openvino-workflow/model-preparation/convert-model-to-ir>` * :doc:`Write your own OpenVINO™ applications <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` * Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview <../../../learn-openvino/openvino-samples>` -* Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models <../../../documentation/legacy-features/model-zoo>` +* Pre-trained deep learning models on `Hugging Face `__. diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst b/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst index b1710f3bb358e8..612a873e4ff5ed 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst @@ -59,14 +59,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things: * Learn more about :doc:`OpenVINO Workflow <../../../openvino-workflow>`. * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. -* See pre-trained deep learning models in our - :doc:`Open Model Zoo <../../../documentation/legacy-features/model-zoo>`. - - .. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - +* See pre-trained deep learning models on `Hugging Face `__. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. * See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. * Check out the OpenVINO `product home page `__. 
diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-conda.rst b/docs/articles_en/get-started/install-openvino/install-openvino-conda.rst index d1392d3f46a513..df3c8c7e0dc53b 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-conda.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-conda.rst @@ -108,7 +108,6 @@ components by using: - ``libopenvino-pytorch-frontend`` - ``libopenvino-tensorflow-frontend`` - ``libopenvino-tensorflow-lite-frontend`` -- ``libopenvino-dev`` - ``libopenvino-python`` - ``libopenvino-arm-cpu-plugin`` diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst b/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst index af9fe85528ca5d..6d739b350f5b38 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst @@ -81,13 +81,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things: * Learn more about :doc:`OpenVINO Workflow <../../../openvino-workflow>`. * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. -* See pre-trained deep learning models in our :doc:`Open Model Zoo <../../../documentation/legacy-features/model-zoo>`. - - .. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - +* See pre-trained deep learning models on `Hugging Face `__. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. * See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. * Check out the OpenVINO `product home page `__ . diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst b/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst index 970bb47a095d5b..fc413f194a1e63 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst @@ -190,13 +190,7 @@ You can also try the following things: * Learn more about :doc:`OpenVINO Workflow <../../../openvino-workflow>`. * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. -* See pre-trained deep learning models in our :doc:`Open Model Zoo <../../../documentation/legacy-features/model-zoo>`. - - .. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - +* See pre-trained deep learning models on `Hugging Face `__. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. * See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. * Take a glance at the OpenVINO `product home page `__ . 
diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-zypper.rst b/docs/articles_en/get-started/install-openvino/install-openvino-zypper.rst index 127b26cac0590f..bc589dfdb48a8b 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-zypper.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-zypper.rst @@ -142,13 +142,7 @@ You can also try the following things: * Learn more about :doc:`OpenVINO Workflow <../../../openvino-workflow>`. * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. -* See pre-trained deep learning models in our :doc:`Open Model Zoo <../../../documentation/legacy-features/model-zoo>`. - - .. important:: - - Due to the deprecation of Open Model Zoo, models in the OpenVINO IR format are now - published on `Hugging Face `__. - +* See pre-trained deep learning models on `Hugging Face `__. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. * See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. * Take a glance at the OpenVINO `product home page `__ . diff --git a/docs/articles_en/learn-openvino/interactive-tutorials-python/notebooks-installation.rst b/docs/articles_en/learn-openvino/interactive-tutorials-python/notebooks-installation.rst index eb02caa06852fd..ba7859a0c9f5d1 100644 --- a/docs/articles_en/learn-openvino/interactive-tutorials-python/notebooks-installation.rst +++ b/docs/articles_en/learn-openvino/interactive-tutorials-python/notebooks-installation.rst @@ -312,8 +312,6 @@ Installing notebooks 1. **Create a Virtual Environment** - If you already have installed *openvino-dev*, you may skip this step and proceed with the next one. - .. code-block:: sh python -m venv openvino_env @@ -364,8 +362,6 @@ Installing notebooks 1. **Create a Virtual Environment** - If you already have installed *openvino-dev*, you may skip this step and proceed with the next one. - .. code-block:: sh python3 -m venv openvino_env @@ -415,8 +411,6 @@ Installing notebooks 1. **Create a Virtual Environment** - If you already have installed *openvino-dev*, you may skip this step and proceed with the next one. - .. code-block:: sh python3 -m venv openvino_env diff --git a/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst b/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst index 390fe00605f2c6..8ab8a43031ca39 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst @@ -30,7 +30,7 @@ Basic Usage The benchmarking application works with models in the OpenVINO IR (``model.xml`` and ``model.bin``) and ONNX (``model.onnx``) formats. - Make sure to :doc:`convert your models <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` + Make sure to :doc:`convert your models <../../openvino-workflow/model-preparation/convert-model-to-ir>` if necessary. To run benchmarking with default options on a model, use the following command: @@ -56,7 +56,7 @@ Basic Usage The benchmarking application works with models in the OpenVINO IR, TensorFlow, TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If you need it, - OpenVINO also allows you to :doc:`convert your models <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>`. 
+ OpenVINO also allows you to :doc:`convert your models <../../openvino-workflow/model-preparation/convert-model-to-ir>`. To run benchmarking with default options on a model, use the following command: @@ -937,4 +937,4 @@ Additional Resources - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` diff --git a/docs/articles_en/learn-openvino/openvino-samples/bert-benchmark.rst b/docs/articles_en/learn-openvino/openvino-samples/bert-benchmark.rst index 92f6a410219f43..13f18fc3272b34 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/bert-benchmark.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/bert-benchmark.rst @@ -7,8 +7,7 @@ Bert Benchmark Python Sample This sample demonstrates how to estimate performance of a Bert model using Asynchronous -Inference Request API. Unlike `demos `__ -this sample does not have +Inference Request API. This sample does not have configurable command line arguments. Feel free to modify sample's source code to try out different options. @@ -64,5 +63,5 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Bert Benchmark Python Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst b/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst index f8222e495c7387..7a9a7d449d628d 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst @@ -93,11 +93,11 @@ To run the sample, you need to specify a model and an image: to manually rearrange the default channels order in the sample or demo application or reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about - the argument, refer to **When to Reverse Input Channels** section of - :doc:`Embedding Preprocessing Computation <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes>`. + the argument, refer to the **Color Conversion** section of + :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) - using the :doc:`model conversion API <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>`. + using the :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`. - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. - The sample supports NCHW model layout only. 
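The color conversion mentioned in the note above can also be expressed with the Preprocessing API at runtime. The following is only a sketch; the model path is a placeholder:

.. code-block:: py

   import openvino as ov
   from openvino.preprocess import ColorFormat, PrePostProcessor

   core = ov.Core()
   model = core.read_model("model.xml")  # placeholder path

   ppp = PrePostProcessor(model)
   # The sample feeds BGR images, while the model was trained on RGB input.
   ppp.input().tensor().set_color_format(ColorFormat.BGR)
   ppp.input().preprocess().convert_color(ColorFormat.RGB)
   model = ppp.build()

   compiled_model = core.compile_model(model, "CPU")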
@@ -257,7 +257,7 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `OpenVINO Runtime C API `__ - `Hello Classification Python Sample on Github `__ - `Hello Classification C++ Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst b/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst index 19219070cbfbe2..3d1c069e2c8cb1 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst @@ -95,11 +95,11 @@ the following command, you can convert an ordinary image to an uncompressed NV12 - By default, this sample expects that model input has BGR channels order. If you trained your model to work with RGB order, you need to reconvert your model using model conversion API with ``reverse_input_channels`` argument - specified. For more information about the argument, refer to **When to Reverse - Input Channels** section of :doc:`Embedding Preprocessing Computation <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes>`. + specified. For more information about the argument, refer to the + **Color Conversion** section of :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) - using the :doc:`model conversion API <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>`. + using the :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`. - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. Example @@ -208,7 +208,7 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `API Reference `__ - `Hello NV12 Input Classification C++ Sample on Github `__ - `Hello NV12 Input Classification C Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/hello-reshape-ssd.rst b/docs/articles_en/learn-openvino/openvino-samples/hello-reshape-ssd.rst index 23de8eb1979824..0e929bb5ed2701 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/hello-reshape-ssd.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/hello-reshape-ssd.rst @@ -14,8 +14,8 @@ using the sample, refer to the following requirements: - Models with only one input and output are supported. - The sample accepts any file format supported by ``core.read_model``. 
-- The sample has been validated with: `person-detection-retail-0013 `__ - models and the NCHW layout format. +- The sample has been validated with the person-detection-retail-0013 + model and the NCHW layout format. - To build the sample, use instructions available at :ref:`Build the Sample Applications ` section in "Get Started with Samples" guide. @@ -82,12 +82,12 @@ To run the sample, you need to specify a model and an image: order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using model conversion API with ``reverse_input_channels`` - argument specified. For more information about the argument, refer to - **When to Reverse Input Channels** section of - :doc:`Embedding Preprocessing Computation <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes>`. + argument specified. For more information about the argument, refer to the + **Color Conversion** section of + :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) - using :doc:`model conversion API <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>`. + using :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`. - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. Example @@ -204,7 +204,7 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Hello Reshape SSD Python Sample on Github `__ - `Hello Reshape SSD C++ Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/image-classification-async.rst b/docs/articles_en/learn-openvino/openvino-samples/image-classification-async.rst index b112452e932c72..d88b950463210d 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/image-classification-async.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/image-classification-async.rst @@ -129,9 +129,9 @@ To run the sample, you need to specify a model and an image: .. note:: - - By default, OpenVINO™ Toolkit Samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of :doc:`Embedding Preprocessing Computation <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes>`. + - By default, OpenVINO™ Toolkit Samples and demos expect input with BGR channels order. 
If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about the argument, refer to the **Color Conversion** section of :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. - - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using :doc:`model conversion API <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>`. + - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`. - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. @@ -326,6 +326,6 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO™ Toolkit Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Image Classification Async Python Sample on Github `__ - `Image Classification Async C++ Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/model-creation.rst b/docs/articles_en/learn-openvino/openvino-samples/model-creation.rst index e0e3034c225763..ad01cee53a69b1 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/model-creation.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/model-creation.rst @@ -76,7 +76,7 @@ To run the sample, you need to specify model weights and a device. - This sample supports models with FP32 weights only. - The ``lenet.bin`` weights file is generated by - :doc:`model conversion API <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` + :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>` from the public LeNet model, with the ``input_shape [64,1,28,28]`` parameter specified. - The original model is available in the `Caffe repository `__ on GitHub. 
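The note above describes regenerating the ``lenet.bin`` weights with a fixed ``input_shape``. As a rough, non-authoritative sketch of the same idea with the current ``openvino.convert_model`` API (the ``lenet.onnx`` file name is only a placeholder, and the exact ``input`` form may need adapting to your model):

```python
import openvino as ov

# Convert a framework model while fixing the input shape up front
# (batch 64, 1 channel, 28x28 images, as in the sample's LeNet note).
ov_model = ov.convert_model("lenet.onnx", input=[64, 1, 28, 28])

# save_model() writes lenet.xml together with the lenet.bin weights file
# that the Model Creation sample consumes.
ov.save_model(ov_model, "lenet.xml")
```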
@@ -292,6 +292,6 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Model Creation Python Sample on Github `__ - `Model Creation C++ Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/sync-benchmark.rst b/docs/articles_en/learn-openvino/openvino-samples/sync-benchmark.rst index 245672decb7ab2..ccaa1f03a35552 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/sync-benchmark.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/sync-benchmark.rst @@ -8,15 +8,13 @@ Sync Benchmark Sample This sample demonstrates how to estimate performance of a model using Synchronous Inference Request API. It makes sense to use synchronous inference only in latency -oriented scenarios. Models with static input shapes are supported. Unlike -`demos `__ -this sample does not have other configurable command-line +oriented scenarios. Models with static input shapes are supported. +This sample does not have other configurable command-line arguments. Feel free to modify sample's source code to try out different options. Before using the sample, refer to the following requirements: - The sample accepts any file format supported by ``core.read_model``. -- The sample has been validated with: `yolo-v3-tf `__, - `face-detection-0200 `__ models. +- The sample has been validated with: the yolo-v3-tf and face-detection-0200 models. - To build the sample, use instructions available at :ref:`Build the Sample Applications ` section in "Get Started with Samples" guide. @@ -167,6 +165,6 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Sync Benchmark Python Sample on Github `__ - `Sync Benchmark C++ Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/throughput-benchmark.rst b/docs/articles_en/learn-openvino/openvino-samples/throughput-benchmark.rst index e8b723afd2a480..4632fab82bd0ea 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/throughput-benchmark.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/throughput-benchmark.rst @@ -7,7 +7,7 @@ Throughput Benchmark Sample This sample demonstrates how to estimate performance of a model using Asynchronous -Inference Request API in throughput mode. Unlike `demos `__ this sample +Inference Request API in throughput mode. This sample does not have other configurable command-line arguments. Feel free to modify sample's source code to try out different options. @@ -18,8 +18,7 @@ sets ``uint8``, while the sample uses default model precision which is usually ` Before using the sample, refer to the following requirements: - The sample accepts any file format supported by ``core.read_model``. 
-- The sample has been validated with: `yolo-v3-tf `__, - `face-detection-0200 `__ models. +- The sample has been validated with: yolo-v3-tf and face-detection-0200 models. - To build the sample, use instructions available at :ref:`Build the Sample Applications ` section in "Get Started with Samples" guide. @@ -171,6 +170,6 @@ Additional Resources - :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` +- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Throughput Benchmark Python Sample on Github `__ - `Throughput Benchmark C++ Sample on Github `__ diff --git a/docs/articles_en/openvino-workflow/model-preparation.rst b/docs/articles_en/openvino-workflow/model-preparation.rst index c23540874e9b7a..33a4d8a54cc7f6 100644 --- a/docs/articles_en/openvino-workflow/model-preparation.rst +++ b/docs/articles_en/openvino-workflow/model-preparation.rst @@ -56,12 +56,6 @@ The easiest way to obtain a model is to download it from an online database, suc .. note:: - Model conversion API prior to OpenVINO 2023.1 is considered deprecated. Existing and new - projects are recommended to transition to the new solutions, keeping in mind that they are - not fully backwards compatible with ``openvino.tools.mo.convert_model`` or the ``mo`` - CLI tool. For more details, see the - :doc:`Model Conversion API Transition Guide <../documentation/legacy-features/transition-legacy-conversion-api>`. - For PyTorch and JAX/Flax models, `Python API <#convert-a-model-with-python-convert-model>`__ is the only conversion option. @@ -298,15 +292,4 @@ follow: * :doc:`Post-training optimization ` * :doc:`Model inference in OpenVINO Runtime ` -If you are still using the legacy conversion API (``mo`` or ``openvino.tools.mo.convert_model``), -refer to the following materials: - -* :doc:`Transition from legacy mo and ov.tools.mo.convert_model <../documentation/legacy-features/transition-legacy-conversion-api>` -* :doc:`Legacy Model Conversion API <../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api>` - - - - .. need to investigate python api article generation - api/ie_python_api/_autosummary/openvino.Model.html does not exist, api/ie_python_api/_autosummary/openvino.runtime.Model.html does. - - diff --git a/docs/articles_en/openvino-workflow/model-preparation/convert-model-to-ir.rst b/docs/articles_en/openvino-workflow/model-preparation/convert-model-to-ir.rst index 560b013301e064..dd2fc35c56e92b 100644 --- a/docs/articles_en/openvino-workflow/model-preparation/convert-model-to-ir.rst +++ b/docs/articles_en/openvino-workflow/model-preparation/convert-model-to-ir.rst @@ -296,7 +296,7 @@ used by OpenVINO, typically obtained by converting models of supported framework * The ``convert_model()`` method: - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR can + You can use ``ovc`` to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. .. dropdown:: List of supported formats: @@ -423,7 +423,7 @@ used by OpenVINO, typically obtained by converting models of supported framework * The ``convert_model()`` method: - You can use ``mo`` command-line tool to convert a model to IR. 
The obtained IR + You can use ``ovc`` to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. .. dropdown:: List of supported formats: @@ -557,7 +557,7 @@ used by OpenVINO, typically obtained by converting models of supported framework * The ``convert_model()`` method: - You can use ``mo`` command-line tool to convert a model to IR. The obtained IR + You can use ``ovc`` to convert a model to IR. The obtained IR can then be read by ``read_model()`` and inferred. .. dropdown:: List of supported formats: @@ -708,6 +708,6 @@ multiple times: Additional Resources #################### -* :doc:`Transition guide from the legacy to new conversion API <../../documentation/legacy-features/transition-legacy-conversion-api>` +* Learn about the :doc:`parameters to adjust model conversion <./conversion-parameters>`. * `Download models from Hugging Face `__. diff --git a/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst b/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst index 9de4ba9df18827..b9978f3767562e 100644 --- a/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst +++ b/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst @@ -139,7 +139,7 @@ To check if a model already has dynamic dimensions, first load it with the ``rea If the input model already has dynamic dimensions, that will not change during inference. If the inputs will not be used dynamically, it is recommended to set them to static values using the ``reshape`` method to save application memory and potentially improve inference speed. The OpenVINO API supports any combination of static and dynamic dimensions. -Static and dynamic dimensions can also be set when converting the model with ``convert_model()``. It has identical capabilities to the ``reshape`` method, so you can save time by converting the model with dynamic shapes beforehand rather than in the application code. To get information about setting input shapes using ``convert_model()``, refer to :doc:`Setting Input Shapes <../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes>`. +Static and dynamic dimensions can also be set when converting the model with ``convert_model()``. It has identical capabilities to the ``reshape`` method, so you can save time by converting the model with dynamic shapes beforehand rather than in the application code. To get information about setting input shapes using ``convert_model()``, refer to :doc:`Setting Input Shapes <./changing-input-shape>`. Dimension Bounds ---------------- diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes.rst index aa8e9cdabfda64..31d0af303c633a 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes.rst @@ -31,7 +31,6 @@ different conditions: | :doc:`Automatic Device Selection (AUTO) ` | :doc:`Heterogeneous Execution (HETERO) ` | :doc:`Automatic Batching Execution (Auto-batching) ` -| :doc:`[DEPRECATED] Multi-Device Execution (MULTI) <../../documentation/legacy-features/multi-device>` To learn how to change the device configuration, read the :doc:`Query device properties article `. 
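Tying back to the dynamic-shapes passage above, here is a minimal sketch of marking dimensions dynamic with ``reshape`` before compilation; it assumes a single-input model with a 2-D input and uses a placeholder ``model.xml`` path:

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# Make the batch dimension fully dynamic and bound the second dimension;
# the same shapes could instead be passed to convert_model() at conversion time.
model.reshape([-1, ov.Dimension(1, 512)])

compiled = core.compile_model(model, "CPU")
```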
diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.rst index 6bebf087052b75..a5ab0c845dfa66 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.rst @@ -513,7 +513,6 @@ Additional Resources * `Automatic Device Selection with OpenVINO™ Notebook `__ * :doc:`Debugging AUTO ` -* :doc:`(LEGACY) Running on Multiple Devices Simultaneously <../../../documentation/legacy-features/multi-device>` * :doc:`Inference Devices and Modes <../inference-devices-and-modes>` diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst index b4e1c7ac15afcc..2adf3e7f9d1e4d 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst @@ -124,7 +124,7 @@ Selected precision of each primitive depends on the operation precision in IR, q The ``u1``/``u8``/``i8`` data types are used for quantized operations only, which means that they are not selected automatically for non-quantized operations. For more details on how to get a quantized model, refer to the :doc:`Model Optimization guide <../../model-optimization>`. -Floating-point precision of a GPU primitive is selected based on operation precision in the OpenVINO IR, except for the :doc:``, which is executed in the ``f16`` precision. +Floating-point precision of a GPU primitive is selected based on operation precision in the OpenVINO IR, except for the :doc:``, which is executed in the ``f16`` precision. .. note:: diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.rst index 7b135fa7ff0b14..a0e496eb57fb9e 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.rst @@ -249,11 +249,11 @@ or **ov::intel_npu::max_tiles and ov::intel_npu::tiles** -the ``max_tiles`` property is read-write to enable compiling models off-device. +the ``max_tiles`` property is read-write to enable compiling models off-device. When on NPU, ``max_tiles`` will return the number of tiles the device has. Setting the number of tiles to compile for (via ``intel_npu::tiles``), when on device, -must be preceded by reading ``intel_npu::max_tiles`` first, to make sure that -``ov::intel_npu::tiles`` <= ``ov::intel_npu::max_tiles`` +must be preceded by reading ``intel_npu::max_tiles`` first, to make sure that +``ov::intel_npu::tiles`` <= ``ov::intel_npu::max_tiles`` to avoid exceptions from the compiler. .. 
note:: @@ -280,7 +280,3 @@ Additional Resources * `Working with NPUs in OpenVINO™ Notebook `__ * `Vision colorization Notebook <./../../../notebooks/vision-image-colorization-with-output.html>`__ -* `Classification Benchmark C++ Demo `__ -* `3D Human Pose Estimation Python Demo `__ -* `Object Detection C++ Demo `__ -* `Object Detection Python Demo `__ diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/general-optimizations.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/general-optimizations.rst index b8ec2da9235fd4..5f01623d248755 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/general-optimizations.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/general-optimizations.rst @@ -18,7 +18,7 @@ Inputs Pre-Processing with OpenVINO In many cases, a network expects a pre-processed image. It is advised not to perform any unnecessary steps in the code: -* Model conversion API can efficiently incorporate the mean and normalization (scale) values into a model (for example, to the weights of the first convolution). For more details, see the :doc:`relevant model conversion API command-line parameters <../../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation>`. +* Model conversion API can efficiently incorporate the mean and normalization (scale) values into a model (for example, to the weights of the first convolution). For more details, see the :doc:`relevant model conversion API command-line parameters <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. * Let OpenVINO accelerate other means of :doc:`Image Pre-processing and Conversion ` * Data which is already in the "on-device" memory can be input directly by using the :doc:`remote tensors API of the GPU Plugin <../inference-devices-and-modes/gpu-device/remote-tensor-api-gpu-plugin>`. @@ -60,7 +60,7 @@ Below are example-codes for the regular and async-based approaches to compare: The technique can be generalized to any available parallel slack. For example, you can do inference and simultaneously encode the resulting or previous frames or run further inference, like emotion detection on top of the face detection results. -Refer to the `Object Detection C++ Demo `__ , `Object Detection Python Demo `__ (latency-oriented Async API showcase) and :doc:`Benchmark App Sample <../../../learn-openvino/openvino-samples/benchmark-tool>` for complete examples of the Async API in action. +Refer to the :doc:`Benchmark App Sample <../../../learn-openvino/openvino-samples/benchmark-tool>` for complete examples of the Async API in action. .. 
note:: diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst index 690b606ff3720a..1562165916e576 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst @@ -23,7 +23,6 @@ Below is a list of cases where input/output layout is important: * :doc:`Convert to OpenVINO <../../../model-preparation/convert-model-to-ir>` * `OpenVINO Model Conversion Tutorial `__ - * :doc:`[LEGACY] Model Optimizer Embedding Preprocessing Computation <../../../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation>` guide. * Improving the readability of a model input and output. diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-throughput.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-throughput.rst index 18c18c5f7d05b8..8aafd9ceb4faec 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-throughput.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-throughput.rst @@ -63,18 +63,7 @@ In general, most throughput-oriented inference applications should: * Use the Async API with callbacks, to avoid any dependency on the completion order of the requests and possible device starvation, as explained in the :doc:`common-optimizations section `. -Multi-Device Execution -###################### - -OpenVINO offers the automatic, scalable :doc:`multi-device inference mode <../../../documentation/legacy-features/multi-device>`, which is a simple *application-transparent* way to improve throughput. There is no need to re-architecture existing applications for any explicit multi-device support: no explicit network loading to each device, no separate per-device queues, no additional logic to balance inference requests between devices, etc. For the application using it, multi-device is like any other device, as it manages all processes internally. -Just like with other throughput-oriented scenarios, there are several major pre-requisites for optimal multi-device performance: - -* Using the :ref:`Asynchronous API ` and :doc:`callbacks <../integrate-openvino-with-your-application/inference-request>` in particular. -* Providing the multi-device (and hence the underlying devices) with enough data to crunch. As the inference requests are naturally independent data pieces, the multi-device performs load-balancing at the "requests" (outermost) level to minimize the scheduling overhead. - -Keep in mind that the resulting performance is usually a fraction of the "ideal" (plain sum) value, when the devices compete for certain resources such as the memory-bandwidth, which is shared between CPU and iGPU. - .. note:: - While the legacy approach of optimizing the parameters of each device separately works, the :doc:`Automatic Device Selection <../inference-devices-and-modes/auto-device-selection>` allow configuring all devices (that are part of the specific multi-device configuration) at once. 
+ The :doc:`Automatic Device Selection <../inference-devices-and-modes/auto-device-selection>` allows configuration of all devices at once. diff --git a/docs/dev/build_mac_arm.md b/docs/dev/build_mac_arm.md index 5a1a3698568f95..8b9781e46a5c96 100644 --- a/docs/dev/build_mac_arm.md +++ b/docs/dev/build_mac_arm.md @@ -14,14 +14,14 @@ The software was validated on: - [brew](https://brew.sh) package manager to install additional dependencies. Use [install brew](https://brew.sh) guide to achieve this. - Installation step for python and python libraries varies depending on the host architecture: - - **arm64** Python 3.9 - 3.12 for the OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): + - **arm64** Python 3.9 - 3.12 for the OpenVINO Runtime Python API: ```sh % # let's have a look what python versions are available in brew % brew search python % # select preferred version of python based on available ones, e.g. 3.11 % brew install python@3.11 ``` - - **x86_64** Select universal2 installer from [Python releases](https://www.python.org/downloads/macos/) download page and install `python-3.X.Y-macos11.pkg` image. This allows to have universal python libraries, build x86_64 OpenVINO Python API and Development tools. + - **x86_64** Select universal2 installer from [Python releases](https://www.python.org/downloads/macos/) download page and install `python-3.X.Y-macos11.pkg` image. This allows you to have universal python libraries of OpenVINO Python API (build x86_64). - Clang compiler and other command line tools from Xcode 10.1 or higher: ```sh @@ -35,13 +35,13 @@ The software was validated on: ```sh % brew install tbb pugixml flatbuffers snappy protobuf ``` -- Additional `pip` dependencies to build OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): +- Additional `pip` dependencies to build OpenVINO Runtime Python API: ```sh % # update pip and setuptools to newer versions % python3 -m pip install -U pip % python3 -m pip install -r /src/bindings/python/requirements.txt ``` - Additional install requirements (after OpenVINO repo clone) in order to build OpenVINO Python API and Development tools as wheel packages: + Additional install requirements (after OpenVINO repo clone) in order to build OpenVINO Python API as wheel packages: ```sh % python3 -m pip install -r /src/bindings/python/wheel/requirements-dev.txt ``` diff --git a/docs/dev/build_mac_intel_cpu.md b/docs/dev/build_mac_intel_cpu.md index f5b70d73709c20..735c8a97a3b3df 100644 --- a/docs/dev/build_mac_intel_cpu.md +++ b/docs/dev/build_mac_intel_cpu.md @@ -12,14 +12,14 @@ The software was validated on: - [brew](https://brew.sh) package manager to install additional dependencies. Use [install brew](https://brew.sh) guide to achieve this. - Installation step for python and python libraries varies depending on the host architecture: - - **x86_64** Python 3.9 - 3.12 for the OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): + - **x86_64** Python 3.9 - 3.12 for the OpenVINO Runtime Python API: ```sh % # let's have a look what python versions are available in brew % brew search python % # select preferred version of python based on available ones, e.g. 3.11 % brew install python@3.11 ``` - - **arm64** Select universal2 installer from [Python releases](https://www.python.org/downloads/macos/) download page and install `python-3.X.Y-macos11.pkg` image. This allows to have universal python libraries, build x86_64 OpenVINO Python API and Development tools. 
+ - **arm64** Select universal2 installer from [Python releases](https://www.python.org/downloads/macos/) download page and install `python-3.X.Y-macos11.pkg` image. This allows to have universal python libraries of OpenVINO Python API (build x86_64) . - [CMake](https://cmake.org/download/) 3.13 or higher and other development tools: ```sh % brew install cmake scons fdupes git-lfs ninja @@ -32,13 +32,13 @@ The software was validated on: ```sh % brew install tbb pugixml flatbuffers snappy protobuf ``` -- Additional `pip` dependencies to build OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): +- Additional `pip` dependencies to build OpenVINO Runtime Python API: ```sh % # update pip and setuptools to newer versions % python3 -m pip install -U pip % python3 -m pip install -r /src/bindings/python/requirements.txt ``` - Additional install requirements (after OpenVINO repo clone) in order to build OpenVINO Python API and Development tools as wheel packages: + Additional install requirements (after OpenVINO repo clone) in order to build OpenVINO Python API: ```sh % python3 -m pip install -r /src/bindings/python/wheel/requirements-dev.txt ``` diff --git a/docs/dev/installing.md b/docs/dev/installing.md index de4c7ba9df9af6..c20b2ce183de3c 100644 --- a/docs/dev/installing.md +++ b/docs/dev/installing.md @@ -6,200 +6,87 @@ Once the project is built you can install OpenVINO™ Runtime into custom locati cmake --install --prefix ``` -## Installation check +## Build and Run Samples -
+1. Build samples. -1. Obtaining Open Model Zoo tools and models + To build C++ sample applications, run the following commands: -To have the ability to run samples and demos, you need to clone the Open Model Zoo repository and copy the folder under `./deployment_tools` to your install directory: + Linux and macOS: + ```sh + cd /samples/cpp + ./build_samples.sh + ``` -``` -git clone https://github.com/openvinotoolkit/open_model_zoo.git -cmake -E copy_directory ./open_model_zoo/ /deployment_tools/open_model_zoo/ -``` - -2. Adding OpenCV to your environment - -Open Model Zoo samples use OpenCV functionality to load images. To use it for demo builds you need to provide the path to your OpenCV custom build by setting `OpenCV_DIR` environment variable and add path OpenCV libraries to the `LD_LIBRARY_PATH (Linux)` or `PATH (Windows)` variable before running demos. - -Linux: -```sh -export LD_LIBRARY_PATH=/path/to/opencv_install/lib/:$LD_LIBRARY_PATH -export OpenCV_DIR=/path/to/opencv_install/cmake -``` - -Windows: -```sh -set PATH=\path\to\opencv_install\bin\;%PATH% -set OpenCV_DIR=\path\to\opencv_install\cmake -``` - -3. Running demo - -To check your installation go to the demo directory and run Classification Demo: - -Linux and macOS: -```sh -cd /deployment_tools/demo -./demo_squeezenet_download_convert_run.sh -``` - -Windows: -```sh -cd \deployment_tools\demo -demo_squeezenet_download_convert_run.bat -``` - -Result: -``` -Top 10 results: + Windows Command Prompt: + ```sh + cd \samples\cpp + build_samples_msvc.bat + ``` -Image /deployment_tools/demo/car.png - -classid probability label -------- ----------- ----- -817 0.6853030 sports car, sport car -479 0.1835197 car wheel -511 0.0917197 convertible -436 0.0200694 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon -751 0.0069604 racer, race car, racing car -656 0.0044177 minivan -717 0.0024739 pickup, pickup truck -581 0.0017788 grille, radiator grille -468 0.0013083 cab, hack, taxi, taxicab -661 0.0007443 Model T - -[ INFO ] Execution successful -``` + Windows PowerShell: + ```sh + & /build_samples.ps1 + ``` -

+2. Download a model. + You can download an image classification model from + [Hugging Face](https://huggingface.co/models?pipeline_tag=image-classification&sort=trending) + to run the sample.

+4. Convert the model. -1. Build samples + Linux and macOS: + ```sh + ovc --compress_to_fp16=True + ``` + Windows: + ```bat + ovc --compress_to_fp16=True + ``` -To build C++ sample applications, run the following commands: +5. Run inference on the sample. -Linux and macOS: -```sh -cd /samples/cpp -./build_samples.sh -``` + Set up the OpenVINO environment variables: -Windows Command Prompt: -```sh -cd \samples\cpp -build_samples_msvc.bat -``` + Linux and macOS: + ```sh + source /setupvars.sh + ``` -Windows PowerShell: -```sh -& /build_samples.ps1 -``` + Windows Command Prompt: + ```bat + \setupvars.bat + ``` -2. Install OpenVINO Development Tools + Windows PowerShell: + ```bat + . /setupvars.ps1 + ``` -> **NOTE**: To build OpenVINO Development Tools (Model Optimizer, Post-Training Optimization Tool, Model Downloader, and Open Model Zoo tools) wheel package locally you are required to use the CMake option: `-DENABLE_WHEEL=ON`. + The following commands run the Image Classification Code Sample using the [`dog.bmp`](https://storage.openvinotoolkit.org/data/test_data/images/ 224x224/dog.bmp) file as an input image, the model in IR format, and on different hardware devices: -To install OpenVINO Development Tools to work with Caffe models (OpenVINO support for Caffe is currently being deprecated and will be removed entirely in the future), execute the following commands: + Linux and macOS: -Linux and macOS: + ```sh + cd ~/openvino_cpp_samples_build//Release + ./classification_sample_async -i /dog.bmp -m /model.xml -d CPU + ``` + where the is the output of ``uname -m``, for example, ``intel64``, ``armhf``, or ``aarch64``. -```sh -#setup virtual environment -python3 -m venv openvino_env -source openvino_env/bin/activate -pip install pip --upgrade + Windows: -#install local package from install directory -pip install openvino_dev--py3-none-any.whl[caffe] --find-links=/tools -``` - -Windows: -```bat -rem setup virtual environment -python -m venv openvino_env -openvino_env\Scripts\activate.bat -pip install pip --upgrade - -rem install local package from install directory -cd \tools -pip install openvino_dev--py3-none-any.whl[caffe] --find-links=\tools -``` - -3. Download the Models - -Download the following model to run the Image Classification Sample: - -Linux and macOS: -```sh -omz_downloader --name googlenet-v1 --output_dir ~/models -``` - -Windows: -```bat -omz_downloader --name googlenet-v1 --output_dir %USERPROFILE%\Documents\models -``` - -4. Convert the Model with Model Optimizer - -Linux and macOS: -```sh -mkdir ~/ir -mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --compress_to_fp16 --output_dir ~/ir -``` -Windows: -```bat -mkdir %USERPROFILE%\Documents\ir -mo --input_model %USERPROFILE%\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel --compress_to_fp16 --output_dir %USERPROFILE%\Documents\ir -``` - -5. Run Inference on the Sample - -Set up the OpenVINO environment variables: - -Linux and macOS: -```sh -source /setupvars.sh -``` - -Windows Command Prompt: -```bat -\setupvars.bat -``` - -Windows PowerShell: -```bat -. 
/setupvars.ps1 -``` - -The following commands run the Image Classification Code Sample using the [`dog.bmp`](https://storage.openvinotoolkit.org/data/test_data/images/224x224/dog.bmp) file as an input image, the model in IR format from the `ir` directory, and on different hardware devices: - -Linux and macOS: - -```sh -cd ~/openvino_cpp_samples_build//Release -./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU -``` -where the is the output of ``uname -m``, for example, ``intel64``, ``armhf``, or ``aarch64``. - -Windows: - -```bat -cd %USERPROFILE%\Documents\Intel\OpenVINO\openvino_cpp_samples_build\\Release -.\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d CPU -``` -where the is either ``intel64`` or ``aarch64`` depending on the platform architecture. + ```bat + cd %USERPROFILE%\Documents\Intel\OpenVINO\openvino_cpp_samples_build\\Release + .\classification_sample_async.exe -i \dog.bmp -m \model.xml -d CPU + ``` + where the is either ``intel64`` or ``aarch64`` depending on the platform architecture. When the sample application is complete, you see the label and confidence data for the top 10 categories on the display: +Below are results of using the googlenet-v1 model. + ``` Top 10 results: @@ -220,36 +107,9 @@ classid probability ``` -
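As a quick cross-check of the converted IR from Python (a sketch only: ``model.xml`` stands for the file produced by ``ovc`` above, and a static, single-input model is assumed):

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")

# Random data shaped like the model input, just to confirm inference runs;
# use a real preprocessed image for meaningful classification results.
dummy = np.random.rand(*compiled.input(0).shape).astype(np.float32)
scores = compiled(dummy)[compiled.output(0)]
print("Top class id:", int(np.argmax(scores)))
```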

## Adding OpenVINO Runtime to Your Project

- -For CMake projects, set the `InferenceEngine_DIR` and when you run CMake tool: - -```sh -cmake -DInferenceEngine_DIR=/path/to/openvino/build/ . -``` - -Then you can find Inference Engine by [`find_package`]: - -```cmake -find_package(InferenceEngine REQUIRED) -target_link_libraries(${PROJECT_NAME} PRIVATE ${InferenceEngine_LIBRARIES}) -``` -


- - For CMake projects, set the `OpenVINO_DIR` and when you run CMake tool: ```sh @@ -266,8 +126,6 @@ target_link_libraries(ov_app PRIVATE openvino::runtime) add_executable(ov_c_app main.c) target_link_libraries(ov_c_app PRIVATE openvino::runtime::c) ``` -

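For projects that consume OpenVINO from Python rather than through CMake, a rough equivalent of the link check above is simply importing the pip-installed package (the ``model.xml`` path is a placeholder):

```python
# pip install openvino
import openvino as ov

core = ov.Core()
print("OpenVINO:", ov.get_version())
print("Devices:", core.available_devices)
compiled = core.compile_model("model.xml", "AUTO")  # placeholder IR path
```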
## See also diff --git a/docs/dev/pypi_publish/pypi-openvino-dev.md b/docs/dev/pypi_publish/pypi-openvino-dev.md deleted file mode 100644 index 868a7298b10a14..00000000000000 --- a/docs/dev/pypi_publish/pypi-openvino-dev.md +++ /dev/null @@ -1,190 +0,0 @@ -# OpenVINO™ Development Tools - - -> **NOTE**: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software. - -> **NOTE**: OpenVINO™ Development Tools package has been deprecated and will be discontinued with 2025.0 release. To learn more, refer to the [OpenVINO Legacy Features and Components page](https://docs.openvino.ai/2024/documentation/legacy-features.html). - -Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying AI inference. It can be used to develop applications and solutions based on deep learning tasks, such as: emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, etc. It provides high-performance and rich deployment options, from edge to cloud. - -OpenVINO™ Development Tools enables you to download models from Open Model Zoo, convert your own models to OpenVINO IR, as well as optimize and tune pre-trained deep learning models. See [What's in the Package](#whats-in-the-package) for more information. - -## System Requirements - -Before you start the installation, check the supported operating systems and required Python* versions. The complete list of supported hardware is available in the [System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html). - -**C++ libraries** are also required for the installation on Windows*. To install that, you can [download the Visual Studio Redistributable file (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe). - -> **NOTE**: This package can be installed on other versions of macOS, Linux and Windows, but only the specific versions above are fully validated. - -## Install the OpenVINO™ Development Tools Package - -There are two options to install OpenVINO Development Tools: installation into an existing environment with a deep learning framework used for model training or creation; -or installation in a new environment. - -### Installation into an Existing Environment with the Source Deep Learning Framework - -To install OpenVINO Development Tools (see the [What's in the Package](#whats-in-the-package) section of this article) into an existing environment -with the source deep learning framework used for model training or creation, run the following command: -``` -pip install openvino-dev -``` - -### Installation in a New Environment - -If you do not have an environment with the source deep learning framework for the input model or you encounter any compatibility issues between OpenVINO and your version of deep learning framework, -you may install OpenVINO Development Tools with validated versions of frameworks into a new environment. - -#### Step 1. Set Up Python Virtual Environment - -Use a virtual environment to avoid dependency conflicts. 
- -To create a virtual environment, use the following commands: - -On Windows: -```sh -python -m venv openvino_env -``` - -On Linux and macOS: -```sh -python3 -m venv openvino_env -``` - -> **NOTE**: On Linux and macOS, you may need to [install pip](https://pip.pypa.io/en/stable/installation/). For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-venv python3-pip`. - -#### Step 2. Activate Virtual Environment - -On Linux and macOS: -```sh -source openvino_env/bin/activate -``` -On Windows: -```sh -openvino_env\Scripts\activate -``` - -#### Step 3. Set Up and Update PIP to the Highest Version - -Run the command below: -```sh -python -m pip install --upgrade pip -``` - -#### Step 4. Install the Package - -Use the following command: -```sh -pip install openvino-dev[extras] -``` - where `extras` is the source deep learning framework for the input model and is one or more of the following values separated with "," : - -| Extras Value | DL Framework | -| :-------------------------------| :------------------------------------------------------------------------------- | -| caffe | [Caffe*](https://caffe.berkeleyvision.org/) | -| kaldi | [Kaldi*](https://github.com/kaldi-asr/kaldi) | -| onnx | [ONNX*](https://github.com/microsoft/onnxruntime/) | -| pytorch | [PyTorch*](https://pytorch.org/) | -| tensorflow | [TensorFlow* 1.x](https://www.tensorflow.org/versions#tensorflow_1) | -| tensorflow2 | [TensorFlow* 2.x](https://www.tensorflow.org/versions#tensorflow_2) | - -For example, to install and configure the components for working with TensorFlow 2.x and ONNX models, use the following command: - ```sh - pip install openvino-dev[tensorflow2,onnx] - ``` -> **NOTE**: Model conversion API support for TensorFlow 1.x environment has been deprecated. Use TensorFlow 2.x environment to convert both TensorFlow 1.x and 2.x models. - -> **NOTE**: On macOS, you may need to enclose the package name in quotes: `pip install "openvino-dev[extras]"`. - -## How to Verify that the Package Is Installed - -- To verify that the **developer package** is properly installed, run the command below (this may take a few seconds): - ```sh - mo -h - ``` - You will see the help message for ``mo`` if installation finished successfully. - -- To verify that OpenVINO Runtime from the **runtime package** is available, run the command below: - ```sh - python -c "from openvino import Core; print(Core().available_devices)" - ``` - If installation was successful, you will see a list of available devices. - - - -## What's in the Package? - -> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications. 
- -**In addition, the openvino-dev package installs the following components by default:** - -| Component | Console Script | Description | -|------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [Legacy Model conversion API](https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components.
Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | | -| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/2024/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using model conversion API.
**Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. | - -## Troubleshooting - -For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2024/get-started/troubleshooting-install-config.html). The following sections also provide explanations to several error messages. - -### Errors with Installing via PIP for Users in China - -Users in China might encounter errors while downloading sources via PIP during OpenVINO™ installation. To resolve the issues, try the following solution: - -* Add the download source using the ``-i`` parameter with the Python ``pip`` command. For example: - - ``` sh - pip install openvino-dev -i https://mirrors.aliyun.com/pypi/simple/ - ``` - Use the ``--trusted-host`` parameter if the URL above is ``http`` instead of ``https``. - You can also run the following command to install openvino-dev with specific frameworks. For example: - - ``` - pip install openvino-dev[tensorflow2] -i https://mirrors.aliyun.com/pypi/simple/ - ``` - -### zsh: no matches found : openvino-dev[...] - -If you use zsh (Z shell) interpreter, that is the default shell for macOS starting with version 10.15 (Catalina), you may encounter the following error while installing `openvino-dev` package with extras: - -```sh -pip install openvino-dev[tensorflow2,caffe] -zsh: no matches found: openvino-dev[tensorflow2,caffe] -``` - -By default zsh interprets square brackets as an expression for pattern matching. To resolve this issue, you need to escape the command with quotes: - -```sh -pip install 'openvino-dev[tensorflow2,caffe]' -``` - -To avoid such issues you can also disable globbing for PIP commands by defining an alias in `~/.zshrc` file: - -```sh -alias pip='noglob pip' -``` - -### ERROR:root:Could not find OpenVINO Python API. - -On Windows*, some libraries are necessary to run OpenVINO. To resolve this issue, install the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe). You can also view a full download list on the [official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist). - -### ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory - -To resolve missing external dependency on Ubuntu* 18.04, execute the following command: -```sh -sudo apt-get install libpython3.8 -``` - -## Additional Resources - -- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit) -- [OpenVINO™ Documentation](https://docs.openvino.ai/) -- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks) -- [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html) - -Copyright © 2018-2024 Intel Corporation -> **LEGAL NOTICE**: Your use of this software and any required dependent software (the -“Software Package”) is subject to the terms and conditions of the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package, which may also include notices, disclaimers, or -license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details. 
- ->Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. diff --git a/docs/optimization_guide/nncf/code/pruning_tf.py b/docs/optimization_guide/nncf/code/pruning_tf.py index 4d2f5018961365..76b76174dc7429 100644 --- a/docs/optimization_guide/nncf/code/pruning_tf.py +++ b/docs/optimization_guide/nncf/code/pruning_tf.py @@ -40,22 +40,22 @@ #! [distributed] #! [tune_model] -... # fine-tuning preparations, e.g. dataset, loss, optimizer setup, etc. +... # fine-tuning preparations, e.g. dataset, loss, optimization setup, etc. # create compression callbacks to control pruning parameters and dump compression statistics -# all the setting are being taked from compression_ctrl, i.e. from NNCF config +# all the setting are being taked from compression_ctrl, i.e. from NNCF config compression_callbacks = create_compression_callbacks(compression_ctrl, log_dir="./compression_log") # tune quantized model for 50 epochs as the baseline -model.fit(train_dataset, epochs=50, callbacks=compression_callbacks) +model.fit(train_dataset, epochs=50, callbacks=compression_callbacks) #! [tune_model] #! [export] compression_ctrl.export_model("compressed_model.pb") #export to Frozen Graph -#! [export] +#! [export] #! [save_checkpoint] -from nncf.tensorflow.utils.state import TFCompressionState +from nncf.tensorflow.utils.state import TFCompressionState from nncf.tensorflow.callbacks.checkpoint_callback import CheckpointManagerCallback checkpoint = tf.train.Checkpoint(model=model, diff --git a/docs/optimization_guide/nncf/code/pruning_torch.py b/docs/optimization_guide/nncf/code/pruning_torch.py index 6bc1cae4319406..6b637881b5cfc9 100644 --- a/docs/optimization_guide/nncf/code/pruning_torch.py +++ b/docs/optimization_guide/nncf/code/pruning_torch.py @@ -30,7 +30,7 @@ #! [nncf_congig] #! [wrap_model] -model = TorchModel() # instance of torch.nn.Module +model = TorchModel() # instance of torch.nn.Module compression_ctrl, model = create_compressed_model(model, nncf_config) #! [wrap_model] @@ -39,7 +39,7 @@ #! [distributed] #! [tune_model] -... # fine-tuning preparations, e.g. dataset, loss, optimizer setup, etc. +... # fine-tuning preparations, e.g. dataset, loss, optimization setup, etc. # tune quantized model for 50 epochs as the baseline for epoch in range(0, 50): @@ -52,7 +52,7 @@ #! [export] compression_ctrl.export_model("compressed_model.onnx") -#! [export] +#! [export] #! [save_checkpoint] checkpoint = { @@ -65,8 +65,8 @@ #! [load_checkpoint] resuming_checkpoint = torch.load(path_to_checkpoint) -compression_state = resuming_checkpoint['compression_state'] +compression_state = resuming_checkpoint['compression_state'] compression_ctrl, model = create_compressed_model(model, nncf_config, compression_state=compression_state) -state_dict = resuming_checkpoint['state_dict'] +state_dict = resuming_checkpoint['state_dict'] model.load_state_dict(state_dict) #! 
[load_checkpoint] diff --git a/docs/optimization_guide/nncf/code/qat_tf.py b/docs/optimization_guide/nncf/code/qat_tf.py index e210b963d5a8f6..d8a20958cfbcc2 100644 --- a/docs/optimization_guide/nncf/code/qat_tf.py +++ b/docs/optimization_guide/nncf/code/qat_tf.py @@ -20,8 +20,8 @@ #! [nncf_congig] #! [wrap_model] -model = KerasModel() # instance of the tensorflow.keras.Model -compression_ctrl, model = create_compressed_model(model, nncf_config) +model = KerasModel() # instance of the tensorflow.keras.Model +compression_ctrl, model = create_compressed_model(model, nncf_config) #! [wrap_model] #! [distributed] @@ -29,7 +29,7 @@ #! [distributed] #! [tune_model] -... # fine-tuning preparations, e.g. dataset, loss, optimizer setup, etc. +... # fine-tuning preparations, e.g. dataset, loss, optimization setup, etc. # create compression callbacks to control optimization parameters and dump compression statistics compression_callbacks = create_compression_callbacks(compression_ctrl, log_dir="./compression_log") @@ -39,10 +39,10 @@ #! [export] compression_ctrl.export_model("compressed_model.pb") #export to Frozen Graph -#! [export] +#! [export] #! [save_checkpoint] -from nncf.tensorflow.utils.state import TFCompressionState +from nncf.tensorflow.utils.state import TFCompressionState from nncf.tensorflow.callbacks.checkpoint_callback import CheckpointManagerCallback checkpoint = tf.train.Checkpoint(model=model, diff --git a/docs/optimization_guide/nncf/code/qat_torch.py b/docs/optimization_guide/nncf/code/qat_torch.py index f80a7e8f9aea9f..71594635cb84fd 100644 --- a/docs/optimization_guide/nncf/code/qat_torch.py +++ b/docs/optimization_guide/nncf/code/qat_torch.py @@ -7,7 +7,7 @@ #! [quantize] #! [tune_model] -... # fine-tuning preparations, e.g. dataset, loss, optimizer setup, etc. +... # fine-tuning preparations, e.g. dataset, loss, optimization setup, etc. # tune quantized model for 5 epochs as the baseline for epoch in range(0, 5): diff --git a/samples/cpp/benchmark_app/README.md b/samples/cpp/benchmark_app/README.md index e516c93cf18487..1f9ad9d2c2eb4a 100644 --- a/samples/cpp/benchmark_app/README.md +++ b/samples/cpp/benchmark_app/README.md @@ -12,4 +12,4 @@ To use the C++ benchmark_app, you must first build it following the [Build the S > **NOTE**: If you installed OpenVINO Runtime using PyPI or Anaconda Cloud, only the [Benchmark Python Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) is available, and you should follow the usage instructions on that page instead. -The benchmarking application works with models in the OpenVINO IR, TensorFlow, TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If you need it, OpenVINO also allows you to [convert your models](https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.html). +The benchmarking application works with models in the OpenVINO IR, TensorFlow, TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If you need it, OpenVINO also allows you to [convert your models](https://docs.openvino.ai/2024/documentation/openvino-workflow/model-preparation/convert-model-to-ir.html). 
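The same direct model loading that ``benchmark_app`` relies on is available from the Python API; a small sketch, assuming the relevant frontend is present in your build and ``model.onnx`` is only a placeholder path:

```python
import openvino as ov

core = ov.Core()
# read_model() accepts OpenVINO IR as well as ONNX, TensorFlow, TensorFlow Lite
# and PaddlePaddle files directly, so a separate conversion step is optional.
model = core.read_model("model.onnx")
compiled = core.compile_model(model, "CPU")
```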
diff --git a/src/bindings/python/docs/build.md b/src/bindings/python/docs/build.md index f824d9ccb8d82a..36aecd4350d2d5 100644 --- a/src/bindings/python/docs/build.md +++ b/src/bindings/python/docs/build.md @@ -18,14 +18,14 @@ To learn more about wheels and their use cases, check out the article [What Are OpenVINO can be built based on specific virtual environments such as [venv](https://docs.python.org/3/tutorial/venv.html), [virtualenv](https://virtualenv.pypa.io/en/latest/) or [pyenv](https://github.com/pyenv/pyenv). It is highly recommended to use virtual environments during development. They improve development process and allow better management of Python versions and packages. -*Note: Supported Python versions can be found in ["System Requirements" section](../../../../docs/install_guides/pypi-openvino-dev.md#system-requirements).* +*Note: Supported Python versions can be found in ["System Requirements"](https://docs.openvino.ai/nightly/about-openvino/release-notes-openvino/system-requirements.html).* ### Example: using pyenv with OpenVINO™ on Linux based system 1. First, set up the `pyenv` project. Please follow [official instructions of the pyenv project](https://github.com/pyenv/pyenv#installation) for any additional information. -2. Install a desired Python version. Following example will use Python in version 3.10.7. To correctly link libraries, an installed Python version must match OpenVINO™: +2. Install a desired Python version. Following example will use Python in version 3.10.7. To correctly link libraries, an installed Python version must match OpenVINO™: * Python with a shared library for a dynamically linked OpenVINO™: ```shell env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install --verbose 3.10.7 diff --git a/tools/benchmark_tool/README.md b/tools/benchmark_tool/README.md index 2d254557c81e56..fec7f801d308d5 100644 --- a/tools/benchmark_tool/README.md +++ b/tools/benchmark_tool/README.md @@ -11,4 +11,4 @@ For more detailed information on how this sample works, check the dedicated [art The Python benchmark_app is automatically installed when you install OpenVINO Developer Tools using [PyPI](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-pip.html) Before running ``benchmark_app``, make sure the ``openvino_env`` virtual environment is activated, and navigate to the directory where your model is located. The benchmarking application works with models in the OpenVINO IR (``model.xml`` and ``model.bin``) and ONNX (``model.onnx``) formats. -Make sure to [convert your models](https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api.html) if necessary. +Make sure to [convert your models](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html) if necessary.
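For a rough idea of the kind of measurement ``benchmark_app`` automates, the following throughput-style sketch uses the asynchronous API; it is not the tool's implementation, and the model path, device, request count, and static single input are all assumptions:

```python
import time
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

queue = ov.AsyncInferQueue(compiled)  # sized from the device's optimal request number
dummy = np.random.rand(*compiled.input(0).shape).astype(np.float32)

done = 0
def on_done(request, userdata):
    global done
    done += 1
queue.set_callback(on_done)

start = time.perf_counter()
for _ in range(200):
    queue.start_async({0: dummy})
queue.wait_all()
elapsed = time.perf_counter() - start
print(f"{done / elapsed:.1f} inferences per second over {done} runs")
```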