Releases: openvinotoolkit/openvino
2023.1.0
Summary of major features and improvements
- More Generative AI options with Hugging Face and improved PyTorch model support.
- NEW: PyTorch support is further enhanced: you no longer need to convert to ONNX for deployment. Developers can use their API of choice, PyTorch or OpenVINO, for added performance benefits. PyTorch models can also be automatically imported and converted for quicker deployment, while OpenVINO tools remain available for advanced model compression and deployment advantages, ensuring flexibility and a range of options.
- torch.compile (preview) – OpenVINO is now available as a backend through PyTorch torch.compile, empowering developers to utilize the OpenVINO toolkit through PyTorch APIs. This feature has also been integrated into the Automatic1111 Stable Diffusion Web UI, helping developers achieve accelerated performance for Stable Diffusion 1.5 and 2.1 on Intel CPUs and GPUs on both Linux and Windows platforms.
- Optimum Intel – Hugging Face and Intel continue to enhance top generative AI models by optimizing execution, making your models run faster and more efficiently on both CPU and GPU. OpenVINO serves as a runtime for inferencing execution. New PyTorch auto import and conversion capabilities have been enabled, along with support for weights compression to achieve further performance gains.
- Broader LLM model support and more model compression techniques
- Enhanced performance and accessibility for Generative AI: runtime performance and memory usage have been significantly optimized, especially for Large Language Models (LLMs). Models used for chatbots, instruction following, code generation, and more, including prominent models like BLOOM, Dolly, Llama 2, GPT-J, GPTNeoX, ChatGLM, and Open-Llama, have been enabled.
- Improved LLMs on GPU – Model coverage for dynamic shapes support has been expanded, further helping the performance of generative AI workloads on both integrated and discrete GPUs. Furthermore, memory reuse and weight memory consumption for dynamic shapes have been improved.
- Neural Network Compression Framework (NNCF) now includes an 8-bit weight compression method, making it easier to compress and optimize LLMs. The SmoothQuant method has been added for more accurate and efficient post-training quantization of Transformer-based models.
- More portability and performance to run AI at the edge, in the cloud, or locally.
- NEW: Support for Intel® Core™ Ultra (codename Meteor Lake). This new generation of Intel CPUs is tailored to excel in AI workloads with a built-in inference accelerator.
- Integration with MediaPipe – Developers now have direct access to this framework for building multipurpose AI pipelines. Easily integrate with OpenVINO Runtime and OpenVINO Model Server to enhance performance for faster AI model execution. You also benefit from seamless model management and version control, as well as custom logic integration with additional calculators and graphs for tailored AI solutions. Lastly, you can scale faster by delegating deployment to remote hosts via gRPC/REST interfaces for distributed processing.
Support Change and Deprecation Notices
- OpenVINO™ Development Tools package (pip install openvino-dev) is being deprecated and will be removed from installation options and distribution channels with 2025.0. For more info, see the documentation for Legacy Features.
- Tools:
- Accuracy Checker is deprecated and will be discontinued with 2024.0.
- Post-Training Optimization Tool (POT) has been deprecated and will be discontinued with 2024.0.
- Runtime:
- Intel® Gaussian & Neural Accelerator (Intel® GNA) is being deprecated; the GNA plugin will be discontinued with 2024.0.
- OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.
- Python 3.7 will be discontinued with the 2023.2 LTS release.
You can find OpenVINO™ toolkit 2023.1 release here:
- Download archives* with OpenVINO™
- Install it via Conda:
conda install -c conda-forge openvino=2023.1.0
- OpenVINO™ for Python:
pip install openvino==2023.1.0
Release documentation is available here: https://docs.openvino.ai/2023.1
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-1.html
2023.0.2
This release provides functional bug fixes and capability updates for 2023.0 that enable developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.
Note: This is a standard release intended for developers who prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for two years (one year of bug fixes, and two years of security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.
Major changes:
- OpenVINO GNA Plugin:
- Fixes an issue where the GNA device would not work on Gemini Lake (GLK) platforms
- Fixes a memory leak during HLK testing
- OpenVINO CPU Plugin:
- Fixes issues that occurred in Multi-Threading 2.0 when getting CPU mapping details on Windows 7 platforms
- OpenVINO Core:
- Fixes issues that occurred when compiling a PyTorch model with the unfold op
You can find OpenVINO™ toolkit 2023.0.2 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2023.0.2
- OpenVINO™ Development tools:
pip install openvino-dev==2023.0.2
Release documentation is available here: https://docs.openvino.ai/2023.0/home.html
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html
2023.1.0.dev20230811
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.1.0.dev20230811 pre-release version here:
- Download archives* with OpenVINO™
- Install it via Conda:
conda install -c "conda-forge/label/openvino_dev" openvino=2023.1.0.dev20230811
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.1.0.dev20230811
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.1.0.dev20230811
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
What's Changed
- CPU runtime:
- Enabled weights decompression support for Large Language Models (LLMs). The implementation supports AVX2 and AVX-512 hardware targets for Intel® Core™ processors, improving performance in latency mode (comparison: FP32 vs. FP32+INT8 weights). For 4th Generation Intel® Xeon® Scalable Processors (formerly Sapphire Rapids), this INT8 decompression feature improves performance compared to pure BF16 inference. PRs: #18915, #19111
- Reduced memory consumption of the ‘compile model’ stage by moving constant folding of Transpose nodes to the CPU Runtime side. PR: #18877
- Set FP16 inference precision by default for non-convolution networks on ARM. Convolution networks will be executed in FP32. PRs: #19069, #19192, #19176
- GPU runtime: Added padding for dynamic convolutions to improve performance for models like Stable Diffusion v2.1. PR: #19001
- Python API:
- TensorFlow FE:
- Added support for the TensorFlow 1 Checkpoint format. All native TensorFlow formats are now enabled.
- Added support for 8 new operations:
- PyTorch FE:
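The INT8 weight decompression noted under "CPU runtime" above rests on a simple idea: store weights as 8-bit integers plus a per-channel scale, then decompress on the fly at inference time. A conceptual NumPy sketch of that scheme (not the actual plugin or NNCF implementation):

```python
import numpy as np

def compress_weights_int8(w: np.ndarray):
    """Per-output-channel symmetric INT8 quantization (conceptual sketch)."""
    # One scale per output channel, mapping the largest magnitude to 127.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard against all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def decompress_weights(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)

q, scale = compress_weights_int8(w)   # 4x smaller storage than FP32
w_hat = decompress_weights(q, scale)  # used transiently at inference

print(np.abs(w - w_hat).max())  # small reconstruction error
```

The storage win (one byte per weight plus a scale per channel) is what reduces memory footprint for LLMs; the runtime trades a cheap multiply for that saving.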
New openvino_notebooks
- 245-typo-detector: English Typo Detection in sentences with OpenVINO™
- 247-code-language-id: Identify the programming language used in an arbitrary code snippet
- 121-convert-to-openvino: Learn the OpenVINO model conversion API
- 244-named-entity-recognition: Named entity recognition with OpenVINO™
- 246-depth-estimation-videpth: Monocular Visual-Inertial Depth Estimation with OpenVINO™
- 248-stable-diffusion-xl: Image generation with Stable Diffusion XL
- 249-oneformer-segmentation: Universal segmentation with OneFormer
Fixed GitHub issues
- Fixed #18978 "Webassembly build fails" with PR #19005
- Fixed #18847 "Debugging OpenVINO Python GIL Error" with PR #18848
- Fixed #18465 "OpenVINO can't be built in an environment that has an 'ambient' oneDNN installation" with PR #18805
Acknowledgements
Thanks for contributions from the OpenVINO developer community: @DmitriyValetov, @kai-waang
Full Changelog: 2023.1.0.dev20230728...2023.1.0.dev20230811
2023.1.0.dev20230728
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.1.0.dev20230728 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- Install it via Conda:
conda install -c "conda-forge/label/openvino_dev" openvino=2023.1.0.dev20230728
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.1.0.dev20230728
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.1.0.dev20230728
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
2023.0.1
This release provides functional bug fixes and capability updates for 2023.0 that enable developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.
Note: This is a standard release intended for developers who prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for two years (one year of bug fixes, and two years of security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.
Major changes:
- POT:
- Fixes errors caused by the default usage of the MMap allocator (enabled in 2023.0). Only Windows is affected.
- OpenVINO Core:
- Fixes an issue with properly handling directories in read_model() on Windows
You can find OpenVINO™ toolkit 2023.0.1 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2023.0.1
- OpenVINO™ Development tools:
pip install openvino-dev==2023.0.1
Release documentation is available here: https://docs.openvino.ai/2023.0/home.html
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html
2023.1.0.dev20230623
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.1.0.dev20230623 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.1.0.dev20230623
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.1.0.dev20230623
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
2022.3.1
Major Features and Improvements Summary
This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year for bug fixes, and two years for security patches). Read Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for more details.
- This 2022.3.1 LTS release provides functional bug fixes and minor capability changes for the previous 2022.3 Long-Term Support (LTS) release, enabling developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.
- Intel® Movidius™ VPU-based products are supported in this release.
You can find OpenVINO™ toolkit 2022.3.1 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2022.3.1
- OpenVINO™ Development tools:
pip install openvino-dev==2022.3.1
Release documentation is available here: https://docs.openvino.ai/2022.3/
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-lts/2022-3.html
2023.0.0
Summary of major features and improvements
- More integrations, minimizing code changes
- Now you can load TensorFlow and TensorFlow Lite models directly in OpenVINO Runtime and OpenVINO Model Server; models are converted automatically. For maximum performance, it is still recommended to convert to the OpenVINO Intermediate Representation (IR) format before loading the model. Additionally, similar functionality has been introduced for PyTorch models as a preview feature: you can convert PyTorch models directly, without needing to convert to ONNX first.
- Support for Python 3.11
- NEW: C++ developers can now install OpenVINO runtime from Conda Forge
- NEW: ARM processors are now supported in CPU plug-in, including dynamic shapes, full processor performance, and broad sample code/notebook coverage. Officially validated for Raspberry Pi 4 and Apple® Mac M1/M2
- Preview: A new Python API has been introduced to allow developers to convert and optimize models directly from Python scripts
- Broader model support and optimizations
- Expanded model support for generative AI: CLIP, BLIP, Stable Diffusion 2.0, text processing models, transformer models (e.g. S-BERT, GPT-J), and others of note, including Detectron2, Paddle Slim, RNN-T, Segment Anything Model (SAM), Whisper, and YOLOv8.
- Initial support for dynamic shapes on GPU: you no longer need to switch to static shapes when leveraging the GPU, which is especially important for NLP models.
- Neural Network Compression Framework (NNCF) is now the main quantization solution. You can use it for both post-training optimization and quantization-aware training. Try it out:
pip install nncf
- Portability and performance
- The CPU plugin now offers thread scheduling on 12th Gen Intel® Core™ processors and newer. You can choose to run inference on E-cores, P-cores, or both, depending on your application's configuration, making it possible to optimize for performance or for power savings as needed.
- NEW: Default Inference Precision – no matter which device you use, OpenVINO defaults to the format that enables its optimal performance, for example FP16 for GPU or BF16 for 4th Generation Intel® Xeon®. You no longer need to convert the model beforehand to a specific IR precision, and you still have the option of running in accuracy mode if needed.
- Model caching on GPU is now improved with more efficient model loading/compiling.
You can find OpenVINO™ toolkit 2023.0 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2023.0.0
- OpenVINO™ Development tools:
pip install openvino-dev==2023.0.0
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html
2023.0.0.dev20230427
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.0.0.dev20230427 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.0.0.dev20230427
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.0.0.dev20230427
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
2023.0.0.dev20230407
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find OpenVINO™ toolkit 2023.0.0.dev20230407 pre-release version here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install --pre openvino
or
pip install openvino==2023.0.0.dev20230407
- OpenVINO™ Development tools:
pip install --pre openvino-dev
or
pip install openvino-dev==2023.0.0.dev20230407
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/