-------------------------------------------------------------------
Thu May 9 22:56:53 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Fix sample source path in build script.
- Update to 2024.1.0
- More Generative AI coverage and framework integrations to
minimize code changes.
* Mixtral and URLNet models optimized for performance
improvements on Intel® Xeon® processors.
* Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models
optimized for improved inference speed on Intel® Core™
Ultra processors with integrated GPU.
* Support for Falcon-7B-Instruct, a ready-to-use GenAI
Large Language Model (LLM) for chat/instruct use cases,
with superior performance metrics.
* New Jupyter Notebooks added: YOLO V9, YOLO V8
Oriented Bounding Boxes Detection (OBB), Stable Diffusion
in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika,
TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with
OpenVINO and LangChain.
- Broader Large Language Model (LLM) support and more model
compression techniques.
* LLM compilation time reduced through additional optimizations
with compressed embedding. Improved 1st token performance of
LLMs on 4th and 5th generations of Intel® Xeon® processors
with Intel® Advanced Matrix Extensions (Intel® AMX).
* Better LLM compression and improved performance with oneDNN,
INT4, and INT8 support for Intel® Arc™ GPUs.
* Significant memory reduction for select smaller GenAI
models on Intel® Core™ Ultra processors with integrated GPU.
- More portability and performance to run AI at the edge,
in the cloud, or locally.
* The preview NPU plugin for Intel® Core™ Ultra processors
is now available in the OpenVINO open-source GitHub
repository, in addition to the main OpenVINO package on
PyPI (see the sketch after this list).
* The JavaScript API is now more easily accessible through
the npm repository, enabling JavaScript developers' seamless
access to the OpenVINO API.
* FP16 inference for Convolutional Neural Networks (CNNs)
is now enabled by default on ARM processors.
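Illustrative sketch (not part of the upstream release notes):
targeting the preview NPU plugin from Python, assuming the
2024.1 package and a hypothetical IR file "model.xml":
  import openvino as ov

  core = ov.Core()
  model = core.read_model("model.xml")  # hypothetical IR file
  # "NPU" selects the preview NPU plugin on Intel Core Ultra
  # systems; compile_model() raises an error if the plugin or
  # driver is not present.
  compiled = core.compile_model(model, "NPU")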
- Support Change and Deprecation Notices
* Using deprecated features and components is not advised. They
are available to enable a smooth transition to new solutions
and will be discontinued in the future. To keep using
discontinued features, you will have to revert to the last
LTS OpenVINO version supporting them.
* For more details, refer to the OpenVINO Legacy Features
and Components page.
* Discontinued in 2024.0:
+ Runtime components:
- Intel® Gaussian & Neural Accelerator (Intel® GNA).
Consider using the Neural Processing Unit (NPU)
for low-powered systems like Intel® Core™ Ultra or
14th generation and beyond.
- OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
transition guide for reference).
- All ONNX Frontend legacy API (known as
ONNX_IMPORTER_API)
- 'PerformanceMode.UNDEFINED' property as part of
the OpenVINO Python API
+ Tools:
- Deployment Manager. See installation and deployment
guides for current distribution options.
- Accuracy Checker.
- Post-Training Optimization Tool (POT). Neural Network
Compression Framework (NNCF) should be used instead.
- A Git patch for NNCF integration with
huggingface/transformers. The recommended approach
is to use huggingface/optimum-intel for applying
NNCF optimization on top of models from Hugging
Face.
- Support for Apache MXNet, Caffe, and Kaldi model
formats. Conversion to ONNX may be used as
a solution.
* Deprecated and to be removed in the future:
+ The OpenVINO™ Development Tools package (pip install
openvino-dev) will be removed from installation options
and distribution channels beginning with OpenVINO 2025.0.
+ Model Optimizer will be discontinued with OpenVINO 2025.0.
Consider using the new conversion methods instead (see the
sketch at the end of this entry). For more details, see the
model conversion transition guide.
+ OpenVINO property Affinity API will be discontinued with
OpenVINO 2025.0. It will be replaced with CPU binding
configurations (ov::hint::enable_cpu_pinning; see the
sketch at the end of this entry).
+ OpenVINO Model Server components:
- “auto shape” and “auto batch size” (reshaping a model
at runtime) will be removed in the future. OpenVINO's
dynamic shape models are recommended instead.
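Illustrative sketch (not part of the upstream release notes)
of the replacement APIs named above, assuming the 2024.x
Python bindings and a hypothetical ONNX file "model.onnx":
  import openvino as ov
  import openvino.properties.hint as hints

  # openvino.convert_model() is the in-process replacement for
  # the discontinued Model Optimizer; "model.onnx" is a
  # hypothetical input (MXNet/Caffe/Kaldi models would first
  # be exported to ONNX, as suggested above).
  model = ov.convert_model("model.onnx")

  # The CPU pinning hint (ov::hint::enable_cpu_pinning in the
  # C++ API) replaces the deprecated Affinity API.
  core = ov.Core()
  compiled = core.compile_model(model, "CPU",
                                {hints.enable_cpu_pinning: True})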
-------------------------------------------------------------------
Tue Apr 23 18:57:17 UTC 2024 - Atri Bhattacharya <badshah400@gmail.com>
- License update: play it safe and list all third-party
licenses as part of the License tag.
-------------------------------------------------------------------
Tue Apr 23 12:42:32 UTC 2024 - Atri Bhattacharya <badshah400@gmail.com>
- Switch to _service file as tagged Source tarball does not
include `./thirdparty` submodules.
- Update openvino-fix-install-paths.patch to fix python module
install path.
- Enable python module and split it out into a python subpackage
(for now default python3 only).
- Explicitly build python metadata (dist-info) and install it
(needs simple sed hackery to support "officially" unsupported
platform ppc64le).
- Specify ENABLE_JS=OFF to turn off JavaScript bindings, as
building these requires downloading npm packages from the
network.
- Build with system pybind11.
- Bump _constraints for updated disk space requirements.
- Drop empty %check section; rpmlint was misleading when it
recommended adding this.
-------------------------------------------------------------------
Fri Apr 19 08:08:02 UTC 2024 - Atri Bhattacharya <badshah400@gmail.com>
- Numerous specfile cleanups:
* Drop redundant `mv` commands and use `install` where
appropriate.
* Build with system protobuf.
* Fix Summary tags.
* Trim package descriptions.
* Drop forcing CMAKE_BUILD_TYPE=Release; let the macro
default (RelWithDebInfo) be used instead.
* Correct naming of shared library packages.
* Separate out libopenvino_c.so.* into own shared lib package.
* Drop rpmlintrc rule used to hide shlib naming mistakes.
* Rename Source tarball to %{name}-%{version}.EXT pattern.
* Use ldconfig_scriptlet macro for post(un).
- Add openvino-onnx-ml-defines.patch -- Define ONNX_ML at compile
time when using system onnx to allow using 'onnx-ml.pb.h'
instead of 'onnx.pb.h', the latter not being shipped with
openSUSE's onnx-devel package (gh#onnx/onnx#3074).
- Add openvino-fix-install-paths.patch: Change hard-coded install
paths in upstream cmake macro to standard Linux dirs.
- Add openvino-ComputeLibrary-include-string.patch: Include header
for std::string.
- Add external devel packages as Requires for openvino-devel.
- Pass -Wl,-z,noexecstack to %build_ldflags to avoid an exec
stack issue with the Intel CPU plugin.
- Use ninja for build.
- Adapt _constraints file for correct disk space and memory
requirements.
- Add empty %check section.
-------------------------------------------------------------------
Mon Apr 15 03:18:33 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Initial package
- Version 2024.0.0
- Add openvino-rpmlintrc.