Accepting request 1173003 from home:cabelo:branches:science:machinelearning

- Fix sample source path in build script.
- Update to 2024.1.0
- More Generative AI coverage and framework integrations to
  minimize code changes.
  * Mixtral and URLNet models optimized for performance 
    improvements on Intel® Xeon® processors.
  * Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models 
    optimized for improved inference speed on Intel® Core™
    Ultra processors with integrated GPU.
  * Support for Falcon-7B-Instruct, a GenAI Large Language Model
    (LLM) ready-to-use chat/instruct model with superior
    performance metrics.
  * New Jupyter Notebooks added: YOLO V9, YOLO V8
    Oriented Bounding Boxes Detection (OBB), Stable Diffusion
    in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika, 
    TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with 
    OpenVINO and LangChain.
- Broader Large Language Model (LLM) support and more model
  compression techniques.
  * LLM compilation time reduced through additional optimizations
    with compressed embedding. Improved 1st token performance of
    LLMs on 4th and 5th generations of Intel® Xeon® processors 
    with Intel® Advanced Matrix Extensions (Intel® AMX).
  * Better LLM compression and improved performance with oneDNN,
    INT4, and INT8 support for Intel® Arc™ GPUs.
  * Significant memory reduction for select smaller GenAI
    models on Intel® Core™ Ultra processors with integrated GPU.
- More portability and performance to run AI at the edge, 
  in the cloud, or locally.
  * The preview NPU plugin for Intel® Core™ Ultra processors
    is now available in the OpenVINO open-source GitHub 
    repository, in addition to the main OpenVINO package on PyPI.
  * The JavaScript API is now more easily accessible through
    the npm repository, enabling JavaScript developers’ seamless 
    access to the OpenVINO API.
  * FP16 inference is now enabled by default for Convolutional
    Neural Networks (CNNs) on ARM processors.
- Support Change and Deprecation Notices
  * Using deprecated features and components is not advised. They
    remain available to enable a smooth transition to new solutions
    and will be discontinued in the future. To keep using
    discontinued features, you will have to revert to the last
    LTS OpenVINO version that supports them.
  * For more details, refer to the OpenVINO Legacy Features 
    and Components page.
  * Discontinued in 2024.0:
    + Runtime components:
      - Intel® Gaussian & Neural Accelerator (Intel® GNA).
        Consider using the Neural Processing Unit (NPU) 
        for low-powered systems like Intel® Core™ Ultra or
        14th generation and beyond.
      - OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API 
        transition guide for reference).
      - All ONNX Frontend legacy API (known as
        ONNX_IMPORTER_API).
      - 'PerformanceMode.UNDEFINED' property as part of
        the OpenVINO Python API.
    + Tools:
      - Deployment Manager. See installation and deployment
        guides for current distribution options.
      - Accuracy Checker.
      - Post-Training Optimization Tool (POT). Neural Network
        Compression Framework (NNCF) should be used instead
        (see the quantization sketch below).
      - A Git patch for NNCF integration with
        huggingface/transformers. The recommended approach
        is to use huggingface/optimum-intel for applying
        NNCF optimization on top of models from Hugging
        Face (see the optimum-intel sketch below).
      - Support for Apache MXNet, Caffe, and Kaldi model
        formats. Conversion to ONNX may be used as
        a solution (see the conversion sketch below).
  * Deprecated and to be removed in the future:
    + The OpenVINO™ Development Tools package (pip install
      openvino-dev) will be removed from installation options
      and distribution channels beginning with OpenVINO 2025.0.
    + Model Optimizer will be discontinued with OpenVINO 2025.0.
      Consider using the new conversion methods instead (see the
      conversion sketch below). For more details, see the model
      conversion transition guide.
    + OpenVINO property Affinity API will be discontinued with
      OpenVINO 2025.0. It will be replaced with CPU binding
      configurations (ov::hint::enable_cpu_pinning; see the
      pinning sketch below).
    + OpenVINO Model Server components:
      - “auto shape” and “auto batch size” (reshaping a model
        at runtime) will be removed in the future. OpenVINO’s
        dynamic shape models are recommended instead (see the
        reshape sketch below).

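The POT-to-NNCF migration noted above boils down to nncf.quantize(). A
minimal post-training INT8 quantization sketch, assuming a hypothetical
model.xml and random data standing in for a real calibration set:

    import numpy as np
    import nncf
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # hypothetical IR path

    # Placeholder calibration data; a real set uses ~300 representative
    # samples shaped like the model input (shape assumed here).
    samples = [np.random.rand(1, 3, 224, 224).astype(np.float32)
               for _ in range(300)]
    calibration = nncf.Dataset(samples)

    quantized = nncf.quantize(model, calibration)  # INT8 PTQ
    ov.save_model(quantized, "model_int8.xml")
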
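For the huggingface/optimum-intel route that replaces the NNCF Git patch,
a hedged sketch; the model id is an arbitrary example, and load_in_8bit
delegates weight compression to NNCF during export:

    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "gpt2"  # arbitrary example checkpoint
    # export=True converts the checkpoint to OpenVINO IR on the fly;
    # load_in_8bit=True applies NNCF INT8 weight compression.
    model = OVModelForCausalLM.from_pretrained(
        model_id, export=True, load_in_8bit=True)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    inputs = tokenizer("Hello, OpenVINO!", return_tensors="pt")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
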
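The conversion path replacing Model Optimizer (and the suggested ONNX
detour for MXNet/Caffe/Kaldi models) is openvino.convert_model; a sketch
with a hypothetical ONNX file exported from a legacy framework:

    import openvino as ov

    ov_model = ov.convert_model("model.onnx")  # hypothetical ONNX export
    ov.save_model(ov_model, "model.xml")       # writes IR (.xml + .bin)
    compiled = ov.Core().compile_model(ov_model, "CPU")
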
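The CPU binding configuration that supersedes the Affinity API is exposed
through the hint properties in the Python API as well; a sketch, assuming
a hypothetical IR file:

    import openvino as ov
    import openvino.properties.hint as hints

    core = ov.Core()
    model = core.read_model("model.xml")  # hypothetical IR path
    # The pinning hint replaces the deprecated Affinity property.
    compiled = core.compile_model(model, "CPU",
                                  {hints.enable_cpu_pinning: True})
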
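In place of the Model Server "auto shape"/"auto batch size" runtime
reshaping, a model can be made dynamic once before compilation; -1 marks
a dynamic dimension. Shapes below are hypothetical:

    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # hypothetical IR path
    # Dynamic batch: any batch size is accepted without runtime reshape.
    model.reshape(ov.PartialShape([-1, 3, 224, 224]))
    compiled = core.compile_model(model, "CPU")
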
OBS-URL: https://build.opensuse.org/request/show/1173003
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/openvino?expand=0&rev=5
Guillaume GARDET 2024-05-13 17:52:35 +00:00 committed by Git OBS Bridge
parent 61dbbb54ad
commit cd00b14665
9 changed files with 194 additions and 18 deletions


@@ -1,9 +1,9 @@
 <services>
-  <service name="obs_scm" mode="manual">
+  <service name="obs_scm">
     <param name="url">https://github.com/openvinotoolkit/openvino.git</param>
     <param name="scm">git</param>
-    <param name="revision">2024.0.0</param>
-    <param name="version">2024.0.0</param>
+    <param name="revision">2024.1.0</param>
+    <param name="version">2024.1.0</param>
     <param name="submodules">enable</param>
     <param name="filename">openvino</param>
     <param name="exclude">.git</param>


@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a62674d7f3b6ddf8a7e32a8344922b9968101d01963e037ebf2142bc48cfb9f
+size 865282063


@@ -0,0 +1,4 @@
+name: openvino
+version: 2024.1.0
+mtime: 1713778234
+commit: f4afc983258bcb2592d999ed6700043fdb58ad78


@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:df436c6a42a84424f4a3c2249298d40b89a6046568ea7b38088f1d7bde3b011e
-size 825923087


@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:556da89cbc03dd30dad270a1c1796598932c488a696a87056399e74a4d688680
+size 865282063


@@ -0,0 +1,12 @@
+diff -uNr openvino.orig/samples/cpp/build_samples.sh openvino/samples/cpp/build_samples.sh
+--- openvino.orig/samples/cpp/build_samples.sh 2024-04-25 01:04:42.451868881 -0300
++++ openvino/samples/cpp/build_samples.sh 2024-04-25 01:05:04.678342617 -0300
+@@ -59,7 +59,7 @@
+ printf "\nSetting environment variables for building samples...\n"
+ if [ -z "$INTEL_OPENVINO_DIR" ]; then
+-    if [[ "$SAMPLES_SOURCE_DIR" = "/usr/share/openvino"* ]]; then
++    if [[ "$SAMPLES_SOURCE_DIR" = "/usr/share/OpenVINO"* ]]; then
+ true
+ elif [ -e "$SAMPLES_SOURCE_DIR/../../setupvars.sh" ]; then
+ setupvars_path="$SAMPLES_SOURCE_DIR/../../setupvars.sh"


@@ -1,3 +1,92 @@
+-------------------------------------------------------------------
+Thu May 9 22:56:53 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
+
+- Fix sample source path in build script.
+- Update to 2024.1.0
+- More Generative AI coverage and framework integrations to
+  minimize code changes.
+  * Mixtral and URLNet models optimized for performance
+    improvements on Intel® Xeon® processors.
+  * Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models
+    optimized for improved inference speed on Intel® Core™
+    Ultra processors with integrated GPU.
+  * Support for Falcon-7B-Instruct, a GenAI Large Language Model
+    (LLM) ready-to-use chat/instruct model with superior
+    performance metrics.
+  * New Jupyter Notebooks added: YOLO V9, YOLO V8
+    Oriented Bounding Boxes Detection (OBB), Stable Diffusion
+    in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika,
+    TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with
+    OpenVINO and LangChain.
+- Broader Large Language Model (LLM) support and more model
+  compression techniques.
+  * LLM compilation time reduced through additional optimizations
+    with compressed embedding. Improved 1st token performance of
+    LLMs on 4th and 5th generations of Intel® Xeon® processors
+    with Intel® Advanced Matrix Extensions (Intel® AMX).
+  * Better LLM compression and improved performance with oneDNN,
+    INT4, and INT8 support for Intel® Arc™ GPUs.
+  * Significant memory reduction for select smaller GenAI
+    models on Intel® Core™ Ultra processors with integrated GPU.
+- More portability and performance to run AI at the edge,
+  in the cloud, or locally.
+  * The preview NPU plugin for Intel® Core™ Ultra processors
+    is now available in the OpenVINO open-source GitHub
+    repository, in addition to the main OpenVINO package on PyPI.
+  * The JavaScript API is now more easily accessible through
+    the npm repository, enabling JavaScript developers’ seamless
+    access to the OpenVINO API.
+  * FP16 inference is now enabled by default for Convolutional
+    Neural Networks (CNNs) on ARM processors.
+- Support Change and Deprecation Notices
+  * Using deprecated features and components is not advised. They
+    remain available to enable a smooth transition to new solutions
+    and will be discontinued in the future. To keep using
+    discontinued features, you will have to revert to the last
+    LTS OpenVINO version that supports them.
+  * For more details, refer to the OpenVINO Legacy Features
+    and Components page.
+  * Discontinued in 2024.0:
+    + Runtime components:
+      - Intel® Gaussian & Neural Accelerator (Intel® GNA).
+        Consider using the Neural Processing Unit (NPU)
+        for low-powered systems like Intel® Core™ Ultra or
+        14th generation and beyond.
+      - OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
+        transition guide for reference).
+      - All ONNX Frontend legacy API (known as
+        ONNX_IMPORTER_API).
+      - 'PerformanceMode.UNDEFINED' property as part of
+        the OpenVINO Python API.
+    + Tools:
+      - Deployment Manager. See installation and deployment
+        guides for current distribution options.
+      - Accuracy Checker.
+      - Post-Training Optimization Tool (POT). Neural Network
+        Compression Framework (NNCF) should be used instead.
+      - A Git patch for NNCF integration with
+        huggingface/transformers. The recommended approach
+        is to use huggingface/optimum-intel for applying
+        NNCF optimization on top of models from Hugging
+        Face.
+      - Support for Apache MXNet, Caffe, and Kaldi model
+        formats. Conversion to ONNX may be used as
+        a solution.
+  * Deprecated and to be removed in the future:
+    + The OpenVINO™ Development Tools package (pip install
+      openvino-dev) will be removed from installation options
+      and distribution channels beginning with OpenVINO 2025.0.
+    + Model Optimizer will be discontinued with OpenVINO 2025.0.
+      Consider using the new conversion methods instead. For
+      more details, see the model conversion transition guide.
+    + OpenVINO property Affinity API will be discontinued with
+      OpenVINO 2025.0. It will be replaced with CPU binding
+      configurations (ov::hint::enable_cpu_pinning).
+    + OpenVINO Model Server components:
+      - “auto shape” and “auto batch size” (reshaping a model
+        at runtime) will be removed in the future. OpenVINO’s
+        dynamic shape models are recommended instead.
+
 -------------------------------------------------------------------
 Tue Apr 23 18:57:17 UTC 2024 - Atri Bhattacharya <badshah400@gmail.com>


@@ -1,4 +1,4 @@
 name: openvino
-version: 2024.0.0
-mtime: 1708605048
-commit: 34caeefd07800b59065345d651949efbe8ab6649
+version: 2024.1.0
+mtime: 1713778234
+commit: f4afc983258bcb2592d999ed6700043fdb58ad78


@@ -21,12 +21,12 @@
 # Compilation takes ~1 hr on OBS for a single python, don't try all supported flavours
 %define pythons python3
 %define __builder ninja
-%define so_ver 2400
+%define so_ver 2410
 %define shlib lib%{name}%{so_ver}
 %define shlib_c lib%{name}_c%{so_ver}
 %define prj_name OpenVINO
 Name: openvino
-Version: 2024.0.0
+Version: 2024.1.0
 Release: 0
 Summary: A toolkit for optimizing and deploying AI inference
 # Let's be safe and put all third party licenses here, no matter that we use specific thirdparty libs or not
@@ -40,6 +40,8 @@ Patch0: openvino-onnx-ml-defines.patch
 Patch2: openvino-fix-install-paths.patch
 # PATCH-FIX-UPSTREAM openvino-ComputeLibrary-include-string.patch badshah400@gmail.com -- Include header for std::string
 Patch3: openvino-ComputeLibrary-include-string.patch
+# PATCH-FIX-UPSTREAM openvino-fix-build-sample-path.patch cabelo@opensuse.org -- Fix sample source path in build script
+Patch4: openvino-fix-build-sample-path.patch
 BuildRequires: ade-devel
 BuildRequires: cmake
 BuildRequires: fdupes
@@ -51,6 +53,12 @@ BuildRequires: opencl-cpp-headers
 # headers. Please regenerate this file with a newer version of protoc.
 #BuildRequires: cmake(ONNX)
 BuildRequires: pkgconfig
+BuildRequires: %{python_module devel}
+BuildRequires: %{python_module pip}
+BuildRequires: %{python_module pybind11-devel}
+BuildRequires: %{python_module setuptools}
+BuildRequires: %{python_module wheel}
+BuildRequires: python-rpm-macros
 BuildRequires: zstd
 BuildRequires: pkgconfig(OpenCL-Headers)
 BuildRequires: pkgconfig(flatbuffers)
@@ -62,12 +70,6 @@ BuildRequires: pkgconfig(pugixml)
 BuildRequires: pkgconfig(snappy)
 BuildRequires: pkgconfig(tbb)
 BuildRequires: pkgconfig(zlib)
-BuildRequires: python-rpm-macros
-BuildRequires: %{python_module devel}
-BuildRequires: %{python_module pip}
-BuildRequires: %{python_module pybind11-devel}
-BuildRequires: %{python_module setuptools}
-BuildRequires: %{python_module wheel}
 %ifarch %{arm64}
 BuildRequires: scons
 %endif
@@ -79,8 +81,11 @@ ExcludeArch: %{ix86} %{arm32} ppc
%description
OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
## Main shared libs and devel pkg ##
#
%package -n %{shlib}
Summary: Shared library for OpenVINO toolkit
@@ -89,14 +94,20 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the shared library for OpenVINO.
#
%package -n %{shlib_c}
Summary: Shared C library for OpenVINO toolkit
%description -n %{shlib_c}
This package provides the C library for OpenVINO.
#
%package -n %{name}-devel
Summary: Headers and sources for OpenVINO toolkit
Requires: %{shlib_c} = %{version}
@@ -127,8 +138,11 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the headers and sources for developing applications with
OpenVINO.
## Plugins ##
#
%package -n %{name}-arm-cpu-plugin
Summary: Intel CPU plugin for OpenVINO toolkit
@@ -137,7 +151,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the ARM CPU plugin for OpenVINO on %{arm64} archs.
#
%package -n %{name}-auto-plugin
Summary: Auto / Multi software plugin for OpenVINO toolkit
@@ -146,7 +163,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the Auto / Multi software plugin for OpenVINO.
#
%package -n %{name}-auto-batch-plugin
Summary: Automatic batch software plugin for OpenVINO toolkit
@@ -155,7 +175,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the automatic batch software plugin for OpenVINO.
#
%package -n %{name}-hetero-plugin
Summary: Hetero frontend for Intel OpenVINO toolkit
@@ -164,7 +187,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the hetero frontend for OpenVINO.
#
%package -n %{name}-intel-cpu-plugin
Summary: Intel CPU plugin for OpenVINO toolkit
@@ -173,8 +199,23 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the intel CPU plugin for OpenVINO for %{x86_64} archs.
#
%package -n %{name}-intel-npu-plugin
Summary: Intel NPU plugin for OpenVINO toolkit
%description -n %{name}-intel-npu-plugin
OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the intel NPU plugin for OpenVINO for %{x86_64} archs.
## Frontend shared libs ##
#
%package -n lib%{name}_ir_frontend%{so_ver}
Summary: Paddle frontend for Intel OpenVINO toolkit
@@ -183,7 +224,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the ir frontend for OpenVINO.
#
%package -n lib%{name}_onnx_frontend%{so_ver}
Summary: Onnx frontend for OpenVINO toolkit
@@ -192,7 +236,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the onnx frontend for OpenVINO.
#
%package -n lib%{name}_paddle_frontend%{so_ver}
Summary: Paddle frontend for Intel OpenVINO toolkit
@@ -201,7 +248,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the paddle frontend for OpenVINO.
#
%package -n lib%{name}_pytorch_frontend%{so_ver}
Summary: PyTorch frontend for OpenVINO toolkit
@@ -210,7 +260,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the pytorch frontend for OpenVINO.
#
%package -n lib%{name}_tensorflow_frontend%{so_ver}
Summary: TensorFlow frontend for OpenVINO toolkit
@@ -219,7 +272,10 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the tensorflow frontend for OpenVINO.
#
%package -n lib%{name}_tensorflow_lite_frontend%{so_ver}
Summary: TensorFlow Lite frontend for OpenVINO toolkit
@@ -228,8 +284,11 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides the tensorflow-lite frontend for OpenVINO.
## Python module ##
#
%package -n python-openvino
Summary: Python module for openVINO toolkit
Requires: python-numpy < 2
@@ -241,8 +300,11 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides a Python module for interfacing with openVINO toolkit.
## Samples/examples ##
#
%package -n %{name}-sample
Summary: Samples for use with OpenVINO toolkit
BuildArch: noarch
@@ -251,8 +313,10 @@ BuildArch: noarch
OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
This package provides some samples for use with openVINO.
#
#
%prep
%autosetup -p1
@@ -352,6 +416,10 @@ rm -fr %{buildroot}%{_datadir}/licenses/*
%files -n %{name}-intel-cpu-plugin
%dir %{_libdir}/%{prj_name}
%{_libdir}/%{prj_name}/libopenvino_intel_cpu_plugin.so
%files -n %{name}-intel-npu-plugin
%dir %{_libdir}/%{prj_name}
%{_libdir}/%{prj_name}/libopenvino_intel_npu_plugin.so
%endif
%ifarch %{arm64}