Accepting request 1208543 from science:machinelearning

OBS-URL: https://build.opensuse.org/request/show/1208543
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/openvino?expand=0&rev=6
Commit 0a7c9bb146 by Ana Guerrero, 2024-10-17 16:39:32 +00:00, committed by Git OBS Bridge
6 changed files with 154 additions and 79 deletions


@@ -2,8 +2,8 @@
 <service name="obs_scm" mode="manual">
 <param name="url">https://github.com/openvinotoolkit/openvino.git</param>
 <param name="scm">git</param>
-<param name="revision">2024.3.0</param>
-<param name="version">2024.3.0</param>
+<param name="revision">2024.4.0</param>
+<param name="version">2024.4.0</param>
 <param name="submodules">enable</param>
 <param name="filename">openvino</param>
 <param name="exclude">.git</param>


@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bacc2b9540afda6c5bd6d17ddea35afe17caefdd4fa1a350ed1c8be2eb290981
-size 1055294991


@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fde6d7a29c8284b72866b02b37f6eaff9143f4a3b05f48a098d4965cc53c9248
+size 1102958095


@@ -1,3 +1,117 @@
-------------------------------------------------------------------
Tue Oct 15 00:56:54 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Temporarily use gcc-13 in Tumbleweed/Factory/Slowroll:
  the source code of the level-zero library and the NPU module
  is incompatible with gcc-14. I am working with Intel on tests
  to return to the native gcc.
- Update to 2024.4.0
- Summary of major features and improvements
* More Gen AI coverage and framework integrations to minimize
code changes
+ Support for GLM-4-9B Chat, MiniCPM-1B, Llama 3 and 3.1,
Phi-3-Mini, Phi-3-Medium and YOLOX-s models.
+ Noteworthy notebooks added: Florence-2, NuExtract-tiny
Structure Extraction, Flux.1 Image Generation, PixArt-α:
Photorealistic Text-to-Image Synthesis, and Phi-3-Vision
Visual Language Assistant.
* Broader Large Language Model (LLM) support and more model
compression techniques.
+ OpenVINO™ runtime optimized for Intel® Xe Matrix Extensions
(Intel® XMX) systolic arrays on built-in GPUs for efficient
matrix multiplication resulting in significant LLM
performance boost with improved 1st and 2nd token
latency, as well as a smaller memory footprint on
Intel® Core™ Ultra Processors (Series 2).
+ Memory sharing enabled for NPUs on Intel® Core™ Ultra
Processors (Series 2) for efficient pipeline integration
without memory copy overhead.
+ Addition of the PagedAttention feature for discrete GPUs*
enables a significant boost in throughput for parallel
inferencing when serving LLMs on Intel® Arc™ Graphics
or Intel® Data Center GPU Flex Series.
* More portability and performance to run AI at the edge,
in the cloud, or locally.
+ OpenVINO™ Model Server now comes with production-quality
support for the OpenAI-compatible API, which enables
significantly higher throughput for parallel inferencing
on Intel® Xeon® processors when serving LLMs to many
concurrent users.
+ Improved performance and memory consumption with prefix
caching, KV cache compression, and other optimizations
for serving LLMs using OpenVINO™ Model Server.
+ Support for Python 3.12.
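The OpenAI-compatible Model Server endpoint mentioned above is plain HTTP; a minimal client sketch, assuming an OpenVINO Model Server instance at localhost:8000 serving a model named "llm" (the address, port, `/v3` path prefix, and model name are assumptions, not taken from this changelog):

```python
import json
import urllib.request

def build_chat_request(prompt, model="llm"):
    # OpenAI-style chat payload; the model name is a placeholder.
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, base_url="http://localhost:8000/v3"):
    # POST to the (assumed) OpenAI-compatible endpoint of the model server.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Running `chat("Hello")` presumes a live server; `build_chat_request` alone needs nothing installed.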
- Support Change and Deprecation Notices
* Using deprecated features and components is not advised.
They are available to enable a smooth transition to new
solutions and will be discontinued in the future.
To keep using discontinued features, you will have to
revert to the last LTS OpenVINO version supporting them.
For more details, refer to the OpenVINO Legacy Features
and Components page.
* Discontinued in 2024.0:
+ Runtime components:
- Intel® Gaussian & Neural Accelerator (Intel® GNA).
Consider using the Neural Processing Unit (NPU) for
low-powered systems like Intel® Core™ Ultra or
14th generation and beyond.
- OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
transition guide for reference).
- All ONNX Frontend legacy API (known as
  ONNX_IMPORTER_API).
- 'PerformanceMode.UNDEFINED' property as part of the
  OpenVINO Python API.
+ Tools:
- Deployment Manager. See installation and deployment
guides for current distribution options.
- Accuracy Checker.
- Post-Training Optimization Tool (POT). Neural Network
  Compression Framework (NNCF) should be used instead.
- A Git patch for NNCF integration with huggingface/
  transformers. The recommended approach is to use
  huggingface/optimum-intel for applying NNCF
  optimization on top of models from Hugging Face.
- Support for Apache MXNet, Caffe, and Kaldi model
formats. Conversion to ONNX may be used as a
solution.
* Deprecated and to be removed in the future:
+ The macOS x86_64 debug bins will no longer be
provided with the OpenVINO toolkit, starting with
OpenVINO 2024.5.
+ Python 3.8 is now considered deprecated, and it will not
be available beyond the 2024.4 OpenVINO version.
+ dKMB support is now considered deprecated and will be
  fully removed with OpenVINO 2024.5.
+ Intel® Streaming SIMD Extensions (Intel® SSE) will be
supported in source code form, but not enabled in the
binary package by default, starting with OpenVINO 2025.0
+ The openvino-nightly PyPI module will soon be discontinued.
End-users should proceed with the Simple PyPI nightly repo
instead. More information in Release Policy.
+ The OpenVINO™ Development Tools package (pip install
openvino-dev) will be removed from installation options and
distribution channels beginning with OpenVINO 2025.0.
+ Model Optimizer will be discontinued with OpenVINO 2025.0.
Consider using the new conversion methods instead. For more
details, see the model conversion transition guide.
+ OpenVINO property Affinity API will be discontinued with
OpenVINO 2025.0. It will be replaced with CPU binding
configurations (ov::hint::enable_cpu_pinning).
+ OpenVINO Model Server components:
- “auto shape” and “auto batch size” (reshaping a model in
runtime) will be removed in the future. OpenVINO's dynamic
shape models are recommended instead.
+ A number of notebooks have been deprecated. For an
up-to-date listing of available notebooks, refer to the
OpenVINO™ Notebook index (openvinotoolkit.github.io).
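Two of the deprecations above have direct replacements in the current Python API; a minimal sketch, assuming the `openvino` 2024.x Python package is installed (the model path is hypothetical):

```python
def compile_pinned(model_path):
    # Deferred imports: require the `openvino` package at call time.
    import openvino as ov
    import openvino.properties.hint as hints

    # Model Optimizer replacement: convert the model in memory
    # with ov.convert_model instead of the deprecated `mo` tool.
    model = ov.convert_model(model_path)

    core = ov.Core()
    # Affinity API replacement: request CPU pinning through the
    # ov::hint::enable_cpu_pinning property.
    return core.compile_model(model, "CPU", {hints.enable_cpu_pinning: True})
```

Calling `compile_pinned("model.onnx")` presumes a local model file and an installed OpenVINO runtime.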
-------------------------------------------------------------------
Wed Oct 2 20:56:59 UTC 2024 - Giacomo Comes <gcomes.obs@gmail.com>
- Add Leap15 build
- Remove comment lines in the spec file that cause the insertion
of extra lines during a commit
-------------------------------------------------------------------
Sat Aug 10 01:41:06 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>


@@ -1,4 +1,4 @@
 name: openvino
-version: 2024.3.0
-mtime: 1721394417
-commit: 1e3b88e4e3f89774923e04e845428579f8ffa0fe
+version: 2024.4.0
+mtime: 1725541792
+commit: c3152d32c9c7df71397e5a3aba1d935c49eec598


@@ -17,17 +17,27 @@
 #
-# Note: Will not build on Leap:15.X on account of too old TBB
+%if 0%{?suse_version} < 1600
+%define isLeap15 %nil
+%else
+%undefine isLeap15
+%endif
 # Compilation takes ~1 hr on OBS for a single python, don't try all supported flavours
+%if %{defined isLeap15}
+%define x86_64 x86_64
+%define pythons python311
+%else
 %define pythons python3
+%endif
 %define __builder ninja
-%define so_ver 2430
+%define so_ver 2440
 %define shlib lib%{name}%{so_ver}
 %define shlib_c lib%{name}_c%{so_ver}
 %define prj_name OpenVINO
 Name: openvino
-Version: 2024.3.0
+Version: 2024.4.0
 Release: 0
 Summary: A toolkit for optimizing and deploying AI inference
 # Let's be safe and put all third party licenses here, no matter that we use specific thirdparty libs or not
@@ -49,7 +59,7 @@ Patch5: openvino-remove-npu-compile-tool.patch
 BuildRequires: ade-devel
 BuildRequires: cmake
 BuildRequires: fdupes
-BuildRequires: gcc-c++
+BuildRequires: gcc13-c++
 BuildRequires: ninja
 BuildRequires: opencl-cpp-headers
 # FIXME: /usr/include/onnx/onnx-ml.pb.h:17:2: error: This file was generated by
@@ -64,15 +74,21 @@ BuildRequires: %{python_module setuptools}
 BuildRequires: %{python_module wheel}
 BuildRequires: python-rpm-macros
 BuildRequires: zstd
-BuildRequires: pkgconfig(OpenCL-Headers)
 BuildRequires: pkgconfig(flatbuffers)
 BuildRequires: pkgconfig(libva)
 BuildRequires: pkgconfig(nlohmann_json)
 BuildRequires: pkgconfig(ocl-icd)
 BuildRequires: pkgconfig(protobuf)
 BuildRequires: pkgconfig(pugixml)
+%if %{defined isLeap15}
+BuildRequires: opencl-headers
+BuildRequires: snappy-devel
+BuildRequires: tbb-devel
+%else
+BuildRequires: pkgconfig(OpenCL-Headers)
 BuildRequires: pkgconfig(snappy)
 BuildRequires: pkgconfig(tbb)
+%endif
 BuildRequires: pkgconfig(zlib)
 %ifarch %{arm64}
 BuildRequires: scons
@@ -85,10 +101,6 @@ ExcludeArch: %{ix86} %{arm32} ppc
 %description
 OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
-## Main shared libs and devel pkg ##
-#
 %package -n %{shlib}
 Summary: Shared library for OpenVINO toolkit
@@ -97,18 +109,12 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the shared library for OpenVINO.
-#
 %package -n %{shlib_c}
 Summary: Shared C library for OpenVINO toolkit
 %description -n %{shlib_c}
 This package provides the C library for OpenVINO.
-#
 %package -n %{name}-devel
 Summary: Headers and sources for OpenVINO toolkit
 Requires: %{shlib_c} = %{version}
@@ -119,15 +125,21 @@ Requires: lib%{name}_paddle_frontend%{so_ver} = %{version}
 Requires: lib%{name}_pytorch_frontend%{so_ver} = %{version}
 Requires: lib%{name}_tensorflow_frontend%{so_ver} = %{version}
 Requires: lib%{name}_tensorflow_lite_frontend%{so_ver} = %{version}
-Requires: pkgconfig(OpenCL-Headers)
 Requires: pkgconfig(flatbuffers)
 Requires: pkgconfig(libva)
 Requires: pkgconfig(nlohmann_json)
 Requires: pkgconfig(ocl-icd)
 Requires: pkgconfig(protobuf)
 Requires: pkgconfig(pugixml)
+%if %{defined isLeap15}
+Requires: opencl-headers
+Requires: snappy-devel
+Requires: tbb-devel
+%else
+Requires: pkgconfig(OpenCL-Headers)
 Requires: pkgconfig(snappy)
 Requires: pkgconfig(tbb)
+%endif
 Recommends: %{name}-auto-batch-plugin = %{version}
 Recommends: %{name}-auto-plugin = %{version}
 Recommends: %{name}-hetero-plugin = %{version}
@@ -142,10 +154,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the headers and sources for developing applications with
 OpenVINO.
-## Plugins ##
-#
 %package -n %{name}-arm-cpu-plugin
 Summary: Intel CPU plugin for OpenVINO toolkit
@@ -154,9 +162,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the ARM CPU plugin for OpenVINO on %{arm64} archs.
-#
 %package -n %{name}-riscv-cpu-plugin
 Summary: RISC-V CPU plugin for OpenVINO toolkit
@@ -165,9 +170,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the RISC-V CPU plugin for OpenVINO on riscv64 archs.
-#
 %package -n %{name}-auto-plugin
 Summary: Auto / Multi software plugin for OpenVINO toolkit
@@ -176,9 +178,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the Auto / Multi software plugin for OpenVINO.
-#
 %package -n %{name}-auto-batch-plugin
 Summary: Automatic batch software plugin for OpenVINO toolkit
@@ -187,9 +186,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the automatic batch software plugin for OpenVINO.
-#
 %package -n %{name}-hetero-plugin
 Summary: Hetero frontend for Intel OpenVINO toolkit
@@ -198,9 +194,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the hetero frontend for OpenVINO.
-#
 %package -n %{name}-intel-cpu-plugin
 Summary: Intel CPU plugin for OpenVINO toolkit
@@ -209,9 +202,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the intel CPU plugin for OpenVINO for %{x86_64} archs.
-#
 %package -n %{name}-intel-npu-plugin
 Summary: Intel NPU plugin for OpenVINO toolkit
@@ -220,11 +210,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the intel NPU plugin for OpenVINO for %{x86_64} archs.
-## Frontend shared libs ##
-#
 %package -n lib%{name}_ir_frontend%{so_ver}
 Summary: Paddle frontend for Intel OpenVINO toolkit
@@ -233,9 +218,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the ir frontend for OpenVINO.
-#
 %package -n lib%{name}_onnx_frontend%{so_ver}
 Summary: Onnx frontend for OpenVINO toolkit
@@ -244,9 +226,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the onnx frontend for OpenVINO.
-#
 %package -n lib%{name}_paddle_frontend%{so_ver}
 Summary: Paddle frontend for Intel OpenVINO toolkit
@@ -255,9 +234,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the paddle frontend for OpenVINO.
-#
 %package -n lib%{name}_pytorch_frontend%{so_ver}
 Summary: PyTorch frontend for OpenVINO toolkit
@@ -266,9 +242,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the pytorch frontend for OpenVINO.
-#
 %package -n lib%{name}_tensorflow_frontend%{so_ver}
 Summary: TensorFlow frontend for OpenVINO toolkit
@@ -277,9 +250,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the tensorflow frontend for OpenVINO.
-#
 %package -n lib%{name}_tensorflow_lite_frontend%{so_ver}
 Summary: TensorFlow Lite frontend for OpenVINO toolkit
@@ -288,10 +258,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides the tensorflow-lite frontend for OpenVINO.
-## Python module ##
-#
 %package -n python-openvino
 Summary: Python module for openVINO toolkit
 Requires: python-numpy < 2
@@ -302,10 +268,6 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides a Python module for interfacing with openVINO toolkit.
-## Samples/examples ##
-#
 %package -n %{name}-sample
 Summary: Samples for use with OpenVINO toolkit
 BuildArch: noarch
@@ -315,13 +277,11 @@ OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
 This package provides some samples for use with openVINO.
-#
 %prep
 %autosetup -p1
 %build
+export CC=gcc-13 CXX=g++-13
 # Otherwise intel_cpu plugin declares an executable stack
 %ifarch %{x86_64}
 %define build_ldflags -Wl,-z,noexecstack
@@ -344,6 +304,9 @@ This package provides some samples for use with openVINO.
 -DENABLE_SYSTEM_PUGIXML=ON \
 -DENABLE_SYSTEM_SNAPPY=ON \
 -DENABLE_SYSTEM_TBB=ON \
+%if %{defined isLeap15}
+-DENABLE_TBBBIND_2_5=OFF \
+%endif
 -DONNX_USE_PROTOBUF_SHARED_LIBS=ON \
 -DProtobuf_USE_STATIC_LIBS=OFF \
 %{nil}
@@ -362,8 +325,6 @@ export WHEEL_VERSION=%{version} \
 %install
 %cmake_install
-rm %{buildroot}%{_datadir}/%{prj_name}/samples/cpp/thirdparty/nlohmann_json/.cirrus.yml
 # Hash-bangs in non-exec python sample scripts
 sed -Ei "1{\@/usr/bin/env@d}" \
 %{buildroot}%{_datadir}/%{prj_name}/samples/python/benchmark/bert_benchmark/bert_benchmark.py \