Summary of major features and improvements
- More GenAI coverage and framework integrations to minimize code
changes
* New models supported on CPUs & GPUs: Phi-4,
Mistral-7B-Instruct-v0.3, SD-XL Inpainting 0.1, Stable
Diffusion 3.5 Large Turbo, Phi-4-reasoning, Qwen3, and
Qwen2.5-VL-3B-Instruct. Mistral 7B Instruct v0.3 is also
supported on NPUs.
* Preview: OpenVINO™ GenAI introduces a text-to-speech
pipeline for the SpeechT5 TTS model, while the new RAG
backend offers developers a simplified API that delivers
reduced memory usage and improved performance.
* Preview: OpenVINO™ GenAI offers a GGUF Reader for seamless
integration of llama.cpp based LLMs, with Python and C++
pipelines that load GGUF models, build OpenVINO graphs,
and run GPU inference on-the-fly. Validated for popular models:
DeepSeek-R1-Distill-Qwen (1.5B, 7B), Qwen2.5 Instruct
(1.5B, 3B, 7B) & llama-3.2 Instruct (1B, 3B, 8B).
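As a hedged illustration of the GGUF Reader preview above, a
GGUF file can (per the GenAI Python API) be passed straight to
the LLM pipeline; the model path below is a placeholder and
GGUF acceptance is a preview feature, so treat this as a
sketch rather than a guaranteed interface:

```python
# Sketch: loading a llama.cpp GGUF model with the OpenVINO GenAI
# LLM pipeline. Assumes the openvino-genai package is installed
# and a validated GGUF file is available locally.
import openvino_genai

pipe = openvino_genai.LLMPipeline(
    "DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf",  # placeholder path
    "GPU",  # the OpenVINO graph is built and run on GPU on-the-fly
)
print(pipe.generate("What is OpenVINO?", max_new_tokens=64))
```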
- Broader LLM model support and more model compression
techniques
* Further optimization of LoRA adapters in OpenVINO GenAI
for improved LLM, VLM, and text-to-image model performance
on built-in GPUs. Developers can use LoRA adapters to
quickly customize models for specialized tasks.
* KV cache compression for CPUs is enabled by default at INT8
precision, reducing the memory footprint while maintaining
accuracy relative to FP16. INT4 KV cache compression is also
supported, delivering substantial additional memory savings
for LLMs compared to INT8.
* Optimizations for Intel® Core™ Ultra Processor Series 2
built-in GPUs and Intel® Arc™ B Series Graphics with the
Intel® XMX systolic platform to enhance the performance of
VLM models and hybrid quantized image generation models, as
well as improve first-token latency for LLMs through dynamic
quantization.
- More portability and performance to run AI at the edge, in the
cloud, or locally.
* Enhanced Linux* support with the latest GPU driver for
built-in GPUs on Intel® Core™ Ultra Processor Series 2
(formerly codenamed Arrow Lake H).
* Support for INT4 data-free weights compression for ONNX
models implemented in the Neural Network Compression
Framework (NNCF).
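A hedged sketch of the new data-free INT4 path for ONNX models
(the file name is a placeholder; exact mode names follow the
NNCF weight-compression API):

```python
# Sketch: data-free INT4 weight compression of an ONNX model
# with NNCF. Assumes the nncf and onnx packages are installed;
# "model.onnx" is a placeholder path.
import onnx
import nncf

model = onnx.load("model.onnx")
compressed = nncf.compress_weights(
    model, mode=nncf.CompressWeightsMode.INT4_SYM
)
onnx.save(compressed, "model_int4.onnx")
```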
* NPU support for FP16-NF4 precision on Intel® Core™ 200V
Series processors for models with up to 8B parameters is
enabled through symmetric and channel-wise quantization,
improving accuracy while maintaining performance efficiency.
Support Change and Deprecation Notices
- Discontinued in 2025:
* Runtime components:
+ The OpenVINO property of Affinity API is no longer
available. It has been replaced with CPU binding
configurations (ov::hint::enable_cpu_pinning).
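A minimal sketch of the replacement configuration, assuming an
OpenVINO installation ("model.xml" is a placeholder):

```python
# Sketch: CPU binding via the enable_cpu_pinning hint, which
# replaces the removed Affinity property.
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("model.xml")  # placeholder model
compiled = core.compile_model(
    model, "CPU", {hints.enable_cpu_pinning: True}
)
```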
+ The openvino-nightly PyPI module has been discontinued.
End-users should proceed with the Simple PyPI nightly repo
instead. More information in Release Policy.
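For reference, a nightly install from the Simple PyPI repo
looks roughly like the following (the index URL follows
OpenVINO's documented nightly wheel storage; verify it against
the Release Policy page):

```shell
# Sketch: install a nightly OpenVINO build from the Simple
# PyPI nightly repo instead of the removed openvino-nightly
# package.
pip install --pre --upgrade openvino \
    --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
```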
* Tools:
+ The OpenVINO™ Development Tools package (pip install
openvino-dev) is no longer available for OpenVINO releases
in 2025.
+ Model Optimizer is no longer available. Consider using the
new conversion methods instead. For more details, see the
model conversion transition guide.
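A short sketch of the conversion API that supersedes Model
Optimizer (the input file name is a placeholder):

```python
# Sketch: converting a model with ov.convert_model, the
# replacement for the removed Model Optimizer (mo) tool.
import openvino as ov

ov_model = ov.convert_model("model.onnx")  # in-memory conversion
ov.save_model(ov_model, "model.xml")       # serialize to OpenVINO IR
```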
+ Intel® Streaming SIMD Extensions (Intel® SSE) are currently
not enabled in the binary package by default. They are
still supported in the source code form.
+ Legacy prefixes: l_, w_, and m_ have been removed from
OpenVINO archive names.
* OpenVINO GenAI:
+ StreamerBase::put(int64_t token) is no longer available.
+ The Bool value for Callback streamer is no longer accepted.
It must now return one of three values of StreamingStatus
enum.
+ ChunkStreamerBase is deprecated. Use StreamerBase instead.
* NNCF create_compressed_model() method is now deprecated.
nncf.quantize() method is recommended for
Quantization-Aware Training of PyTorch and TensorFlow models.
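A hedged sketch of the recommended nncf.quantize() path; the
model object, data loader, and transform function are
placeholders to be supplied by the user:

```python
# Sketch: quantization via nncf.quantize(), replacing the
# deprecated create_compressed_model(). Assumes `model` is a
# PyTorch or TensorFlow model and `data_loader`/`transform_fn`
# provide calibration samples.
import nncf

calibration_dataset = nncf.Dataset(data_loader, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)
```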
* The OpenVINO Model Server (OVMS) benchmark client in C++
using the TensorFlow Serving API has been discontinued.
- Deprecated and to be removed in the future:
* Python 3.9 is now deprecated and will be unavailable after
OpenVINO version 2025.4.
* openvino.Type.undefined is now deprecated and will be removed
with version 2026.0. openvino.Type.dynamic should be used
instead.
* APT & YUM Repositories Restructure: Starting with release
2025.1, users can switch to the new repository structure
for APT and YUM, which no longer uses year-based
subdirectories (like “2025”). The old (legacy) structure
will still be available until 2026, when the change will
be finalized. Detailed instructions are available on the
relevant documentation pages:
+ Installation guide - yum
+ Installation guide - apt
* OpenCV binaries will be removed from Docker images in 2026.
* Ubuntu 20.04 support will be deprecated in future OpenVINO
releases due to the end of standard support.
* “auto shape” and “auto batch size” (reshaping a model in
runtime) will be removed in the future. OpenVINO’s dynamic
shape models are recommended instead.
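The recommended dynamic-shape alternative can be sketched as
follows (model path and input name are placeholders):

```python
# Sketch: replacing runtime "auto shape"/"auto batch size"
# reshaping with an explicitly dynamic batch dimension.
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder model
# -1 marks the batch dimension as dynamic
model.reshape({"input": ov.PartialShape([-1, 3, 224, 224])})
compiled = core.compile_model(model, "CPU")
```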
* macOS x86 is no longer recommended for use due to the
discontinuation of validation. Full support will be removed
later in 2025.
* The openvino namespace of the OpenVINO Python API has been
redesigned, removing the nested openvino.runtime module.
The old namespace is now considered deprecated and will be
discontinued in 2026.0.
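A minimal sketch of the redesigned namespace, assuming an
OpenVINO installation:

```python
# Sketch: import openvino directly instead of the deprecated
# nested openvino.runtime module.
# Old (deprecated): from openvino.runtime import Core
import openvino as ov

core = ov.Core()
print(ov.get_version())
```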
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/openvino?expand=0&rev=37
2025-06-24 04:28:58 +00:00
diff -uNr openvino.orig/samples/cpp/build_samples.sh openvino/samples/cpp/build_samples.sh
--- openvino.orig/samples/cpp/build_samples.sh	2024-04-25 01:04:42.451868881 -0300
+++ openvino/samples/cpp/build_samples.sh	2024-04-25 01:05:04.678342617 -0300
@@ -59,7 +59,7 @@
 printf "\nSetting environment variables for building samples...\n"
 
 if [ -z "$INTEL_OPENVINO_DIR" ]; then
-    if [[ "$SAMPLES_SOURCE_DIR" = "/usr/share/openvino"* ]]; then
+    if [[ "$SAMPLES_SOURCE_DIR" = "/usr/share/OpenVINO"* ]]; then
         true
     elif [ -e "$SAMPLES_SOURCE_DIR/../../setupvars.sh" ]; then
         setupvars_path="$SAMPLES_SOURCE_DIR/../../setupvars.sh"