Compare commits

No commits in common. "factory" and "factory" have entirely different histories.

5 changed files with 44 additions and 79 deletions

@@ -1,3 +0,0 @@
<services>
<service name="download_files" mode="manual" />
</services>
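The file removed above is, judging by its content, the OBS _service definition: it declares a single source service, download_files, in manual mode, so the Source0 tarball is fetched only when a maintainer runs the service by hand rather than on every commit or build. A minimal sketch of how such a manual service is usually triggered from a checked-out package, assuming osc is installed and configured for the target project:

    # run all declared source services locally, regardless of their mode
    osc service runall
    # the freshly downloaded tarball should now show up as a new file
    osc status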

oneDNN-3.4.1.tar.gz Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66a6512405664c2cd004811922173adabaa50d6aadc9352291d2d85f8b0f3d10
size 13282745
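The three lines above are not the tarball itself but a Git LFS pointer: the LFS spec version, the SHA-256 object id of the real blob, and its size in bytes; the actual 3.4.1 archive lives on the LFS server. A small sketch for checking a locally present tarball against the pointer, assuming it sits in the current directory:

    # the digest should match the oid recorded in the pointer
    sha256sum oneDNN-3.4.1.tar.gz
    # the byte count should match the recorded size (13282745)
    stat -c %s oneDNN-3.4.1.tar.gz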

oneDNN-3.6.2.tar.gz (Stored with Git LFS)

Binary file not shown.

@@ -1,56 +1,28 @@
-------------------------------------------------------------------
Fri Jan 3 23:23:08 UTC 2025 - Eyad Issa <eyadlorenzo@gmail.com>
- Update to 3.6.2:
* https://github.com/oneapi-src/oneDNN/releases/tag/v3.6.2
-------------------------------------------------------------------
Thu Oct 17 11:42:58 UTC 2024 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Update to 3.6:
* https://github.com/oneapi-src/oneDNN/releases/tag/v3.6
-------------------------------------------------------------------
Wed Oct 2 12:06:54 UTC 2024 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Add openCL deps for devel package
-------------------------------------------------------------------
Tue Sep 24 08:40:52 UTC 2024 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Enable graph component
-------------------------------------------------------------------
Mon Sep 23 10:04:43 UTC 2024 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Update to 3.5.3:
* https://github.com/oneapi-src/oneDNN/releases/tag/v3.5.3
-------------------------------------------------------------------
Fri Apr 19 17:27:48 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to 3.4.1:
* Fixed an issue with caching and serialization of primitives in
deterministic mode (7ed604a)
* Introduced memory descriptor serialization API
(4cad420, 929a27a, 9b848c8)
* Fixed incorrect results in fp64 convolution and deconvolution
on Intel GPUs based on Xe-LPG architecture (ebe77b5, 0b399ac,
d748d64, 9f4f3d5, 21a8cae)
* Fixed incorrect results in reorder with large sizes on
Intel CPUs and GPUs (69a111e, 4b72361, 74a343b)
* Reduced creation time for deconvolution primitive on
Intel CPUs (bec487e, 1eab005)
* Fixed performance regression in deconvolution on
Intel CPUs (fbe5b97, 1dd3c6a)
* Removed dangling symbols from static builds
(e92c404, 6f5621a)
* Fixed crash during platform detection on some
AArch64-based systems (406a079)
* Fixed performance regression in int8 deconvolution on
Intel CPUs (7e50e15)
* Fixed handling of zero points for matmul in verbose
logs converter (15c7916)
-------------------------------------------------------------------
Fri Dec 1 04:33:49 UTC 2023 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
@@ -285,38 +257,38 @@ Wed Feb 26 10:36:26 UTC 2020 - Tomáš Chvátal <tchvatal@suse.com>
-------------------------------------------------------------------
Thu Feb 20 10:26:52 UTC 2020 - Christian Goll <cgoll@suse.com>
- enabled tests
-------------------------------------------------------------------
Thu Jan 30 14:20:22 UTC 2020 - Christian Goll <cgoll@suse.com>
- packaged separate benchdnn package with its input files
- updated to v1.1.3 which includes
* Fixed the mean and variance memory descriptors in layer
normalization (65f1908)
* Fixed the layer normalization formula (c176ceb)
-------------------------------------------------------------------
Wed Jan 8 15:21:54 UTC 2020 - Christian Goll <cgoll@suse.com>
- updated to v1.1.2
* Fixed threading over the spatial in bfloat16 batched
normalization (017b6c9)
* Fixed read past end-of-buffer error for int8 convolution (7d6f45e)
* Fixed condition for dispatching optimized channel blocking in
fp32 backward convolution on Intel Xeon Phi(TM) processor (846eba1)
* Fixed fp32 backward convolution for shapes with spatial strides
over the depth dimension (002e3ab)
* Fixed softmax with zero sizes on GPU (936bff4)
* Fixed int8 deconvolution with dilation when ih <= dh (3e3bacb)
* Enabled back fp32 -> u8 reorder for RNN (a2c2507)
* Fixed segmentation fault in bfloat16 backward convolution from
kd_padding=0 computation (52d476c)
* Fixed segmentation fault in bfloat16 forward convolution due
to push/pop imbalance (4f6e3d5)
* Fixed library version for OS X build (0d85005)
* Fixed padding by channels in concat (a265c7d)
* Added full text of third party licenses and
copyright notices to LICENSE file (79f204c)
* Added separate README for binary packages (28f4c96)
* Fixed computing per-oc mask in RNN (ff3ffab)
@@ -330,7 +302,7 @@ Mon Feb 11 16:35:48 UTC 2019 - cgoll@suse.com
-------------------------------------------------------------------
Tue Feb 5 07:45:53 UTC 2019 - Christian Goll <cgoll@suse.com>
- Initial checkin of the Intel(R) Math Kernel Library for
Deep Neural Networks which can be used by:
* tensorflow
* Caffe

@@ -1,7 +1,7 @@
#
# spec file for package onednn
#
# Copyright (c) 2025 SUSE LLC
# Copyright (c) 2024 SUSE LLC
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -16,25 +16,27 @@
#
%define libname libdnnl3
%ifarch x86_64
%bcond_without opencl
%else
# Build broken on non-x86 with OpenCL
%bcond_with opencl
%endif
%ifarch aarch64
# Disable ACL until fixed upstream - https://github.com/oneapi-src/oneDNN/issues/2137
# Disable ACL until fixed upstream - https://github.com/oneapi-src/oneDNN/issues/1599
%bcond_with acl
%else
%bcond_with acl
%endif
%define libname libdnnl3
Name: onednn
Version: 3.6.2
Version: 3.4.1
Release: 0
Summary: oneAPI Deep Neural Network Library (oneDNN)
Summary: Intel Math Kernel Library for Deep Neural Networks
License: Apache-2.0
URL: https://github.com/oneapi-src/oneDNN
URL: https://01.org/onednn
Source0: https://github.com/oneapi-src/oneDNN/archive/v%{version}/oneDNN-%{version}.tar.gz
BuildRequires: chrpath
BuildRequires: cmake
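The %bcond lines earlier in this hunk control the optional features: %bcond_without opencl makes the OpenCL build default-on for x86_64, while %bcond_with acl keeps Arm Compute Library support off by default on every architecture (including aarch64, where it is held back on the linked upstream issue). A hedged sketch of overriding those defaults for a local build; the spec file path is an assumption:

    # flip the %bcond defaults: enable ACL, disable OpenCL
    rpmbuild --with acl --without opencl -ba onednn.spec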
@@ -42,27 +44,26 @@ BuildRequires: doxygen
BuildRequires: fdupes
BuildRequires: gcc-c++
BuildRequires: graphviz
BuildRequires: ninja
BuildRequires: texlive-dvips-bin
Provides: mkl-dnn = %{version}
Obsoletes: mkl-dnn <= %{version}
Provides: oneDNN = %{version}
ExclusiveArch: x86_64 aarch64 ppc64le
%if %{with acl}
BuildRequires: ComputeLibrary-devel >= 24.08.1
BuildRequires: ComputeLibrary-devel >= 22.08
%endif
%if %{with opencl}
BuildRequires: opencl-headers
BuildRequires: pkgconfig
BuildRequires: pkgconfig(OpenCL)
%endif
ExclusiveArch: x86_64 aarch64 ppc64le
Provides: mkl-dnn = %{version}
Obsoletes: mkl-dnn <= %{version}
Provides: oneDNN = %{version}
%description
oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
performance library of basic building blocks for deep learning applications.
oneDNN project is part of the UXL Foundation and is an implementation of the
oneAPI specification for oneDNN component.
Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.
%package -n benchdnn
Summary: Header files of Intel Math Kernel Library
@@ -83,10 +84,6 @@ Requires: %{libname} = %{version}
Provides: mkl-dnn-devel = %{version}
Obsoletes: mkl-dnn-devel <= %{version}
Provides: oneDNN-devel = %{version}
%if %{with opencl}
Requires: opencl-headers
Requires: pkgconfig(OpenCL)
%endif
%description devel
Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
@@ -120,7 +117,6 @@ to implement deep neural networks (DNN) with C and C++ interfaces.
%autosetup -p1 -n oneDNN-%{version}
%build
%define __builder ninja
%cmake \
-DCMAKE_INSTALL_LIBDIR=%{_lib} \
-DMKLDNN_ARCH_OPT_FLAGS="" \
@@ -135,7 +131,7 @@ to implement deep neural networks (DNN) with C and C++ interfaces.
%endif
-DDNNL_INSTALL_MODE=DEFAULT \
-DDNNL_BUILD_TESTS=ON \
-DONEDNN_BUILD_GRAPH=ON \
-DONEDNN_BUILD_GRAPH=OFF \
-DDNNL_WERROR=OFF
%cmake_build
%cmake_build doc_doxygen
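The %build section above routes the RPM cmake macros through ninja (%define __builder ninja) and passes the -D options shown in the hunk. Roughly the same configure and build done by hand would look like the sketch below; the lib64 libdir and the graph setting are illustrative stand-ins for the macro-expanded values:

    # approximate manual equivalent of the %cmake / %cmake_build calls
    cmake -S . -B build -G Ninja \
        -DCMAKE_INSTALL_LIBDIR=lib64 \
        -DMKLDNN_ARCH_OPT_FLAGS="" \
        -DDNNL_INSTALL_MODE=DEFAULT \
        -DDNNL_BUILD_TESTS=ON \
        -DONEDNN_BUILD_GRAPH=ON \
        -DDNNL_WERROR=OFF
    cmake --build build
    # build the doxygen documentation target as well
    cmake --build build --target doc_doxygen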
@@ -164,7 +160,7 @@ chrpath -d %{buildroot}/%{_bindir}/benchdnn
# do not use macro so we can exclude all gpu and cross (gpu and cpu) tests (they need gpu set up)
pushd build
export LD_LIBRARY_PATH=%{buildroot}%{_libdir}
ctest --output-on-failure --force-new-ctest-process %{?_smp_mflags} -E '(gpu|cross)'
ctest --output-on-failure --force-new-ctest-process %{_smp_mflags} -E '(gpu|cross)'
popd
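The only change in this hunk is %{?_smp_mflags} versus %{_smp_mflags} on the ctest line. The leading ? makes the expansion conditional: an undefined macro simply disappears, while the unconditional form leaves the literal macro text in the command. A small illustration with rpm --eval; %some_undefined_macro is a made-up name used only to show the difference:

    # conditional form: expands to nothing when undefined -> "ctest "
    rpm --eval 'ctest %{?some_undefined_macro}'
    # unconditional form: the literal text stays -> "ctest %{some_undefined_macro}"
    rpm --eval 'ctest %{some_undefined_macro}'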
%post -n %{libname} -p /sbin/ldconfig