forked from pool/onednn

Update to 3.6.2

Eyad Issa 2025-01-04 14:39:57 +01:00
parent 43a7d8744d
commit 66ba227cef
5 changed files with 53 additions and 44 deletions

_service Normal file

@@ -0,0 +1,3 @@
+<services>
+<service name="download_files" mode="manual" />
+</services>
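The `download_files` service reads the `Source0` URL from the spec and fetches the tarball when the service is run. A minimal sketch of the URL resolution it performs, assuming the `Source0` pattern from the spec below; the substitution code itself is an illustration, not osc's actual implementation:

```shell
# Expand %{version} in the Source0 URL the way download_files would.
version="3.6.2"
source0='https://github.com/oneapi-src/oneDNN/archive/v%{version}/oneDNN-%{version}.tar.gz'
url=${source0//'%{version}'/$version}   # replace every %{version} occurrence
echo "$url"
```

Because the service is declared with `mode="manual"`, it only runs when the packager invokes it explicitly rather than on every commit.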

BIN
oneDNN-3.6.2.tar.gz (Stored with Git LFS) Normal file

Binary file not shown.

oneDNN-3.6.tar.gz (Stored with Git LFS)

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:20c4a92cc0ae0dc19d3d2beca0e357b1d13a5a3af9890a2cc3e41a880e4a0302
-size 13782760
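The three lines above are a Git LFS pointer: the repository stores only this stub, and the `oid` is the sha256 of the real tarball contents. A hedged sketch of how a downloaded file can be checked against such a pointer, using a small stand-in file (`demo.bin`, `demo.ptr` are illustrative names, not files from this repo):

```shell
# Build a stand-in payload and its matching LFS-style pointer, then verify.
printf 'hello\n' > demo.bin                      # stand-in for the tarball
oid=$(sha256sum demo.bin | awk '{print $1}')     # sha256 of the contents
size=$(wc -c < demo.bin)                         # byte size of the contents
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:%s\nsize %s\n' \
       "$oid" "$size" > demo.ptr
grep -q "oid sha256:$oid" demo.ptr && echo MATCH # prints MATCH on success
```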

onednn.changes

@@ -1,3 +1,9 @@
+-------------------------------------------------------------------
+Fri Jan 3 23:23:08 UTC 2025 - Eyad Issa <eyadlorenzo@gmail.com>
+- Update to 3.6.2:
+* https://github.com/oneapi-src/oneDNN/releases/tag/v3.6.2
-------------------------------------------------------------------
Thu Oct 17 11:42:58 UTC 2024 - Guillaume GARDET <guillaume.gardet@opensuse.org>
@@ -24,27 +30,27 @@ Mon Sep 23 10:04:43 UTC 2024 - Guillaume GARDET <guillaume.gardet@opensuse.org>
Fri Apr 19 17:27:48 UTC 2024 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
- Update to 3.4.1:
* Fixed an issue with caching and serialization of primitives in
deterministic mode (7ed604a)
* Introduced memory descriptor serialization API
(4cad420, 929a27a, 9b848c8)
* Fixed incorrect results in fp64 convolution and deconvolution
on Intel GPUs based on Xe-LPG architecture (ebe77b5, 0b399ac,
d748d64, 9f4f3d5, 21a8cae)
* Fixed incorrect results in reorder with large sizes on
Intel CPUs and GPUs (69a111e, 4b72361, 74a343b)
* Reduced creation time for deconvolution primitive on
Intel CPUs (bec487e, 1eab005)
* Fixed performance regression in deconvolution on
Intel CPUs (fbe5b97, 1dd3c6a)
* Removed dangling symbols from static builds
(e92c404, 6f5621a)
* Fixed crash during platform detection on some
AArch64-based systems (406a079)
* Fixed performance regression in int8 deconvolution on
Intel CPUs (7e50e15)
* Fixed handling of zero points for matmul in verbose
logs converter (15c7916)
-------------------------------------------------------------------
Fri Dec 1 04:33:49 UTC 2023 - Alessandro de Oliveira Faria <cabelo@opensuse.org>
@@ -279,38 +285,38 @@ Wed Feb 26 10:36:26 UTC 2020 - Tomáš Chvátal <tchvatal@suse.com>
-------------------------------------------------------------------
Thu Feb 20 10:26:52 UTC 2020 - Christian Goll <cgoll@suse.com>
- enabled tests
-------------------------------------------------------------------
Thu Jan 30 14:20:22 UTC 2020 - Christian Goll <cgoll@suse.com>
- packaged separate benchnn package with its input files
- updated to v1.1.3 which includes
* Fixed the mean and variance memory descriptors in layer
normalization (65f1908)
* Fixed the layer normalization formula (c176ceb)
-------------------------------------------------------------------
Wed Jan 8 15:21:54 UTC 2020 - Christian Goll <cgoll@suse.com>
- updated to v1.1.2
* Fixed threading over the spatial in bfloat16 batched
normalization (017b6c9)
* Fixed read past end-of-buffer error for int8 convolution (7d6f45e)
* Fixed condition for dispatching optimized channel blocking in
fp32 backward convolution on Intel Xeon Phi(TM) processor (846eba1)
* Fixed fp32 backward convolution for shapes with spatial strides
over the depth dimension (002e3ab)
* Fixed softmax with zero sizes on GPU (936bff4)
* Fixed int8 deconvolution with dilation when ih <= dh (3e3bacb)
* Enabled back fp32 -> u8 reorder for RNN (a2c2507)
* Fixed segmentation fault in bfloat16 backward convolution from
kd_padding=0 computation (52d476c)
* Fixed segmentation fault in bfloat16 forward convolution due
to push/pop imbalance (4f6e3d5)
* Fixed library version for OS X build (0d85005)
* Fixed padding by channels in concat (a265c7d)
* Added full text of third party licenses and
copyright notices to LICENSE file (79f204c)
* Added separate README for binary packages (28f4c96)
* Fixed computing per-oc mask in RNN (ff3ffab)
@@ -324,7 +330,7 @@ Mon Feb 11 16:35:48 UTC 2019 - cgoll@suse.com
-------------------------------------------------------------------
Tue Feb 5 07:45:53 UTC 2019 - Christian Goll <cgoll@suse.com>
- Initial checkin of the Intel(R) Math Kernel Library for
Deep Neural Networks which can be used by:
* tensorflow
* Caffe

onednn.spec

@@ -1,7 +1,7 @@
#
# spec file for package onednn
#
-# Copyright (c) 2024 SUSE LLC
+# Copyright (c) 2025 SUSE LLC
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -16,27 +16,25 @@
#
%define libname libdnnl3
%ifarch x86_64
%bcond_without opencl
%else
# Build broken on non-x86, with openCL
%bcond_with opencl
%endif
%ifarch aarch64
# Disable ACL until fixed upstream - https://github.com/oneapi-src/oneDNN/issues/2137
%bcond_with acl
%else
%bcond_with acl
%endif
%define libname libdnnl3
Name: onednn
-Version: 3.6
+Version: 3.6.2
Release: 0
-Summary: Intel Math Kernel Library for Deep Neural Networks
+Summary: oneAPI Deep Neural Network Library (oneDNN)
License: Apache-2.0
-URL: https://01.org/onednn
+URL: https://github.com/oneapi-src/oneDNN
Source0: https://github.com/oneapi-src/oneDNN/archive/v%{version}/oneDNN-%{version}.tar.gz
BuildRequires: chrpath
BuildRequires: cmake
@@ -44,7 +42,12 @@ BuildRequires: doxygen
BuildRequires: fdupes
BuildRequires: gcc-c++
BuildRequires: graphviz
+BuildRequires: ninja
BuildRequires: texlive-dvips-bin
+Provides: mkl-dnn = %{version}
+Obsoletes: mkl-dnn <= %{version}
+Provides: oneDNN = %{version}
+ExclusiveArch: x86_64 aarch64 ppc64le
%if %{with acl}
BuildRequires: ComputeLibrary-devel >= 24.08.1
%endif
@@ -53,17 +56,13 @@ BuildRequires: opencl-headers
BuildRequires: pkgconfig
BuildRequires: pkgconfig(OpenCL)
%endif
-ExclusiveArch: x86_64 aarch64 ppc64le
-Provides: mkl-dnn = %{version}
-Obsoletes: mkl-dnn <= %{version}
-Provides: oneDNN = %{version}
%description
-Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
-open-source performance library for deep-learning applications. The library
-accelerates deep-learning applications and frameworks on Intel architecture.
-Intel MKL-DNN contains vectorized and threaded building blocks that you can use
-to implement deep neural networks (DNN) with C and C++ interfaces.
+oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
+performance library of basic building blocks for deep learning applications.
+oneDNN project is part of the UXL Foundation and is an implementation of the
+oneAPI specification for oneDNN component.
%package -n benchdnn
Summary: Header files of Intel Math Kernel Library
@@ -81,13 +80,13 @@ This package only includes the benchmark utility including its input files.
%package devel
Summary: Header files of Intel Math Kernel Library
Requires: %{libname} = %{version}
+Provides: mkl-dnn-devel = %{version}
+Obsoletes: mkl-dnn-devel <= %{version}
+Provides: oneDNN-devel = %{version}
%if %{with opencl}
Requires: opencl-headers
Requires: pkgconfig(OpenCL)
%endif
-Provides: mkl-dnn-devel = %{version}
-Obsoletes: mkl-dnn-devel <= %{version}
-Provides: oneDNN-devel = %{version}
%description devel
Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
@@ -121,6 +120,7 @@ to implement deep neural networks (DNN) with C and C++ interfaces.
%autosetup -p1 -n oneDNN-%{version}
%build
+%define __builder ninja
%cmake \
-DCMAKE_INSTALL_LIBDIR=%{_lib} \
-DMKLDNN_ARCH_OPT_FLAGS="" \
@@ -164,7 +164,7 @@ chrpath -d %{buildroot}/%{_bindir}/benchdnn
# do not use macro so we can exclude all gpu and cross (gpu and cpu) tests (they need gpu set up)
pushd build
export LD_LIBRARY_PATH=%{buildroot}%{_libdir}
-ctest --output-on-failure --force-new-ctest-process %{_smp_mflags} -E '(gpu|cross)'
+ctest --output-on-failure --force-new-ctest-process %{?_smp_mflags} -E '(gpu|cross)'
popd
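The ctest line above changes `%{_smp_mflags}` to `%{?_smp_mflags}`. In RPM macro syntax, the `?` form expands to the empty string when the macro is undefined, whereas the plain form is left in the command line as literal unexpanded text. A sketch of the two expansions (the `-j8` value is an assumed example):

```
# with %_smp_mflags defined as "-j8":
#   %{_smp_mflags}   ->  -j8
#   %{?_smp_mflags}  ->  -j8
# with %_smp_mflags undefined:
#   %{_smp_mflags}   ->  %{_smp_mflags}   (passed literally, breaks ctest)
#   %{?_smp_mflags}  ->                   (empty, ctest just runs serially)
```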
%post -n %{libname} -p /sbin/ldconfig