Accepting request 839525 from home:Guillaume_G:branches:science:machinelearning

- Rename mkl-dnn to onednn to follow upstream
- Obsoletes mkl-dnn* <= %{version}

OBS-URL: https://build.opensuse.org/request/show/839525
OBS-URL: https://build.opensuse.org/package/show/science:machinelearning/onednn?expand=0&rev=1
This commit is contained in:
Tomáš Chvátal 2020-10-05 09:11:22 +00:00 committed by Git OBS Bridge
commit ade11b5481
6 changed files with 317 additions and 0 deletions

.gitattributes

@@ -0,0 +1,23 @@
## Default LFS
*.7z filter=lfs diff=lfs merge=lfs -text
*.bsp filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.gem filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.jar filter=lfs diff=lfs merge=lfs -text
*.lz filter=lfs diff=lfs merge=lfs -text
*.lzma filter=lfs diff=lfs merge=lfs -text
*.obscpio filter=lfs diff=lfs merge=lfs -text
*.oxt filter=lfs diff=lfs merge=lfs -text
*.pdf filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.rpm filter=lfs diff=lfs merge=lfs -text
*.tbz filter=lfs diff=lfs merge=lfs -text
*.tbz2 filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.ttf filter=lfs diff=lfs merge=lfs -text
*.txz filter=lfs diff=lfs merge=lfs -text
*.whl filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
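A pattern's effect can be confirmed with `git check-attr`, which reports how a path resolves under `.gitattributes`. A quick sketch using a throwaway repository (the queried file name is illustrative):

```shell
# Sketch: verify that an LFS pattern in .gitattributes matches as intended.
repo=$(mktemp -d)
cd "$repo"
git init -q .
printf '*.gz filter=lfs diff=lfs merge=lfs -text\n' > .gitattributes
# git check-attr prints "path: attribute: value" for each queried attribute;
# a *.gz path should resolve to the lfs filter, i.e. it is stored as an
# LFS pointer rather than a raw blob.
git check-attr filter -- onednn-1.6.3.tar.gz
```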

.gitignore

@@ -0,0 +1 @@
.osc

_constraints

@@ -0,0 +1,8 @@
<?xml version="1.0"?>
<constraints>
<hardware>
<memory>
<size unit="G">8</size>
</memory>
</hardware>
</constraints>
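The OBS scheduler reads this file to place the build only on workers with at least the requested RAM. The requested size can be pulled out of the XML with plain sed (a sketch; no XML tooling is assumed on the host):

```shell
# Sketch: extract the requested worker memory size (in GB) from an
# OBS _constraints file using only sed.
dir=$(mktemp -d)
cat > "$dir/_constraints" <<'EOF'
<?xml version="1.0"?>
<constraints>
  <hardware>
    <memory>
      <size unit="G">8</size>
    </memory>
  </hardware>
</constraints>
EOF
sed -n 's/.*<size unit="G">\([0-9][0-9]*\)<\/size>.*/\1/p' "$dir/_constraints"
# prints: 8
```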

onednn-1.6.3.tar.gz

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:471c877671f672e4119e5f49143890c5ce2efff80a52a5eaf7ef3730eb3e1738
size 5795520
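The three lines above are a Git LFS pointer, not the tarball itself; `git lfs pull` fetches the real file, whose digest must match the `oid` line. The check can be sketched on a tiny stand-in blob (substitute `onednn-1.6.3.tar.gz` and the oid above for the real verification):

```shell
# Sketch: compare a downloaded blob's sha256 digest with the oid recorded
# in an LFS pointer file. Demonstrated on a stand-in file.
work=$(mktemp -d)
printf 'abc' > "$work/blob"
# the printed digest must equal the hex string after "oid sha256:" in the pointer
sha256sum "$work/blob" | cut -d' ' -f1
```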

onednn.changes

@@ -0,0 +1,107 @@
-------------------------------------------------------------------
Mon Oct 5 06:16:30 UTC 2020 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Obsoletes mkl-dnn* <= %{version}
-------------------------------------------------------------------
Fri Oct 2 12:47:08 UTC 2020 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Rename mkl-dnn to onednn to follow upstream
-------------------------------------------------------------------
Wed Sep 23 13:36:02 UTC 2020 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Update to 1.6.3
- Drop upstream patch:
* cmake-no-install-ocl-cmake.patch
-------------------------------------------------------------------
Wed Sep 23 13:16:39 UTC 2020 - Guillaume GARDET <guillaume.gardet@opensuse.org>
- Build on aarch64 and ppc64le which are now also supported
- Provide oneDNN and oneDNN-devel as it is the new official name
-------------------------------------------------------------------
Tue May 5 07:38:34 UTC 2020 - Tomáš Chvátal <tchvatal@suse.com>
- Update to 1.4:
* Performance improvements all over the board
- Rebase patch cmake-no-install-ocl-cmake.patch
-------------------------------------------------------------------
Tue Mar 24 10:50:57 UTC 2020 - Tomáš Chvátal <tchvatal@suse.com>
- Add constraints to not crash during testing on OOM
-------------------------------------------------------------------
Thu Feb 27 12:44:00 UTC 2020 - Tomáš Chvátal <tchvatal@suse.com>
- Do not disable LTO, there is no actual reason for that
- Export LD_LIBRARY_PATH to fix older releases build
-------------------------------------------------------------------
Wed Feb 26 10:36:26 UTC 2020 - Tomáš Chvátal <tchvatal@suse.com>
- There is no actual reason to not use github tag for tarball
fetching -> remove the service
- Format with spec-cleaner
- Use proper %cmake macros everywhere
- Add configure options for cmake to set it up in a way we really
want
- Add patch from Debian to not install OpenCL cmake finder:
* cmake-no-install-ocl-cmake.patch
-------------------------------------------------------------------
Thu Feb 20 10:26:52 UTC 2020 - Christian Goll <cgoll@suse.com>
- enabled tests
-------------------------------------------------------------------
Thu Jan 30 14:20:22 UTC 2020 - Christian Goll <cgoll@suse.com>
- packaged separate benchdnn package with its input files
- updated to v1.1.3 which includes
* Fixed the mean and variance memory descriptors in layer
normalization (65f1908)
* Fixed the layer normalization formula (c176ceb)
-------------------------------------------------------------------
Wed Jan 8 15:21:54 UTC 2020 - Christian Goll <cgoll@suse.com>
- updated to v1.1.2
* Fixed threading over the spatial in bfloat16 batched
normalization (017b6c9)
* Fixed read past end-of-buffer error for int8 convolution (7d6f45e)
* Fixed condition for dispatching optimized channel blocking in
fp32 backward convolution on Intel Xeon Phi(TM) processor (846eba1)
* Fixed fp32 backward convolution for shapes with spatial strides
over the depth dimension (002e3ab)
* Fixed softmax with zero sizes on GPU (936bff4)
* Fixed int8 deconvolution with dilation when ih <= dh (3e3bacb)
* Enabled back fp32 -> u8 reorder for RNN (a2c2507)
* Fixed segmentation fault in bfloat16 backward convolution from
kd_padding=0 computation (52d476c)
* Fixed segmentation fault in bfloat16 forward convolution due
to push/pop imbalance (4f6e3d5)
* Fixed library version for OS X build (0d85005)
* Fixed padding by channels in concat (a265c7d)
* Added full text of third party licenses and
copyright notices to LICENSE file (79f204c)
* Added separate README for binary packages (28f4c96)
* Fixed computing per-oc mask in RNN (ff3ffab)
* Added workaround for number of cores calculation in Xbyak (301b088)
-------------------------------------------------------------------
Mon Feb 11 16:35:48 UTC 2019 - cgoll@suse.com
- added ARCH_OPT_FLAGS=""
-------------------------------------------------------------------
Tue Feb 5 07:45:53 UTC 2019 - Christian Goll <cgoll@suse.com>
- Initial check-in of the Intel(R) Math Kernel Library for
  Deep Neural Networks, which can be used by:
* tensorflow
* Caffe
* PyTorch
and other machine learning tools

onednn.spec

@@ -0,0 +1,175 @@
#
# spec file for package onednn
#
# Copyright (c) 2020 SUSE LLC
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.
# Please submit bugfixes or comments via https://bugs.opensuse.org/
#
%ifarch x86_64
%bcond_without opencl
%else
# Build is broken with OpenCL on non-x86
%bcond_with opencl
%endif
%define libname libdnnl1
Name: onednn
Version: 1.6.3
Release: 0
Summary: Intel(R) Math Kernel Library for Deep Neural Networks
License: Apache-2.0
URL: https://01.org/onednn
Source0: https://github.com/oneapi-src/oneDNN/archive/v%{version}/%{name}-%{version}.tar.gz
BuildRequires: cmake
BuildRequires: doxygen
BuildRequires: fdupes
BuildRequires: gcc-c++
BuildRequires: graphviz
BuildRequires: texlive-dvips-bin
%if %{with opencl}
BuildRequires: opencl-headers
BuildRequires: pkgconfig
BuildRequires: pkgconfig(OpenCL)
%endif
ExclusiveArch: x86_64 aarch64 ppc64le
Provides: mkl-dnn = %{version}
Obsoletes: mkl-dnn <= %{version}
Provides: oneDNN = %{version}
%description
Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.
%package -n benchdnn
Summary: Benchmark utility for Intel(R) Math Kernel Library
Requires: %{libname} = %{version}
%description -n benchdnn
Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.
This package contains only the benchmark utility and its input files.
%package devel
Summary: Header files of Intel(R) Math Kernel Library
Requires: %{libname} = %{version}
Provides: mkl-dnn-devel = %{version}
Obsoletes: mkl-dnn-devel <= %{version}
Provides: oneDNN-devel = %{version}
%description devel
Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.
This package includes the required headers and library files to develop software
with the Intel(R) MKL-DNN.
%package doc
Summary: Reference documentation for the Intel(R) Math Kernel Library
BuildArch: noarch
%description doc
The reference documentation for the Intel(R) Math Kernel Library can be installed
with this package.
%package -n %{libname}
Summary: Shared library of the Intel(R) Math Kernel Library
%description -n %{libname}
Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
open-source performance library for deep-learning applications. The library
accelerates deep-learning applications and frameworks on Intel architecture.
Intel MKL-DNN contains vectorized and threaded building blocks that you can use
to implement deep neural networks (DNN) with C and C++ interfaces.
%prep
%setup -q -n oneDNN-%{version}
%autopatch -p1
%build
%cmake \
-DCMAKE_INSTALL_LIBDIR=%{_lib} \
-DMKLDNN_ARCH_OPT_FLAGS="" \
-DDNNL_CPU_RUNTIME=OMP \
%if %{with opencl}
-DDNNL_GPU_RUNTIME=OCL \
%endif
-DDNNL_INSTALL_MODE=DEFAULT \
-DDNNL_BUILD_TESTS=ON \
-DDNNL_WERROR=OFF
%cmake_build
%cmake_build doc
%install
%cmake_install
# move the built doxygen data to normal location
mkdir -p %{buildroot}%{_docdir}/%{name}
mv %{buildroot}%{_datadir}/doc/dnnl/reference/* %{buildroot}%{_docdir}/%{name}
%fdupes %{buildroot}%{_docdir}/%{name}
# use the %license/%doc macros to install license/documentation instead
rm -r %{buildroot}%{_datadir}/doc/dnnl
# Keep compatibility with mkl-dnn
pushd %{buildroot}%{_includedir}
ln -s . mkl-dnn
popd
# install the benchmark
install -D build/tests/benchdnn/benchdnn %{buildroot}/%{_bindir}/benchdnn
# install the benchdnn input files
mkdir -vp %{buildroot}%{_datadir}/benchdnn
cp -vr build/tests/benchdnn/inputs %{buildroot}%{_datadir}/benchdnn
%check
# do not use macro so we can exclude all gpu and cross (gpu and cpu) tests (they need gpu set up)
pushd build
export LD_LIBRARY_PATH=%{buildroot}%{_libdir}
ctest --output-on-failure --force-new-ctest-process %{_smp_mflags} -E '(gpu|cross)'
popd
%post -n %{libname} -p /sbin/ldconfig
%postun -n %{libname} -p /sbin/ldconfig
%files -n benchdnn
%{_bindir}/benchdnn
%{_datadir}/benchdnn
%files devel
%{_includedir}/mkl-dnn
%{_includedir}/mkldnn*.h*
%{_includedir}/dnnl*.h*
%{_libdir}/libdnnl.so
%{_libdir}/libmkldnn.so
%dir %{_libdir}/cmake/dnnl
%{_libdir}/cmake/dnnl/*.cmake
%dir %{_libdir}/cmake/mkldnn
%{_libdir}/cmake/mkldnn/*.cmake
%files doc
%{_docdir}/%{name}
%files -n %{libname}
%license LICENSE
%doc README.md
%{_libdir}/libdnnl.so.*
%{_libdir}/libmkldnn.so.*
%changelog
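The `ln -s . mkl-dnn` step in %install keeps old `#include <mkl-dnn/...>` paths compiling against the renamed package: a self-referencing symlink makes the include directory visible under its former name. Its effect can be sketched outside of rpmbuild (directory and header names below are illustrative):

```shell
# Sketch of the %install compatibility trick: a symlink pointing at "."
# makes <includedir>/mkl-dnn/dnnl.h resolve to <includedir>/dnnl.h.
inc=$(mktemp -d)
touch "$inc/dnnl.h"
ln -s . "$inc/mkl-dnn"
# both spellings now reach the same header
ls "$inc/dnnl.h" "$inc/mkl-dnn/dnnl.h"
```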