Corot Sebastien bd06e0c765 Accepting request 454272 from home:eeich:branches:network:cluster
- Updated to 16.05.8.1
 * Remove StoragePass from being printed out in the slurmdbd log at debug2
   level.
 * Defer PATH search for task program until launch in slurmstepd.
 * Modify regression test1.89 to avoid leaving a vestigial job. Also reduce
    logging to reduce the likelihood of an Expect buffer overflow.
 * Do not PATH search for multi-prog launches if LaunchParameters=test_exec is
    enabled.
 * Fix for possible infinite loop in select/cons_res plugin when trying to
    satisfy a job's ntasks_per_core or socket specification.
 * If a job is held for bad constraints, make sure that once it is updated the
    job does not go into JobAdminHeld.
 * sched/backfill - Fix logic to reserve resources for jobs that require a
    node reboot (i.e. to change KNL mode) in order to start.
 * When unpacking a node or front_end record from state and the protocol
    version is lower than the min version, set it to the min.
 * Remove redundant lookup for part_ptr when updating a reservation's nodes.
 * Fix memory and file descriptor leaks in slurmd daemon's sbcast logic.
 * Do not allocate specialized cores to jobs using the --exclusive option.
 * Cancel interactive jobs on Prolog failure when "PrologFlags=contain" or
   "PrologFlags=alloc" is configured. Send the new prolog failure error
   message to the salloc or srun command as needed.
 * Prevent possible out-of-bounds read in slurmstepd on an invalid #! line.
 * Fix check for PluginDir within slurmctld to work with multiple directories.
 * Cancel interactive jobs automatically on communication error to launching
   srun/salloc process.
 * Fix security issue caused by insecure file path handling triggered by the
   failure of a Prolog script. To exploit this a user needs to anticipate or
   cause the Prolog to fail for their job. CVE-2016-10030 (bsc#1018371).
- Replace group/user add macros with function calls.
- Disable building with netloc support: the netloc API is part of the devel
  branch of hwloc. Since this devel branch was included accidentally and has
  since been reverted, we need to disable this for the time being.
- Conditionalized architecture-specific pieces to better support non-x86
  architectures.

- Remove: unneeded 'BuildRequires:  python'
- Add:
  BuildRequires:  freeipmi-devel
  BuildRequires:  libibmad-devel
  BuildRequires:  libibumad-devel
  so they are picked up by the slurm build.
- Enable modifications from openHPC Project.
- Enable lua API package build.
- Add a Recommends for slurm-munge to the slurm package:
  This way, the munge auth method is available and slurm
  works out of the box.
- Create /var/lib/slurm as the StateSaveLocation directory:
  using /tmp is dangerous.

- Keep %{_libdir}/libpmi* and %{_libdir}/mpi_pmi2* on SUSE.

OBS-URL: https://build.opensuse.org/request/show/454272
OBS-URL: https://build.opensuse.org/package/show/network:cluster/slurm?expand=0&rev=13
2017-02-02 20:23:02 +00:00


#
# spec file for package slurm
#
# Copyright (c) 2015 SUSE LINUX Products GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.
# Please submit bugfixes or comments via http://bugs.opensuse.org/
#
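# Version helper macros: vers_f turns the dashed upstream tag into a dotted
# version string (e.g. 16-05-8-1 -> 16.05.8.1), vers_t truncates a dotted
# version to its first three components (e.g. 16.05.8.1 -> 16.05.8).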
%define trans() ( echo %{1} | sed -e "s#-#\\.#g" )
%define trunc() ( echo %{1} | sed -e "s#\\([^.]\\+\\.[^.]\\+\\.[^.]\\+\\).*#\\1#" )
%define vers_f() %(%trans)
%define vers_t() %(%trunc)
%if 0%{?suse_version} >= 1220 || 0%{?sle_version} >= 120000
%define with_systemd 1
%endif
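# netloc support stays disabled for now: the netloc API only exists in the
# hwloc devel branch (see changelog), so have_netloc is never defined here.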
%if 0
%define have_netloc 1
%endif
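# NUMA support: libnuma is used unconditionally on x86_64, and on 32-bit x86
# only when sle_version is 120200 or newer.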
%ifarch x86_64
%define have_libnuma 1
%else
%ifarch %{ix86}
%if 0%{?sle_version} >= 120200
%define have_libnuma 1
%endif
%endif
%endif
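# Shared library subpackage, named after the libslurm soname version.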
%define libslurm libslurm29
%define ver_exp 16-05-8-1
%define slurm_u %name
%define slurm_g %name
Name: slurm
Version: %{vers_f %ver_exp}
Release: 0
Summary: Simple Linux Utility for Resource Management
License: SUSE-GPL-2.0-with-openssl-exception
Group: Productivity/Clustering/Computing
Url: https://computing.llnl.gov/linux/slurm/
Source: https://github.com/SchedMD/slurm/archive/%{name}-%{ver_exp}.tar.gz
Source1: slurm.service
Source2: slurmdbd.service
Patch0: slurm-2.4.4-rpath.patch
Patch1: slurm-2.4.4-init.patch
Patch2: slurmd-Fix-slurmd-for-new-API-in-hwloc-2.0.patch
Patch3: plugins-cgroup-Fix-slurmd-for-new-API-in-hwloc-2.0.patch
Patch4: pam_slurm-Initialize-arrays-and-pass-sizes.patch
Requires: slurm-plugins = %{version}
%if 0%{?suse_version} <= 1140
Requires(pre): pwdutils
%else
Requires(pre): shadow
%endif
BuildRequires: fdupes
BuildRequires: gcc-c++
BuildRequires: gtk2-devel
BuildRequires: libbitmask-devel
BuildRequires: libcpuset-devel
%if 0%{?have_libnuma}
BuildRequires: libnuma-devel
%endif
BuildRequires: mysql-devel >= 5.0.0
BuildRequires: ncurses-devel
BuildRequires: openssl-devel >= 0.9.6
BuildRequires: pkgconfig
BuildRequires: postgresql-devel >= 8.0.0
BuildRequires: readline-devel
%if 0%{?suse_version} > 1310 || 0%{?sle_version}
BuildRequires: libibmad-devel
BuildRequires: libibumad-devel
%endif
%if 0%{?suse_version} > 1140
BuildRequires: libhwloc-devel
%ifarch %{ix86} x86_64
BuildRequires: freeipmi-devel
%endif
%endif
%if 0%{?with_systemd}
%{?systemd_requires}
BuildRequires: systemd
%else
PreReq: %insserv_prereq %fillup_prereq
%endif
BuildRoot: %{_tmppath}/%{name}-%{version}-build
Recommends: %{name}-munge
%description
SLURM is an open source, fault-tolerant, and highly
scalable cluster management and job scheduling system for Linux clusters
containing up to 65,536 nodes. Components include machine status,
partition management, job management, scheduling and accounting modules.
%package doc
Summary: Documentation for SLURM
Group: Documentation/Clustering/Computing
%description doc
Documentation (HTML) for the SLURM cluster management software.
%package -n perl-slurm
Summary: Perl API to SLURM
Group: Development/Languages/Perl
Requires: slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description -n perl-slurm
Perl API package for SLURM. This package includes the perl API to provide a
helpful interface to SLURM through Perl.
%package -n %{libslurm}
Summary: Libraries for slurm
Group: System/Libraries
%description -n %{libslurm}
This package contains the library needed to run programs dynamically linked
with slurm.
%package devel
Summary: Development package for SLURM
Group: Development/Libraries/C and C++
Requires: %{libslurm} = %{version}
Requires: slurm = %{version}
%description devel
Development package for SLURM. This package includes the header files
and libraries for the SLURM API.
%package auth-none
Summary: SLURM auth NULL implementation (no authentication)
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%description auth-none
This package contains the SLURM NULL authentication module.
%package munge
Summary: SLURM authentication and crypto implementation using Munge
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
Requires: munge
BuildRequires: munge-devel
Obsoletes: slurm-auth-munge < %{version}
Provides: slurm-auth-munge = %{version}
%description munge
This package contains the SLURM authentication module for Chris Dunlap's Munge.
%package sview
Summary: SLURM graphical interface
Group: Productivity/Clustering/Computing
%description sview
sview is a graphical user interface to get and update state information for
jobs, partitions, and nodes managed by SLURM.
%package sched-wiki
Summary: SLURM plugin for the Maui or Moab scheduler wiki interface
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%description sched-wiki
This package contains the SLURM plugin for the Maui or Moab scheduler wiki interface.
%package slurmdbd
Summary: SLURM database daemon
Group: Productivity/Clustering/Computing
Requires: slurm-plugins = %{version}
%if 0%{?with_systemd}
%{?systemd_requires}
%else
PreReq: %insserv_prereq %fillup_prereq
%endif
%description slurmdbd
The SLURM database daemon provides accounting of jobs in a database.
%package plugins
Summary: SLURM plugins (loadable shared objects)
Group: Productivity/Clustering/Computing
%description plugins
This package contains the SLURM plugins (loadable shared objects).
%package torque
Summary: Torque/PBS wrappers for transition from Torque/PBS to SLURM
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
Provides: torque-client
%description torque
Torque wrapper scripts used for helping migrate from Torque/PBS to SLURM.
%package openlava
Summary: OpenLava/LSF wrappers for transition from OpenLava/LSF to Slurm
Group: Development/System
Requires: slurm-perlapi
%description openlava
OpenLava wrapper scripts used for helping migrate from OpenLava/LSF to Slurm.
%package seff
Summary: Mail tool that includes job statistics in user notification email
Group: Development/System
Requires: slurm-perlapi
%description seff
Mail program used directly by the Slurm daemons. On completion of a job,
wait for its accounting information to be available and include that
information in the email body.
%package slurmdb-direct
Summary: Wrappers to write directly to the slurmdb
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description slurmdb-direct
This package contains the wrappers to write directly to the slurmdb.
%package sjstat
Summary: Perl tool to print SLURM job state information
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description sjstat
This package contains the perl tool to print SLURM job state information.
%package pam_slurm
Summary: PAM module for restricting access to compute nodes via SLURM
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
BuildRequires: pam-devel
%description pam_slurm
This module restricts access to compute nodes in a cluster where the Simple
Linux Utility for Resource Management (SLURM) is in use. Access is granted
to root, any user with a SLURM-launched job currently running on the node,
or any user who has allocated resources on the node according to the SLURM
database.
%package lua
Summary: Lua API for SLURM
Group: Development/Libraries/Other
Requires: slurm = %{version}
BuildRequires: lua-devel
%description lua
Lua API package for SLURM. This package includes the Lua API to provide a
helpful interface to SLURM through Lua.
%prep
%setup -q -n %{name}-%{name}-%{ver_exp}
%patch0 -p1
%patch1 -p1
%patch2 -p1
%patch3 -p1
%patch4 -p1
%build
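# Shared-only build without rpath; netloc support is requested only when
# have_netloc is defined, and configuration files live under sysconfdir/slurm.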
%configure --enable-shared \
--disable-static \
--without-rpath \
%{!?have_netloc:--without-netloc} \
--sysconfdir=%{_sysconfdir}/%{name}
make %{?_smp_mflags}
%install
%make_install
make install-contrib DESTDIR=$RPM_BUILD_ROOT PERL_MM_PARAMS="INSTALLDIRS=vendor"
rm -f $RPM_BUILD_ROOT/%{_sysconfdir}/slurm.conf.template
rm -f $RPM_BUILD_ROOT/%{_sbindir}/slurmconfgen.py
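# Install either systemd units (plus rc* convenience links) or SysV init
# scripts, depending on whether the target distribution uses systemd.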
%if 0%{?with_systemd}
mkdir -p %{buildroot}%{_unitdir}
install -p -m644 etc/slurmd.service etc/slurmdbd.service etc/slurmctld.service %{buildroot}%{_unitdir}
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmd
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmdbd
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmctld
%else
install -D -m755 etc/init.d.slurm $RPM_BUILD_ROOT%{_initrddir}/slurm
install -D -m755 etc/init.d.slurmdbd $RPM_BUILD_ROOT%{_initrddir}/slurmdbd
ln -sf %{_initrddir}/slurm %{buildroot}%{_sbindir}/rcslurm
ln -sf %{_initrddir}/slurmdbd %{buildroot}%{_sbindir}/rcslurmdbd
%endif
install -D -m644 etc/slurm.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.conf%{?OHPC_BUILD:.example}
install -D -m644 etc/slurmdbd.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurmdbd.conf
install -D -m644 etc/cgroup.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup.conf
install -D -m644 etc/cgroup_allowed_devices_file.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_common.example
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_freezer
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_cpuset
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_memory
install -D -m644 etc/slurmdbd.conf.example ${RPM_BUILD_ROOT}%{_sysconfdir}/%{name}/slurmdbd.conf.example
install -D -m755 etc/slurm.epilog.clean ${RPM_BUILD_ROOT}%{_sysconfdir}/%{name}/slurm.epilog.clean
install -D -m755 contribs/sgather/sgather ${RPM_BUILD_ROOT}%{_bindir}/sgather
install -D -m755 contribs/sjstat ${RPM_BUILD_ROOT}%{_bindir}/sjstat
%if 0%{?OHPC_BUILD}
# 6/16/15 karl.w.schulz@intel.com - do not package Slurm's version of libpmi with OpenHPC.
## rm -f $RPM_BUILD_ROOT/%%{_libdir}/libpmi*
## rm -f $RPM_BUILD_ROOT/%%{_libdir}/mpi_pmi2*
# 9/8/14 karl.w.schulz@intel.com - provide starting config file
head -n -2 $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf.example | grep -v ReturnToService > $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
sed -i 's#\(StateSaveLocation=\).*#\1%_localstatedir/lib/slurm#' $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "# OpenHPC default configuration" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "PropagateResourceLimitsExcept=MEMLOCK" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "SlurmdLogFile=/var/log/slurm.log" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "SlurmctldLogFile=/var/log/slurmctld.log" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "Epilog=/etc/slurm/slurm.epilog.clean" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "NodeName=c[1-4] Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "PartitionName=normal Nodes=c[1-4] Default=YES MaxTime=24:00:00 State=UP" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
# 6/3/16 nirmalasrjn@gmail.com - Adding ReturnToService Directive to starting config file (note removal of variable during above creation)
echo "ReturnToService=1" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
# 9/17/14 karl.w.schulz@intel.com - Add option to drop VM cache during epilog
sed -i '/^# No other SLURM jobs,/i \\n# Drop clean caches (OpenHPC)\necho 3 > /proc/sys/vm/drop_caches\n\n#' $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.epilog.clean
%{__mkdir_p} $RPM_BUILD_ROOT%_localstatedir/lib/slurm
%endif
# Delete unpackaged files:
rm -rf $RPM_BUILD_ROOT/%{_libdir}/slurm/*.{a,la} \
$RPM_BUILD_ROOT/%{_libdir}/*.la \
$RPM_BUILD_ROOT/%_lib/security/*.la \
$RPM_BUILD_ROOT/%{_mandir}/man5/bluegene*
rm -f $RPM_BUILD_ROOT%{_mandir}/man1/srun_cr* \
$RPM_BUILD_ROOT%{_bindir}/srun_cr \
$RPM_BUILD_ROOT%{_libexecdir}/slurm/cr_*
# Remove empty perl bootstrap (.bs) files:
test -s $RPM_BUILD_ROOT/%{_perldir}/auto/Slurm/Slurm.bs ||
rm -f $RPM_BUILD_ROOT/%{_perldir}/auto/Slurm/Slurm.bs
test -s $RPM_BUILD_ROOT/%{_perldir}/auto/Slurmdb/Slurmdb.bs ||
rm -f $RPM_BUILD_ROOT/%{_perldir}/auto/Slurmdb/Slurmdb.bs
rm doc/html/shtml2html.py doc/html/Makefile*
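# Clean up perl packaging artifacts and move config.slurmdb.pl to vendorarch: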
%{__rm} -f %{buildroot}/%{perl_archlib}/perllocal.pod
%{__rm} -f %{buildroot}/%{perl_vendorarch}/auto/Slurm/.packlist
%{__rm} -f %{buildroot}/%{perl_vendorarch}/auto/Slurmdb/.packlist
%{__mv} %{buildroot}/%{perl_sitearch}/config.slurmdb.pl %{buildroot}/%{perl_vendorarch}
# Build man pages that are generated directly by the tools
rm -f $RPM_BUILD_ROOT/%{_mandir}/man1/sjobexitmod.1
${RPM_BUILD_ROOT}%{_bindir}/sjobexitmod --roff > $RPM_BUILD_ROOT/%{_mandir}/man1/sjobexitmod.1
rm -f $RPM_BUILD_ROOT/%{_mandir}/man1/sjstat.1
${RPM_BUILD_ROOT}%{_bindir}/sjstat --roff > $RPM_BUILD_ROOT/%{_mandir}/man1/sjstat.1
# rpmlint reports wrong (CRLF) line endings for these files
sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qrerun
sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
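# Register the slurm library directories with the runtime linker: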
mkdir -p $RPM_BUILD_ROOT/etc/ld.so.conf.d
echo '%{_libdir}
%{_libdir}/slurm' > $RPM_BUILD_ROOT/etc/ld.so.conf.d/slurm.conf
chmod 644 $RPM_BUILD_ROOT/etc/ld.so.conf.d/slurm.conf
# Make pkg-config file
mkdir -p $RPM_BUILD_ROOT/%{_libdir}/pkgconfig
cat > $RPM_BUILD_ROOT/%{_libdir}/pkgconfig/slurm.pc <<EOF
includedir=%{_prefix}/include
libdir=%{_libdir}

Name: %{name}
Description: Slurm API
Version: %{version}
Cflags: -I\${includedir}
Libs: -L\${libdir} -lslurm
EOF
%fdupes -s $RPM_BUILD_ROOT
%pre
%if 0%{?with_systemd}
%service_add_pre slurmd.service
%service_add_pre slurmctld.service
%endif
%define slurmdir %{_sysconfdir}/slurm
%define slurmdescr "SLURM workload manager"
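# Create the slurm system group and account on first install; the account gets
# the config directory as home and no login shell.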
getent group %name >/dev/null || groupadd -r %name
getent passwd %name >/dev/null || useradd -r -g %name -d %slurmdir -s /bin/false -c %{slurmdescr} %name
exit 0
%post
%if 0%{?with_systemd}
%service_add_post slurmd.service
%service_add_post slurmctld.service
%else
%fillup_and_insserv slurm
%endif
%preun
%if 0%{?with_systemd}
%service_del_preun slurmd.service
%service_del_preun slurmctld.service
%else
%stop_on_removal slurmd
%endif
%postun
%if 0%{?with_systemd}
%service_del_postun slurmd.service
%service_del_postun slurmctld.service
%else
%restart_on_update slurmd
%insserv_cleanup
%endif
%if 0%{?with_systemd}
%pre slurmdbd
%service_add_pre slurmdbd.service
%endif
%post slurmdbd
%if 0%{?with_systemd}
%service_add_post slurmdbd.service
%else
%fillup_and_insserv slurmdbd
%endif
%preun slurmdbd
%if 0%{?with_systemd}
%service_del_preun slurmdbd.service
%else
%stop_on_removal slurmdbd
%endif
%postun slurmdbd
%if 0%{?with_systemd}
%service_del_postun slurmdbd.service
%else
%restart_on_update slurmdbd
%insserv_cleanup
%endif
%post -n %{libslurm} -p /sbin/ldconfig
%postun -n %{libslurm} -p /sbin/ldconfig
%files
%defattr(-,root,root)
%doc AUTHORS NEWS RELEASE_NOTES DISCLAIMER COPYING
%doc doc/html
%{_bindir}/generate_pbs_nodefile
%{_bindir}/sacct
%{_bindir}/sacctmgr
%{_bindir}/salloc
%{_bindir}/sattach
%{_bindir}/sbatch
%{_bindir}/sbcast
%{_bindir}/scancel
%{_bindir}/scontrol
%{_bindir}/sdiag
%{_bindir}/sgather
%{_bindir}/sinfo
%{_bindir}/sjobexitmod
%{_bindir}/sprio
%{_bindir}/squeue
%{_bindir}/sreport
%{_bindir}/srun
%{_bindir}/smap
%{_bindir}/sshare
%{_bindir}/sstat
%{_bindir}/strigger
%{?have_netloc: %{_bindir}/netloc_to_topology}
%{_sbindir}/slurmctld
%{_sbindir}/slurmd
%{_sbindir}/slurmstepd
%{_mandir}/man1/sacct.1*
%{_mandir}/man1/sacctmgr.1*
%{_mandir}/man1/salloc.1*
%{_mandir}/man1/sattach.1*
%{_mandir}/man1/sbatch.1*
%{_mandir}/man1/sbcast.1*
%{_mandir}/man1/scancel.1*
%{_mandir}/man1/scontrol.1*
%{_mandir}/man1/sdiag.1.*
%{_mandir}/man1/sgather.1.*
%{_mandir}/man1/sinfo.1*
%{_mandir}/man1/slurm.1*
%{_mandir}/man1/smap.1*
%{_mandir}/man1/sprio.1*
%{_mandir}/man1/squeue.1*
%{_mandir}/man1/sreport.1*
%{_mandir}/man1/srun.1*
%{_mandir}/man1/sshare.1*
%{_mandir}/man1/sstat.1*
%{_mandir}/man1/strigger.1*
%{_mandir}/man1/sh5util.1*
%{_mandir}/man1/sjobexitmod.1.*
%{_mandir}/man1/sjstat.1.*
%{_mandir}/man5/acct_gather.conf.*
%{_mandir}/man5/burst_buffer.conf.*
%{_mandir}/man5/ext_sensors.conf.*
%{_mandir}/man5/slurm.*
%{_mandir}/man5/cgroup.*
%{_mandir}/man5/cray.*
%{_mandir}/man5/gres.*
%{_mandir}/man5/nonstop.conf.5.*
%{_mandir}/man5/topology.*
%{_mandir}/man5/knl.conf.5.*
%{_mandir}/man8/slurmctld.*
%{_mandir}/man8/slurmd.*
%{_mandir}/man8/slurmstepd*
%{_mandir}/man8/spank*
%dir %{_libdir}/slurm/src
%dir %{_sysconfdir}/%{name}
%config(noreplace) %{_sysconfdir}/%{name}/slurm.conf
%{?OHPC_BUILD:%config %{_sysconfdir}/%{name}/slurm.conf.example}
%config(noreplace) %{_sysconfdir}/%{name}/cgroup.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
%config(noreplace) %{_sysconfdir}/%{name}/slurm.epilog.clean
%dir %{_sysconfdir}/%{name}/cgroup
%config(noreplace) %{_sysconfdir}/%{name}/cgroup/release_*
%if 0%{?with_systemd}
%{_unitdir}/slurmd.service
%{_unitdir}/slurmctld.service
%{_sbindir}/rcslurmd
%else
%{_initrddir}/slurm
%{_sbindir}/rcslurm
%endif
%{?with_systemd:%{_sbindir}/rcslurmctld}
%{?OHPC_BUILD:%attr(0755, %slurm_u, %slurm_g) %_localstatedir/lib/slurm}
%files openlava
%defattr(-,root,root)
%{_bindir}/bjobs
%{_bindir}/bkill
%{_bindir}/bsub
%{_bindir}/lsid
%files seff
%defattr(-,root,root)
%{_bindir}/seff
%{_bindir}/smail
%files doc
%defattr(-,root,root)
%dir %{_datadir}/doc/%{name}-%{vers_t %{version}}
%{_datadir}/doc/%{name}-%{vers_t %{version}}/*
%files -n %{libslurm}
%defattr(-,root,root)
%{_libdir}/*.so.*
%files devel
%defattr(-,root,root)
%{_prefix}/include/slurm
%{_libdir}/libpmi.so
%{_libdir}/libpmi2.so
%{_libdir}/libslurm.so
%{_libdir}/libslurmdb.so
%{_libdir}/slurm/src/*
%{_mandir}/man3/slurm_*
%{_libdir}/pkgconfig/slurm.pc
%files sview
%defattr(-,root,root)
%{_bindir}/sview
%{_mandir}/man1/sview.1*
%files sched-wiki
%defattr(-,root,root)
%{_libdir}/slurm/sched_wiki*.so
#%%{_mandir}/man5/wiki.*
%files auth-none
%defattr(-,root,root)
%{_libdir}/slurm/auth_none.so
%files munge
%defattr(-,root,root)
%{_libdir}/slurm/auth_munge.so
%{_libdir}/slurm/crypto_munge.so
%files -n perl-slurm
%defattr(-,root,root)
%{perl_vendorarch}/Slurm.pm
%{perl_vendorarch}/Slurm
%{perl_vendorarch}/auto/Slurm
%{perl_vendorarch}/Slurmdb.pm
%{perl_vendorarch}/auto/Slurmdb
%{_mandir}/man3/Slurm*.3pm.*
%files slurmdbd
%defattr(-,root,root)
%{_sbindir}/slurmdbd
%{_mandir}/man5/slurmdbd.*
%{_mandir}/man8/slurmdbd.*
%config(noreplace) %{_sysconfdir}/%{name}/slurmdbd.conf
%{_sysconfdir}/%{name}/slurmdbd.conf.example
%if 0%{?with_systemd}
%config %{_unitdir}/slurmdbd.service
%else
%{_initrddir}/slurmdbd
%endif
%{_sbindir}/rcslurmdbd
%files plugins
%defattr(-,root,root)
%{_sysconfdir}/ld.so.conf.d/slurm.conf
%dir %{_libdir}/slurm
%{_libdir}/slurm/accounting_storage_filetxt.so
%{_libdir}/slurm/accounting_storage_none.so
%{_libdir}/slurm/accounting_storage_slurmdbd.so
%{_libdir}/slurm/acct_gather_energy_none.so
%{_libdir}/slurm/acct_gather_energy_rapl.so
%{_libdir}/slurm/acct_gather_energy_cray.so
%{_libdir}/slurm/acct_gather_energy_ibmaem.so
%{_libdir}/slurm/acct_gather_filesystem_lustre.so
%{_libdir}/slurm/acct_gather_filesystem_none.so
%{_libdir}/slurm/acct_gather_infiniband_none.so
%{_libdir}/slurm/acct_gather_profile_none.so
%{_libdir}/slurm/burst_buffer_generic.so
%{_libdir}/slurm/checkpoint_none.so
%{_libdir}/slurm/checkpoint_ompi.so
%{_libdir}/slurm/core_spec_cray.so
%{_libdir}/slurm/core_spec_none.so
%{_libdir}/slurm/ext_sensors_none.so
%{_libdir}/slurm/jobacct_gather_aix.so
%{_libdir}/slurm/jobacct_gather_linux.so
%{_libdir}/slurm/jobacct_gather_none.so
%{_libdir}/slurm/job_container_cncu.so
%{_libdir}/slurm/job_container_none.so
%{_libdir}/slurm/jobcomp_none.so
%{_libdir}/slurm/jobcomp_filetxt.so
%{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_submit_cray.so
%{_libdir}/slurm/job_submit_pbs.so
%{_libdir}/slurm/job_submit_require_timelimit.so
%{_libdir}/slurm/job_submit_throttle.so
%{_libdir}/slurm/layouts_power_cpufreq.so
%{_libdir}/slurm/layouts_power_default.so
%{_libdir}/slurm/layouts_unit_default.so
%{_libdir}/slurm/mpi_lam.so
%{_libdir}/slurm/mpi_mpich1_p4.so
%{_libdir}/slurm/mpi_mpich1_shmem.so
%{_libdir}/slurm/mpi_mpichgm.so
%{_libdir}/slurm/mpi_mpichmx.so
%{_libdir}/slurm/mpi_mvapich.so
%{_libdir}/slurm/mpi_none.so
%{_libdir}/slurm/mpi_openmpi.so
%{_libdir}/slurm/power_none.so
%{_libdir}/slurm/preempt_job_prio.so
%{_libdir}/slurm/preempt_none.so
%{_libdir}/slurm/preempt_partition_prio.so
%{_libdir}/slurm/preempt_qos.so
%{_libdir}/slurm/priority_basic.so
%{_libdir}/slurm/proctrack_pgid.so
%{_libdir}/slurm/proctrack_linuxproc.so
%{_libdir}/slurm/route_default.so
%{_libdir}/slurm/route_topology.so
%{_libdir}/slurm/sched_backfill.so
%{_libdir}/slurm/sched_builtin.so
%{_libdir}/slurm/sched_hold.so
%{_libdir}/slurm/select_alps.so
%{_libdir}/slurm/select_cons_res.so
%{_libdir}/slurm/select_linear.so
%{_libdir}/slurm/slurmctld_nonstop.so
%{_libdir}/slurm/switch_cray.so
%{_libdir}/slurm/switch_generic.so
%{_libdir}/slurm/switch_none.so
%{_libdir}/slurm/spank_pbs.so
%{_libdir}/slurm/task_cray.so
%{_libdir}/slurm/task_none.so
%{_libdir}/slurm/topology_3d_torus.so
%{_libdir}/slurm/topology_hypercube.so
%{_libdir}/slurm/topology_none.so
%{_libdir}/slurm/topology_tree.so
%{_libdir}/slurm/accounting_storage_mysql.so
%{_libdir}/slurm/crypto_openssl.so
%{_libdir}/slurm/jobcomp_mysql.so
%{_libdir}/slurm/task_affinity.so
%{_libdir}/slurm/gres_gpu.so
%{_libdir}/slurm/gres_mic.so
%{_libdir}/slurm/gres_nic.so
%{_libdir}/slurm/job_submit_all_partitions.so
#%%{_libdir}/slurm/job_submit_cnode.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
%{_libdir}/slurm/job_submit_partition.so
%{_libdir}/slurm/jobacct_gather_cgroup.so
%{_libdir}/slurm/launch_slurm.so
%{_libdir}/slurm/mpi_pmi2.so
%{_libdir}/slurm/proctrack_cgroup.so
%{_libdir}/slurm/priority_multifactor.so
%{_libdir}/slurm/select_bluegene.so
%{_libdir}/slurm/select_cray.so
%{_libdir}/slurm/select_serial.so
%{_libdir}/slurm/task_cgroup.so
%{_libdir}/slurm/topology_node_rank.so
%{_libdir}/slurm/mcs_group.so
%{_libdir}/slurm/mcs_none.so
%{_libdir}/slurm/mcs_user.so
%if 0%{?suse_version} > 1310
%{_libdir}/slurm/acct_gather_infiniband_ofed.so
%endif
%if 0%{?suse_version} > 1140
%ifarch %{ix86} x86_64
%{_libdir}/slurm/acct_gather_energy_ipmi.so
%endif
%endif
%{_libdir}/slurm/node_features_knl_generic.so
%files lua
%defattr(-,root,root)
%{_libdir}/slurm/job_submit_lua.so
%{_libdir}/slurm/proctrack_lua.so
%files torque
%defattr(-,root,root)
%{_bindir}/pbsnodes
%{_bindir}/qalter
%{_bindir}/qdel
%{_bindir}/qhold
%{_bindir}/qrls
%{_bindir}/qrerun
%{_bindir}/qstat
%{_bindir}/qsub
%{_bindir}/mpiexec
%files slurmdb-direct
%defattr(-,root,root)
%config (noreplace) %{perl_vendorarch}/config.slurmdb.pl
%{_sbindir}/moab_2_slurmdb
%files sjstat
%defattr(-,root,root)
%{_bindir}/sjstat
%files pam_slurm
%defattr(-,root,root)
/%_lib/security/pam_slurm.so
/%_lib/security/pam_slurm_adopt.so
%changelog