forked from pool/slurm

Accepting request 532262 from network:cluster

- Trim redundant wording in descriptions. (forwarded request 532228 from jengelh)

OBS-URL: https://build.opensuse.org/request/show/532262
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/slurm?expand=0&rev=3
Dominique Leuenberger 2017-10-13 12:13:38 +00:00 committed by Git OBS Bridge
commit 395325315d
7 changed files with 199 additions and 143 deletions


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c162d56138360543a9a0f2486ae671c588883685a80eda028e9e17541a1f7b1
size 8432017

slurm-17-02-7-1.tar.gz Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca2ddc5c1b2c747b5a04170b499cf1db28c71c059eac2be58d60ebbded3cefdf
size 8339516


@@ -1,3 +1,44 @@
-------------------------------------------------------------------
Fri Oct 6 13:53:08 UTC 2017 - jengelh@inai.de
- Trim redundant wording in descriptions.
-------------------------------------------------------------------
Wed Sep 27 11:08:29 UTC 2017 - jjolly@suse.com
- Updated to slurm 17-02-7-1
* Added python as BuildRequires
* Removed sched-wiki package
* Removed slurmdb-direct package
* Obsoleted sched-wiki and slurmdb-direct packages
* Removing Cray-specific files
* Added /etc/slurm/layout.d files (new for this version)
* Remove /etc/slurm/cgroup files from package
* Added lib/slurm/mcs_account.so
* Removed lib/slurm/jobacct_gather_aix.so
* Removed lib/slurm/job_submit_cnode.so
- Created slurm-sql package
- Moved files from slurm-plugins to slurm-torque package
- Moved creation of /usr/lib/tmpfiles.d/slurm.conf into slurm.spec
* Removed tmpfiles.d-slurm.conf
- Changed /var/run path for slurm daemons to /var/run/slurm
* Added slurmctld-service-var-run-path.patch
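The /var/run/slurm change described above depends on a tmpfiles.d entry so the runtime directory exists at boot; a minimal sketch of what that entry amounts to, using a scratch root in place of / and skipping the ownership change (both assumptions for illustration, since the real entry runs as root and chowns to the slurm user):

```shell
# Sketch of the tmpfiles.d entry "d /var/run/slurm 0700 slurm slurm":
# create the daemons' runtime directory with mode 0700. A scratch root
# stands in for / so this runs unprivileged.
root=$(mktemp -d)
mkdir -p "$root/var/run/slurm"
chmod 0700 "$root/var/run/slurm"
# At boot, systemd-tmpfiles --create performs the equivalent for real.
stat -c '%a' "$root/var/run/slurm"
```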
-------------------------------------------------------------------
Tue Sep 12 16:00:11 UTC 2017 - jjolly@suse.com
- Made tmpfiles_create post-install macro SLE12 SP2 or greater
- Directly calling systemd-tmpfiles --create for before SLE12 SP2
-------------------------------------------------------------------
Mon Jul 10 03:35:41 UTC 2017 - jjolly@suse.com
- Allows OpenSUSE Factory build as well
- Removes unused .service files from project
- Adds /var/run/slurm to /usr/lib/tmpfiles.d for boottime creation
* Patches upstream .service files to allow for /var/run/slurm path
* Modifies slurm.conf to allow for /var/run/slurm path
-------------------------------------------------------------------
Tue May 30 10:24:09 UTC 2017 - eich@suse.com


@@ -1,11 +0,0 @@
[Unit]
Description=SLURM is a simple resource management system
After=network.target
[Service]
Type=forking
EnvironmentFile=-/etc/sysconfig/slurm
ExecStart=/usr/sbin/slurmd
[Install]
WantedBy=multi-user.target


@@ -17,7 +17,7 @@
# For anything newer than Leap 42.1 and SLE-12-SP1 build compatible to OpenHPC.
%if 0%{?sle_version} >= 120200
%if 0%{suse_version} > 1320 || 0%{?sle_version} >= 120200
%define OHPC_BUILD 1
%endif
@@ -40,7 +40,7 @@
%endif
%define libslurm libslurm29
%define ver_exp 16-05-8-1
%define ver_exp 17-02-7-1
%if 0%{?with_systemd}
%define slurm_u %name
@@ -63,13 +63,12 @@ License: SUSE-GPL-2.0-with-openssl-exception
Group: Productivity/Clustering/Computing
Url: https://computing.llnl.gov/linux/slurm/
Source: https://github.com/SchedMD/slurm/archive/%{name}-%{ver_exp}.tar.gz
Source1: slurm.service
Source2: slurmdbd.service
Patch0: slurm-2.4.4-rpath.patch
Patch1: slurm-2.4.4-init.patch
Patch2: slurmd-Fix-slurmd-for-new-API-in-hwloc-2.0.patch
Patch3: plugins-cgroup-Fix-slurmd-for-new-API-in-hwloc-2.0.patch
Patch4: pam_slurm-Initialize-arrays-and-pass-sizes.patch
Patch5: slurmctld-service-var-run-path.patch
Requires: slurm-plugins = %{version}
%if 0%{?suse_version} <= 1140
Requires(pre): pwdutils
@@ -81,6 +80,7 @@ BuildRequires: gcc-c++
BuildRequires: gtk2-devel
BuildRequires: libbitmask-devel
BuildRequires: libcpuset-devel
BuildRequires: python
%if 0%{?have_libnuma}
BuildRequires: libnuma-devel
%endif
@@ -108,20 +108,21 @@ PreReq: %insserv_prereq %fillup_prereq
%endif
BuildRoot: %{_tmppath}/%{name}-%{version}-build
Recommends: %{name}-munge
Obsoletes: slurm-sched-wiki < %{version}
Obsoletes: slurmdb-direct < %{version}
%description
SLURM is an open source, fault-tolerant, and highly
scalable cluster management and job scheduling system for Linux
clusters containing up to 65,536 nodes. Components include machine
status, partition management, job management, scheduling and
accounting modules.
SLURM is a fault-tolerant scalable cluster management and job
scheduling system for Linux clusters containing up to 65,536 nodes.
Components include machine status, partition management, job
management, scheduling and accounting modules.
%package doc
Summary: Documentation for SLURM
Group: Documentation/HTML
%description doc
Documentation (html) for the SLURM cluster management software.
Documentation (HTML) for the SLURM cluster management software.
%package -n perl-slurm
Summary: Perl API to SLURM
@@ -134,16 +135,16 @@ Requires: perl = %{perl_version}
%endif
%description -n perl-slurm
Perl API package for SLURM. This package includes the perl API to provide a
helpful interface to SLURM through Perl.
This package includes the Perl API to provide an interface to SLURM
through Perl.
%package -n %{libslurm}
Summary: Libraries for slurm
Summary: Libraries for SLURM
Group: System/Libraries
%description -n %{libslurm}
This package contains the library needed to run programs dynamically linked
with slurm.
with SLURM.
%package devel
@@ -153,8 +154,7 @@ Requires: %{libslurm} = %{version}
Requires: slurm = %{version}
%description devel
Development package for SLURM. This package includes the header files
and libraries for the SLURM API.
This package includes the header files for the SLURM API.
%package auth-none
@@ -176,7 +176,7 @@ Obsoletes: slurm-auth-munge < %{version}
Provides: slurm-auth-munge = %{version}
%description munge
This package contains the SLURM authentication module for Chris Dunlap''s Munge.
This package contains the SLURM authentication module for Chris Dunlap's Munge.
%package sview
Summary: SLURM graphical interface
@@ -187,15 +187,6 @@ sview is a graphical user interface to get and update state information for
jobs, partitions, and nodes managed by SLURM.
%package sched-wiki
Summary: SLURM plugin for the Maui or Moab scheduler wiki interface
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%description sched-wiki
This package contains the SLURM plugin for the Maui or Moab scheduler wiki interface.
%package slurmdbd
Summary: SLURM database daemon
Group: Productivity/Clustering/Computing
@@ -205,11 +196,21 @@ Requires: slurm-plugins = %{version}
%else
PreReq: %insserv_prereq %fillup_prereq
%endif
Obsoletes: slurm-sched-wiki < %{version}
Obsoletes: slurmdb-direct < %{version}
%description slurmdbd
The SLURM database daemon provides accounting of jobs in a database.
%package sql
Summary: Slurm SQL support
Group: Productivity/Clustering/Computing
%description sql
Contains interfaces to MySQL for use by SLURM.
%package plugins
Summary: SLURM plugins (loadable shared objects)
Group: Productivity/Clustering/Computing
@@ -218,21 +219,21 @@ Group: Productivity/Clustering/Computing
This package contains the SLURM plugins (loadable shared objects)
%package torque
Summary: Torque/PBS wrappers for transition from Torque/PBS to SLURM
Summary: Wrappers for transition from Torque/PBS to SLURM
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
Provides: torque-client
%description torque
Torque wrapper scripts used for helping migrate from Torque/PBS to SLURM.
Wrapper scripts for aiding migration from Torque/PBS to SLURM.
%package openlava
Summary: Openlava/LSF wrappers for transition from OpenLava/LSF to Slurm
Summary: Wrappers for transition from OpenLava/LSF to Slurm
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
%description openlava
OpenLava wrapper scripts used for helping migrate from OpenLava/LSF to Slurm
Wrapper scripts for aiding migration from OpenLava/LSF to Slurm
%package seff
Summary: Mail tool that includes job statistics in user notification email
@@ -240,25 +241,11 @@ Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
%description seff
Mail program used directly by the Slurm daemons. On completion of a job,
wait for it''s accounting information to be available and include that
Mail program used directly by the SLURM daemons. On completion of a job,
it waits for accounting information to be available and includes that
information in the email body.
%package slurmdb-direct
Summary: Wrappers to write directly to the slurmdb
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description slurmdb-direct
This package contains the wrappers to write directly to the slurmdb.
%package sjstat
Summary: Perl tool to print SLURM job state information
Group: Productivity/Clustering/Computing
@@ -270,7 +257,7 @@ Requires: perl = %{perl_version}
%endif
%description sjstat
This package contains the perl tool to print SLURM job state information.
This package contains a Perl tool to print SLURM job state information.
%package pam_slurm
Summary: PAM module for restricting access to compute nodes via SLURM
@@ -280,9 +267,9 @@ BuildRequires: pam-devel
%description pam_slurm
This module restricts access to compute nodes in a cluster where the Simple
Linux Utility for Resource Management (SLURM) is in use. Access is granted
Linux Utility for Resource Management (SLURM) is in use. Access is granted
to root, any user with an SLURM-launched job currently running on the node,
or any user who has allocated resources on the node according to the SLURM
or any user who has allocated resources on the node according to the SLURM.
%package lua
Summary: Lua API for SLURM
@@ -291,8 +278,8 @@ Requires: slurm = %{version}
BuildRequires: lua-devel
%description lua
LUA API package for SLURM. This package includes the lua API to provide a
helpful interface to SLURM through LUA.
This package includes the Lua API to provide an interface to SLURM
through Lua.
%prep
%setup -q -n %{name}-%{name}-%{ver_exp}
@@ -301,6 +288,7 @@ helpful interface to SLURM through LUA.
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
%build
%configure --enable-shared \
@@ -313,8 +301,6 @@ make %{?_smp_mflags}
%install
%make_install
make install-contrib DESTDIR=%{buildroot} PERL_MM_PARAMS="INSTALLDIRS=vendor"
rm -f %{buildroot}/%{_sysconfdir}/slurm.conf.template
rm -f %{buildroot}/%{_sbindir}/slurmconfgen.py
%if 0%{?with_systemd}
mkdir -p %{buildroot}%{_unitdir}
@@ -322,6 +308,12 @@ install -p -m644 etc/slurmd.service etc/slurmdbd.service etc/slurmctld.service %
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmd
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmdbd
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmctld
install -d -m 0755 %{buildroot}/%{_tmpfilesdir}/
cat <<-EOF > %{buildroot}/%{_tmpfilesdir}/%{name}.conf
# Create a directory with permissions 0700 owned by user slurm, group slurm
d /var/run/slurm 0700 slurm slurm
EOF
chmod 0644 %{buildroot}/%{_tmpfilesdir}/%{name}.conf
%else
install -D -m755 etc/init.d.slurm %{buildroot}%{_initrddir}/slurm
install -D -m755 etc/init.d.slurmdbd %{buildroot}%{_initrddir}/slurmdbd
@@ -329,18 +321,24 @@ ln -sf %{_initrddir}/slurm %{buildroot}%{_sbindir}/rcslurm
ln -sf %{_initrddir}/slurmdbd %{buildroot}%{_sbindir}/rcslurmdbd
%endif
install -D -m644 etc/slurm.conf.example %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf%{?OHPC_BUILD:.example}
install -D -m644 etc/slurmdbd.conf.example %{buildroot}/%{_sysconfdir}/%{name}/slurmdbd.conf
rm -f contribs/cray/opt_modulefiles_slurm
rm -f %{buildroot}%{_sysconfdir}/plugstack.conf.template
rm -f %{buildroot}%{_sysconfdir}/slurm.conf.template
rm -f %{buildroot}%{_sbindir}/capmc_suspend
rm -f %{buildroot}%{_sbindir}/capmc_resume
rm -f %{buildroot}%{_sbindir}/slurmconfgen.py
install -D -m644 etc/cgroup.conf.example %{buildroot}/%{_sysconfdir}/%{name}/cgroup.conf
install -D -m644 etc/cgroup_allowed_devices_file.conf.example %{buildroot}/%{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
install -D -m755 etc/cgroup.release_common.example %{buildroot}/%{_sysconfdir}/%{name}/cgroup/release_common.example
install -D -m755 etc/cgroup.release_common.example %{buildroot}/%{_sysconfdir}/%{name}/cgroup/release_freezer
install -D -m755 etc/cgroup.release_common.example %{buildroot}/%{_sysconfdir}/%{name}/cgroup/release_cpuset
install -D -m755 etc/cgroup.release_common.example %{buildroot}/%{_sysconfdir}/%{name}/cgroup/release_memory
install -D -m644 etc/slurmdbd.conf.example %{buildroot}%{_sysconfdir}/%{name}/slurmdbd.conf.example
install -D -m644 etc/layouts.d.power.conf.example %{buildroot}/%{_sysconfdir}/%{name}/layouts.d/power.conf.example
install -D -m644 etc/layouts.d.power_cpufreq.conf.example %{buildroot}/%{_sysconfdir}/%{name}/layouts.d/power_cpufreq.conf.example
install -D -m644 etc/layouts.d.unit.conf.example %{buildroot}/%{_sysconfdir}/%{name}/layouts.d/unit.conf.example
install -D -m644 etc/slurm.conf.example %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf%{?OHPC_BUILD:.example}
install -D -m755 etc/slurm.epilog.clean %{buildroot}%{_sysconfdir}/%{name}/slurm.epilog.clean
install -D -m755 contribs/sgather/sgather %{buildroot}%{_bindir}/sgather
install -D -m644 etc/slurmdbd.conf.example %{buildroot}/%{_sysconfdir}/%{name}/slurmdbd.conf
install -D -m644 etc/slurmdbd.conf.example %{buildroot}%{_sysconfdir}/%{name}/slurmdbd.conf.example
install -D -m755 contribs/sjstat %{buildroot}%{_bindir}/sjstat
install -D -m755 contribs/sgather/sgather %{buildroot}%{_bindir}/sgather
%if 0%{?OHPC_BUILD}
# 6/16/15 karl.w.schulz@intel.com - do not package Slurm's version of libpmi with OpenHPC.
@@ -349,6 +347,8 @@ install -D -m755 contribs/sjstat %{buildroot}%{_bindir}/sjstat
# 9/8/14 karl.w.schulz@intel.com - provide starting config file
head -n -2 %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf.example | grep -v ReturnToService > %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
sed -i 's#\(StateSaveLocation=\).*#\1%_localstatedir/lib/slurm#' %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
sed -i 's#^\(SlurmdPidFile=\).*$#\1%{_localstatedir}/run/slurm/slurmd.pid#' %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
sed -i 's#^\(SlurmctldPidFile=\).*$#\1%{_localstatedir}/run/slurm/slurmctld.pid#' %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
echo "# OpenHPC default configuration" >> %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
echo "PropagateResourceLimitsExcept=MEMLOCK" >> %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
echo "SlurmdLogFile=/var/log/slurm.log" >> %{buildroot}/%{_sysconfdir}/%{name}/slurm.conf
@@ -364,6 +364,12 @@ mkdir -p %{buildroot}/%_localstatedir/lib/slurm
%endif
# Delete unpackaged files:
test -s %{buildroot}/%{_perldir}/auto/Slurm/Slurm.bs ||
rm -f %{buildroot}/%{_perldir}/auto/Slurm/Slurm.bs
test -s %{buildroot}/%{_perldir}/auto/Slurmdb/Slurmdb.bs ||
rm -f %{buildroot}/%{_perldir}/auto/Slurmdb/Slurmdb.bs
rm -rf %{buildroot}/%{_libdir}/slurm/*.{a,la} \
%{buildroot}/%{_libdir}/*.la \
%{buildroot}/%_lib/security/*.la \
@@ -373,19 +379,11 @@ rm -f %{buildroot}/%{_mandir}/man1/srun_cr* \
%{buildroot}/%{_bindir}/srun_cr \
%{buildroot}/%{_libexecdir}/slurm/cr_*
# Delete unpackaged files:
test -s %{buildroot}/%{_perldir}/auto/Slurm/Slurm.bs ||
rm -f %{buildroot}/%{_perldir}/auto/Slurm/Slurm.bs
test -s %{buildroot}/%{_perldir}/auto/Slurmdb/Slurmdb.bs ||
rm -f %{buildroot}/%{_perldir}/auto/Slurmdb/Slurmdb.bs
rm doc/html/shtml2html.py doc/html/Makefile*
rm -f %{buildroot}/%{perl_archlib}/perllocal.pod
rm -f %{buildroot}/%{perl_vendorarch}/auto/Slurm/.packlist
rm -f %{buildroot}/%{perl_vendorarch}/auto/Slurmdb/.packlist
mv %{buildroot}/%{perl_sitearch}/config.slurmdb.pl %{buildroot}/%{perl_vendorarch}
# Build man pages that are generated directly by the tools
rm -f %{buildroot}/%{_mandir}/man1/sjobexitmod.1
@@ -433,6 +431,11 @@ exit 0
%post
%if 0%{?with_systemd}
%if 0%{?sle_version} >= 120200
%tmpfiles_create slurm.conf
%else
systemd-tmpfiles --create slurm.conf
%endif
%service_add_post slurmd.service
%service_add_post slurmctld.service
%else
@@ -491,7 +494,6 @@ exit 0
%defattr(-,root,root)
%doc AUTHORS NEWS RELEASE_NOTES DISCLAIMER COPYING
%doc doc/html
%{_bindir}/generate_pbs_nodefile
%{_bindir}/sacct
%{_bindir}/sacctmgr
%{_bindir}/salloc
@@ -536,7 +538,6 @@ exit 0
%{_mandir}/man1/sshare.1*
%{_mandir}/man1/sstat.1*
%{_mandir}/man1/strigger.1*
%{_mandir}/man1/sh5util.1*
%{_mandir}/man1/sjobexitmod.1.*
%{_mandir}/man1/sjstat.1.*
%{_mandir}/man5/acct_gather.conf.*
@@ -555,23 +556,26 @@ exit 0
%{_mandir}/man8/spank*
%dir %{_libdir}/slurm/src
%dir %{_sysconfdir}/%{name}
%dir %{_sysconfdir}/%{name}/layouts.d
%config(noreplace) %{_sysconfdir}/%{name}/slurm.conf
%{?OHPC_BUILD:%config %{_sysconfdir}/%{name}/slurm.conf.example}
%config(noreplace) %{_sysconfdir}/%{name}/cgroup.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
%config(noreplace) %{_sysconfdir}/%{name}/slurm.epilog.clean
%dir %{_sysconfdir}/%{name}/cgroup
%config(noreplace) %{_sysconfdir}/%{name}/cgroup/release_*
%config(noreplace) %{_sysconfdir}/%{name}/layouts.d/power.conf.example
%config(noreplace) %{_sysconfdir}/%{name}/layouts.d/power_cpufreq.conf.example
%config(noreplace) %{_sysconfdir}/%{name}/layouts.d/unit.conf.example
%if 0%{?with_systemd}
%{_unitdir}/slurmd.service
%{_unitdir}/slurmctld.service
%{_sbindir}/rcslurmd
%{_sbindir}/rcslurmctld
%else
%{_initrddir}/slurm
%{_sbindir}/rcslurm
%endif
%{?with_systemd:%{_sbindir}/rcslurmctld}
%{?OHPC_BUILD:%attr(0755, %slurm_u, %slurm_g) %_localstatedir/lib/slurm}
%{?with_systemd:%{_tmpfilesdir}/%{name}.conf}
%files openlava
%defattr(-,root,root)
@@ -610,11 +614,6 @@ exit 0
%{_bindir}/sview
%{_mandir}/man1/sview.1*
%files sched-wiki
%defattr(-,root,root)
%{_libdir}/slurm/sched_wiki*.so
#%%{_mandir}/man5/wiki.*
%files auth-none
%defattr(-,root,root)
%{_libdir}/slurm/auth_none.so
@@ -647,6 +646,12 @@ exit 0
%endif
%{_sbindir}/rcslurmdbd
%files sql
%defattr(-,root,root)
%dir %{_libdir}/slurm
%{_libdir}/slurm/accounting_storage_mysql.so
%{_libdir}/slurm/jobcomp_mysql.so
%files plugins
%defattr(-,root,root)
%{_sysconfdir}/ld.so.conf.d/slurm.conf
@@ -654,10 +659,10 @@ exit 0
%{_libdir}/slurm/accounting_storage_filetxt.so
%{_libdir}/slurm/accounting_storage_none.so
%{_libdir}/slurm/accounting_storage_slurmdbd.so
%{_libdir}/slurm/acct_gather_energy_none.so
%{_libdir}/slurm/acct_gather_energy_rapl.so
%{_libdir}/slurm/acct_gather_energy_cray.so
%{_libdir}/slurm/acct_gather_energy_ibmaem.so
%{_libdir}/slurm/acct_gather_energy_none.so
%{_libdir}/slurm/acct_gather_energy_rapl.so
%{_libdir}/slurm/acct_gather_filesystem_lustre.so
%{_libdir}/slurm/acct_gather_filesystem_none.so
%{_libdir}/slurm/acct_gather_infiniband_none.so
@@ -667,22 +672,34 @@ exit 0
%{_libdir}/slurm/checkpoint_ompi.so
%{_libdir}/slurm/core_spec_cray.so
%{_libdir}/slurm/core_spec_none.so
%{_libdir}/slurm/crypto_openssl.so
%{_libdir}/slurm/ext_sensors_none.so
%{_libdir}/slurm/jobacct_gather_aix.so
%{_libdir}/slurm/gres_gpu.so
%{_libdir}/slurm/gres_mic.so
%{_libdir}/slurm/gres_nic.so
%{_libdir}/slurm/jobacct_gather_cgroup.so
%{_libdir}/slurm/jobacct_gather_linux.so
%{_libdir}/slurm/jobacct_gather_none.so
%{_libdir}/slurm/jobcomp_filetxt.so
%{_libdir}/slurm/jobcomp_none.so
%{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_container_cncu.so
%{_libdir}/slurm/job_container_none.so
%{_libdir}/slurm/jobcomp_none.so
%{_libdir}/slurm/jobcomp_filetxt.so
%{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_submit_all_partitions.so
%{_libdir}/slurm/job_submit_cray.so
%{_libdir}/slurm/job_submit_pbs.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
%{_libdir}/slurm/job_submit_partition.so
%{_libdir}/slurm/job_submit_require_timelimit.so
%{_libdir}/slurm/job_submit_throttle.so
%{_libdir}/slurm/launch_slurm.so
%{_libdir}/slurm/layouts_power_cpufreq.so
%{_libdir}/slurm/layouts_power_default.so
%{_libdir}/slurm/layouts_unit_default.so
%{_libdir}/slurm/mcs_account.so
%{_libdir}/slurm/mcs_group.so
%{_libdir}/slurm/mcs_none.so
%{_libdir}/slurm/mcs_user.so
%{_libdir}/slurm/mpi_lam.so
%{_libdir}/slurm/mpi_mpich1_p4.so
%{_libdir}/slurm/mpi_mpich1_shmem.so
@@ -691,58 +708,41 @@ exit 0
%{_libdir}/slurm/mpi_mvapich.so
%{_libdir}/slurm/mpi_none.so
%{_libdir}/slurm/mpi_openmpi.so
%{_libdir}/slurm/mpi_pmi2.so
%{_libdir}/slurm/power_none.so
%{_libdir}/slurm/preempt_job_prio.so
%{_libdir}/slurm/preempt_none.so
%{_libdir}/slurm/preempt_partition_prio.so
%{_libdir}/slurm/preempt_qos.so
%{_libdir}/slurm/priority_basic.so
%{_libdir}/slurm/proctrack_pgid.so
%{_libdir}/slurm/priority_multifactor.so
%{_libdir}/slurm/proctrack_cgroup.so
%{_libdir}/slurm/proctrack_linuxproc.so
%{_libdir}/slurm/proctrack_pgid.so
%{_libdir}/slurm/route_default.so
%{_libdir}/slurm/route_topology.so
%{_libdir}/slurm/sched_backfill.so
%{_libdir}/slurm/sched_builtin.so
%{_libdir}/slurm/sched_hold.so
%{_libdir}/slurm/select_alps.so
%{_libdir}/slurm/select_bluegene.so
%{_libdir}/slurm/select_cons_res.so
%{_libdir}/slurm/select_cray.so
%{_libdir}/slurm/select_linear.so
%{_libdir}/slurm/select_serial.so
%{_libdir}/slurm/slurmctld_nonstop.so
%{_libdir}/slurm/switch_cray.so
%{_libdir}/slurm/switch_generic.so
%{_libdir}/slurm/switch_none.so
%{_libdir}/slurm/spank_pbs.so
%{_libdir}/slurm/task_affinity.so
%{_libdir}/slurm/task_cgroup.so
%{_libdir}/slurm/task_cray.so
%{_libdir}/slurm/task_none.so
%{_libdir}/slurm/topology_3d_torus.so
%{_libdir}/slurm/topology_hypercube.so
%{_libdir}/slurm/topology_node_rank.so
%{_libdir}/slurm/topology_none.so
%{_libdir}/slurm/topology_tree.so
%{_libdir}/slurm/accounting_storage_mysql.so
%{_libdir}/slurm/crypto_openssl.so
%{_libdir}/slurm/jobcomp_mysql.so
%{_libdir}/slurm/task_affinity.so
%{_libdir}/slurm/gres_gpu.so
%{_libdir}/slurm/gres_mic.so
%{_libdir}/slurm/gres_nic.so
%{_libdir}/slurm/job_submit_all_partitions.so
#%%{_libdir}/slurm/job_submit_cnode.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
%{_libdir}/slurm/job_submit_partition.so
%{_libdir}/slurm/jobacct_gather_cgroup.so
%{_libdir}/slurm/launch_slurm.so
%{_libdir}/slurm/mpi_pmi2.so
%{_libdir}/slurm/proctrack_cgroup.so
%{_libdir}/slurm/priority_multifactor.so
%{_libdir}/slurm/select_bluegene.so
%{_libdir}/slurm/select_cray.so
%{_libdir}/slurm/select_serial.so
%{_libdir}/slurm/task_cgroup.so
%{_libdir}/slurm/topology_node_rank.so
%{_libdir}/slurm/mcs_group.so
%{_libdir}/slurm/mcs_none.so
%{_libdir}/slurm/mcs_user.so
%if 0%{?suse_version} > 1310
%{_libdir}/slurm/acct_gather_infiniband_ofed.so
%endif
@@ -769,11 +769,9 @@ exit 0
%{_bindir}/qstat
%{_bindir}/qsub
%{_bindir}/mpiexec.slurm
%files slurmdb-direct
%defattr(-,root,root)
%config (noreplace) %{perl_vendorarch}/config.slurmdb.pl
%{_sbindir}/moab_2_slurmdb
%{_bindir}/generate_pbs_nodefile
%{_libdir}/slurm/job_submit_pbs.so
%{_libdir}/slurm/spank_pbs.so
%files sjstat
%defattr(-,root,root)


@@ -0,0 +1,39 @@
Index: slurm-slurm-16-05-8-1/etc/slurmctld.service.in
===================================================================
--- slurm-slurm-16-05-8-1.orig/etc/slurmctld.service.in
+++ slurm-slurm-16-05-8-1/etc/slurmctld.service.in
@@ -8,7 +8,7 @@ Type=forking
EnvironmentFile=-/etc/sysconfig/slurmctld
ExecStart=@sbindir@/slurmctld $SLURMCTLD_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
-PIDFile=/var/run/slurmctld.pid
+PIDFile=/var/run/slurm/slurmctld.pid
[Install]
WantedBy=multi-user.target
Index: slurm-slurm-16-05-8-1/etc/slurmd.service.in
===================================================================
--- slurm-slurm-16-05-8-1.orig/etc/slurmd.service.in
+++ slurm-slurm-16-05-8-1/etc/slurmd.service.in
@@ -8,7 +8,7 @@ Type=forking
EnvironmentFile=-/etc/sysconfig/slurmd
ExecStart=@sbindir@/slurmd $SLURMD_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
-PIDFile=/var/run/slurmd.pid
+PIDFile=/var/run/slurm/slurmd.pid
KillMode=process
LimitNOFILE=51200
LimitMEMLOCK=infinity
Index: slurm-slurm-16-05-8-1/etc/slurmdbd.service.in
===================================================================
--- slurm-slurm-16-05-8-1.orig/etc/slurmdbd.service.in
+++ slurm-slurm-16-05-8-1/etc/slurmdbd.service.in
@@ -8,7 +8,7 @@ Type=forking
EnvironmentFile=-/etc/sysconfig/slurmdbd
ExecStart=@sbindir@/slurmdbd $SLURMDBD_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
-PIDFile=/var/run/slurmdbd.pid
+PIDFile=/var/run/slurm/slurmdbd.pid
[Install]
WantedBy=multi-user.target


@@ -1,11 +0,0 @@
[Unit]
Description=SLURMDBD is a database server interface for SLURM
After=network.target
[Service]
Type=forking
EnvironmentFile=-/etc/sysconfig/slurm
ExecStart=/usr/sbin/slurmdbd
[Install]
WantedBy=multi-user.target