#
# spec file for package slurm
#
# Copyright (c) 2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.

# Please submit bugfixes or comments via http://bugs.opensuse.org/
#

%define libslurm libslurm27
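# The runtime library subpackage is named after the shared library
# soname (libslurm.so.27 for this version).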

Name: slurm
Version: 14.03.6
Release: 0
Summary: Simple Linux Utility for Resource Management
License: GPL-3.0
Group: Productivity/Clustering/Computing
Url: https://computing.llnl.gov/linux/slurm/
Source: slurm-%{version}.tar.bz2
Patch0: slurm-2.4.4-rpath.patch
Patch1: slurm-2.4.4-init.patch
PreReq: %insserv_prereq %fillup_prereq
Requires: slurm-plugins = %{version}
BuildRequires: fdupes
BuildRequires: gcc-c++
BuildRequires: gtk2-devel
BuildRequires: libbitmask-devel
BuildRequires: libcpuset-devel
BuildRequires: libhwloc-devel
%ifarch x86_64
BuildRequires: libnuma-devel
%endif
BuildRequires: mysql-devel >= 5.0.0
BuildRequires: ncurses-devel
BuildRequires: openssl-devel >= 0.9.6
BuildRequires: pkgconfig
BuildRequires: postgresql-devel >= 8.0.0
BuildRequires: python
BuildRequires: readline-devel
BuildRoot: %{_tmppath}/%{name}-%{version}-build

%description
SLURM is an open source, fault-tolerant, and highly
scalable cluster management and job scheduling system for Linux clusters
containing up to 65,536 nodes. Components include machine status,
partition management, job management, scheduling and accounting modules.

%package -n perl-slurm
Summary: Perl API to SLURM
Group: Development/Languages/Perl
Requires: slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif

%description -n perl-slurm
This package contains the Perl API, which provides a convenient
interface to SLURM from Perl.

%package -n %{libslurm}
Summary: Libraries for slurm
Group: System/Libraries

%description -n %{libslurm}
This package contains the library needed to run programs dynamically linked
with slurm.

%package devel
Summary: Development package for SLURM
Group: Development/Libraries/C and C++
Requires: %{libslurm} = %{version}
Requires: slurm = %{version}

%description devel
Development package for SLURM. This package includes the header files
and libraries for the SLURM API.

%package auth-none
Summary: SLURM auth NULL implementation (no authentication)
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}

%description auth-none
This package contains the SLURM NULL authentication module.

%package munge
Summary: SLURM authentication and crypto implementation using Munge
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
Requires: munge
BuildRequires: munge-devel
Obsoletes: slurm-auth-munge < %{version}
Provides: slurm-auth-munge = %{version}

%description munge
This package contains the SLURM authentication module for Chris Dunlap's Munge.

%package sview
Summary: SLURM graphical interface
Group: Productivity/Clustering/Computing

%description sview
sview is a graphical user interface to get and update state information for
jobs, partitions, and nodes managed by SLURM.

%package sched-wiki
Summary: SLURM plugin for the Maui or Moab scheduler wiki interface
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}

%description sched-wiki
This package contains the SLURM plugin for the Maui or Moab scheduler wiki interface.

%package slurmdbd
Summary: SLURM database daemon
Group: Productivity/Clustering/Computing
Requires: slurm-plugins = %{version}
PreReq: %insserv_prereq %fillup_prereq

%description slurmdbd
The SLURM database daemon provides accounting of jobs in a database.

%package plugins
Summary: SLURM plugins (loadable shared objects)
Group: Productivity/Clustering/Computing

%description plugins
This package contains the SLURM plugins (loadable shared objects).

%package torque
Summary: Torque/PBS wrappers for transition from Torque/PBS to SLURM
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
Provides: torque-client

%description torque
Torque wrapper scripts used to help migrate from Torque/PBS to SLURM.

%package slurmdb-direct
Summary: Wrappers to write directly to the slurmdb
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif

%description slurmdb-direct
This package contains the wrappers to write directly to the slurmdb.

%package sjstat
Summary: Perl tool to print SLURM job state information
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}

%description sjstat
This package contains the Perl tool to print SLURM job state information.

%package pam_slurm
Summary: PAM module for restricting access to compute nodes via SLURM
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
BuildRequires: pam-devel

%description pam_slurm
This module restricts access to compute nodes in a cluster where the Simple
Linux Utility for Resource Management (SLURM) is in use. Access is granted
to root, any user with a SLURM-launched job currently running on the node,
or any user who has allocated resources on the node according to the SLURM
database.

%prep
%setup -q
%patch0 -p1
%patch1 -p1
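
# Normalize permissions on the shipped documentation images.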
chmod 0644 doc/html/*.{gif,jpg}

%build
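# Build shared libraries only and without rpath (see also Patch0); the
# SLURM configuration directory is set to /etc/slurm.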
%configure --enable-shared \
           --disable-static \
           --without-rpath \
           --sysconfdir=%{_sysconfdir}/%{name}
make %{?_smp_mflags}

%install
%makeinstall
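# install-contrib installs the contributed extras (Perl API bindings,
# Torque wrapper scripts, slurmdb-direct wrappers) packaged in the
# perl-slurm, torque, and slurmdb-direct subpackages.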
make install-contrib DESTDIR=$RPM_BUILD_ROOT

install -D -m755 etc/init.d.slurm $RPM_BUILD_ROOT%{_initrddir}/slurm
install -D -m755 etc/init.d.slurmdbd $RPM_BUILD_ROOT%{_initrddir}/slurmdbd
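# Convenience symlinks, following the SUSE convention of providing an
# rcFOO command for each init script.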
ln -sf %{_initrddir}/slurm %{buildroot}%{_sbindir}/rcslurm
ln -sf %{_initrddir}/slurmdbd %{buildroot}%{_sbindir}/rcslurmdbd

install -D -m644 etc/slurm.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.conf
install -D -m644 etc/slurmdbd.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurmdbd.conf
install -D -m644 etc/cgroup.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup.conf
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_common
install -D -m644 etc/cgroup_allowed_devices_file.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
install -D -m755 etc/slurm.epilog.clean $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.epilog.clean
install -D -m755 contribs/sjstat $RPM_BUILD_ROOT%{_bindir}/sjstat

# Delete unpackaged files:
rm -rf $RPM_BUILD_ROOT/%{_libdir}/slurm/*.{a,la} \
       $RPM_BUILD_ROOT/%{_libdir}/*.la \
       $RPM_BUILD_ROOT/%_lib/security/*.la \
       $RPM_BUILD_ROOT/%{_datadir}/doc/slurm-%{version}/ \
       $RPM_BUILD_ROOT/%{_mandir}/man5/bluegene*
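
# The srun_cr wrapper and cr_* helpers (BLCR checkpoint/restart support)
# are not shipped.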
rm -f $RPM_BUILD_ROOT%{_mandir}/man1/srun_cr* \
      $RPM_BUILD_ROOT%{_bindir}/srun_cr \
      $RPM_BUILD_ROOT%{_libexecdir}/slurm/cr_*

mkdir -p $RPM_BUILD_ROOT%{perl_vendorarch}
mv $RPM_BUILD_ROOT%{perl_sitearch}/* $RPM_BUILD_ROOT%{perl_vendorarch}
%perl_process_packlist

rm doc/html/shtml2html.py doc/html/Makefile*
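
# Let fdupes replace duplicate files with symlinks (-s).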
%fdupes -s $RPM_BUILD_ROOT
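
# SUSE init script handling: register the init scripts and install
# sysconfig templates on package install, stop services on removal,
# restart them on update.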
%post
%fillup_and_insserv slurm

%preun
%stop_on_removal slurm

%postun
%restart_on_update slurm
%insserv_cleanup

%post slurmdbd
%fillup_and_insserv slurmdbd

%preun slurmdbd
%stop_on_removal slurmdbd

%postun slurmdbd
%restart_on_update slurmdbd
%insserv_cleanup

%post -n %{libslurm} -p /sbin/ldconfig

%postun -n %{libslurm} -p /sbin/ldconfig

%files
%defattr(-,root,root)
%doc AUTHORS NEWS RELEASE_NOTES DISCLAIMER COPYING
%doc doc/html
%{_bindir}/generate_pbs_nodefile
%{_bindir}/sacct
%{_bindir}/sacctmgr
%{_bindir}/salloc
%{_bindir}/sattach
%{_bindir}/sbatch
%{_bindir}/sbcast
%{_bindir}/scancel
%{_bindir}/scontrol
%{_bindir}/sdiag
%{_bindir}/sgather
%{_bindir}/sinfo
%{_bindir}/sjobexitmod
%{_bindir}/sprio
%{_bindir}/squeue
%{_bindir}/sreport
%{_bindir}/srun
%{_bindir}/smap
%{_bindir}/sshare
%{_bindir}/sstat
%{_bindir}/strigger
%{_sbindir}/slurmctld
%{_sbindir}/slurmd
%{_sbindir}/slurmstepd
%{_mandir}/man1/sacct.1*
%{_mandir}/man1/sacctmgr.1*
%{_mandir}/man1/salloc.1*
%{_mandir}/man1/sattach.1*
%{_mandir}/man1/sbatch.1*
%{_mandir}/man1/sbcast.1*
%{_mandir}/man1/scancel.1*
%{_mandir}/man1/scontrol.1*
%{_mandir}/man1/sdiag.1.*
%{_mandir}/man1/sgather.1.*
%{_mandir}/man1/sinfo.1*
%{_mandir}/man1/slurm.1*
%{_mandir}/man1/smap.1*
%{_mandir}/man1/sprio.1*
%{_mandir}/man1/squeue.1*
%{_mandir}/man1/sreport.1*
%{_mandir}/man1/srun.1*
%{_mandir}/man1/sshare.1*
%{_mandir}/man1/sstat.1*
%{_mandir}/man1/strigger.1*
%{_mandir}/man1/sh5util.1*
%{_mandir}/man5/acct_gather.conf.*
%{_mandir}/man5/ext_sensors.conf.*
%{_mandir}/man5/slurm.*
%{_mandir}/man5/cgroup.*
%{_mandir}/man5/cray.*
%{_mandir}/man5/gres.*
%{_mandir}/man5/nonstop.conf.5.*
%{_mandir}/man5/topology.*
%{_mandir}/man8/slurmctld.*
%{_mandir}/man8/slurmd.*
%{_mandir}/man8/slurmstepd*
%{_mandir}/man8/spank*
%dir %{_libdir}/slurm/src
%dir %{_sysconfdir}/%{name}
%config(noreplace) %{_sysconfdir}/%{name}/slurm.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
%config(noreplace) %{_sysconfdir}/%{name}/slurm.epilog.clean
%dir %{_sysconfdir}/%{name}/cgroup
%config(noreplace) %{_sysconfdir}/%{name}/cgroup/release_common
%{_initrddir}/slurm
%{_sbindir}/rcslurm

%files -n %{libslurm}
%defattr(-,root,root)
%{_libdir}/*.so.*

%files devel
%defattr(-,root,root)
%{_prefix}/include/slurm
%{_libdir}/libpmi.so
%{_libdir}/libpmi2.so
%{_libdir}/libslurm.so
%{_libdir}/libslurmdb.so
%{_libdir}/slurm/src/*
%{_mandir}/man3/slurm_*

%files sview
%defattr(-,root,root)
%{_bindir}/sview
%{_mandir}/man1/sview.1*

%files sched-wiki
%defattr(-,root,root)
%{_libdir}/slurm/sched_wiki*.so
%{_mandir}/man5/wiki.*

%files auth-none
%defattr(-,root,root)
%{_libdir}/slurm/auth_none.so

%files munge
%defattr(-,root,root)
%{_libdir}/slurm/auth_munge.so
%{_libdir}/slurm/crypto_munge.so

%files -n perl-slurm
%defattr(-,root,root)
%{perl_vendorarch}/Slurm.pm
%{perl_vendorarch}/Slurm
%{perl_vendorarch}/auto/Slurm
%{perl_vendorarch}/Slurmdb.pm
%{perl_vendorarch}/auto/Slurmdb
%{_mandir}/man3/Slurm*.3pm.*
%if 0%{?suse_version} <= 1110
/var/adm/perl-modules/slurm
%endif

%files slurmdbd
%defattr(-,root,root)
%{_sbindir}/slurmdbd
%{_mandir}/man5/slurmdbd.*
%{_mandir}/man8/slurmdbd.*
%config(noreplace) %{_sysconfdir}/%{name}/slurmdbd.conf
%{_initrddir}/slurmdbd
%{_sbindir}/rcslurmdbd

%files plugins
%defattr(-,root,root)
%dir %{_libdir}/slurm
%{_libdir}/slurm/accounting_storage_filetxt.so
%{_libdir}/slurm/accounting_storage_none.so
%{_libdir}/slurm/accounting_storage_slurmdbd.so
%{_libdir}/slurm/acct_gather_energy_none.so
%{_libdir}/slurm/acct_gather_energy_rapl.so
%{_libdir}/slurm/acct_gather_filesystem_lustre.so
%{_libdir}/slurm/acct_gather_filesystem_none.so
%{_libdir}/slurm/acct_gather_infiniband_none.so
%{_libdir}/slurm/acct_gather_profile_none.so
%{_libdir}/slurm/checkpoint_none.so
%{_libdir}/slurm/checkpoint_ompi.so
%{_libdir}/slurm/core_spec_cray.so
%{_libdir}/slurm/core_spec_none.so
%{_libdir}/slurm/ext_sensors_none.so
%{_libdir}/slurm/jobacct_gather_aix.so
%{_libdir}/slurm/jobacct_gather_linux.so
%{_libdir}/slurm/jobacct_gather_none.so
%{_libdir}/slurm/job_container_cncu.so
%{_libdir}/slurm/job_container_none.so
%{_libdir}/slurm/jobcomp_none.so
%{_libdir}/slurm/jobcomp_filetxt.so
%{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_submit_cray.so
%{_libdir}/slurm/job_submit_pbs.so
%{_libdir}/slurm/job_submit_require_timelimit.so
%{_libdir}/slurm/job_submit_throttle.so
%{_libdir}/slurm/mpi_lam.so
%{_libdir}/slurm/mpi_mpich1_p4.so
%{_libdir}/slurm/mpi_mpich1_shmem.so
%{_libdir}/slurm/mpi_mpichgm.so
%{_libdir}/slurm/mpi_mpichmx.so
%{_libdir}/slurm/mpi_mvapich.so
%{_libdir}/slurm/mpi_none.so
%{_libdir}/slurm/mpi_openmpi.so
%{_libdir}/slurm/preempt_none.so
%{_libdir}/slurm/preempt_partition_prio.so
%{_libdir}/slurm/preempt_qos.so
%{_libdir}/slurm/priority_basic.so
%{_libdir}/slurm/proctrack_pgid.so
%{_libdir}/slurm/proctrack_linuxproc.so
%{_libdir}/slurm/sched_backfill.so
%{_libdir}/slurm/sched_builtin.so
%{_libdir}/slurm/sched_hold.so
%{_libdir}/slurm/select_alps.so
%{_libdir}/slurm/select_cons_res.so
%{_libdir}/slurm/select_linear.so
%{_libdir}/slurm/slurmctld_nonstop.so
%{_libdir}/slurm/switch_cray.so
%{_libdir}/slurm/switch_generic.so
%{_libdir}/slurm/switch_none.so
%{_libdir}/slurm/spank_pbs.so
%{_libdir}/slurm/task_cray.so
%{_libdir}/slurm/task_none.so
%{_libdir}/slurm/topology_3d_torus.so
%{_libdir}/slurm/topology_none.so
%{_libdir}/slurm/topology_tree.so
%{_libdir}/slurm/accounting_storage_mysql.so
%{_libdir}/slurm/crypto_openssl.so
%{_libdir}/slurm/jobcomp_mysql.so
%{_libdir}/slurm/task_affinity.so
%{_libdir}/slurm/gres_gpu.so
%{_libdir}/slurm/gres_mic.so
%{_libdir}/slurm/gres_nic.so
%{_libdir}/slurm/job_submit_all_partitions.so
%{_libdir}/slurm/job_submit_cnode.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
%{_libdir}/slurm/job_submit_partition.so
%{_libdir}/slurm/jobacct_gather_cgroup.so
%{_libdir}/slurm/launch_slurm.so
%{_libdir}/slurm/mpi_pmi2.so
%{_libdir}/slurm/proctrack_cgroup.so
%{_libdir}/slurm/priority_multifactor.so
%{_libdir}/slurm/select_bluegene.so
%{_libdir}/slurm/select_cray.so
%{_libdir}/slurm/select_serial.so
%{_libdir}/slurm/task_cgroup.so
%{_libdir}/slurm/topology_node_rank.so

%files torque
%defattr(-,root,root)
%{_bindir}/pbsnodes
%{_bindir}/qalter
%{_bindir}/qdel
%{_bindir}/qhold
%{_bindir}/qrls
%{_bindir}/qrerun
%{_bindir}/qstat
%{_bindir}/qsub
%{_bindir}/mpiexec

%files slurmdb-direct
%defattr(-,root,root)
%config(noreplace) %{perl_vendorarch}/config.slurmdb.pl
%{_sbindir}/moab_2_slurmdb

%files sjstat
%defattr(-,root,root)
%{_bindir}/sjstat

%files pam_slurm
%defattr(-,root,root)
/%_lib/security/pam_slurm.so

%changelog