#
# spec file for package slurm
#
# Copyright (c) 2015 SUSE LINUX Products GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.
# Please submit bugfixes or comments via http://bugs.opensuse.org/
#
%define trans() ( echo %{1} | sed -e "s#-#\\.#g" )
%define trunc() ( echo %{1} | sed -e "s#\\([^.]\\+\\.[^.]\\+\\.[^.]\\+\\).*#\\1#" )
%define vers_f() %(%trans)
%define vers_t() %(%trunc)
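# The helper macros above rewrite the dash-separated upstream tag: trans/vers_f
# turn dashes into dots, trunc/vers_t keep only the first three dot-separated
# components. A worked example, using the ver_exp value defined below:
#   %{vers_f 15-08-7-1}  ->  15.08.7.1
#   %{vers_t 15.08.7.1}  ->  15.08.7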
%if 0%{?suse_version} >= 1220
%define with_systemd 1
%else
%define with_systemd 0
%endif
%if 0%{?suse_version} >= 1310
%define have_netloc 1
%endif
%define libslurm libslurm29
%define ver_exp 15-08-7-1
Name: slurm
Version: %{vers_f %ver_exp}
Release: 0
Summary: Simple Linux Utility for Resource Management
License: GPL-3.0
Group: Productivity/Clustering/Computing
Url: https://computing.llnl.gov/linux/slurm/
Source: https://github.com/SchedMD/slurm/archive/%{name}-%{ver_exp}.tar.gz
Source1: slurm.service
Source2: slurmdbd.service
Patch0: slurm-2.4.4-rpath.patch
Patch1: slurm-2.4.4-init.patch
Patch2: slurmd-Fix-for-newer-API-versions.patch
Requires: slurm-plugins = %{version}
BuildRequires: fdupes
BuildRequires: gcc-c++
BuildRequires: gtk2-devel
BuildRequires: libbitmask-devel
BuildRequires: libcpuset-devel
BuildRequires: libhwloc-devel
%ifarch x86_64
BuildRequires: libnuma-devel
%endif
BuildRequires: mysql-devel >= 5.0.0
BuildRequires: ncurses-devel
BuildRequires: openssl-devel >= 0.9.6
BuildRequires: pkgconfig
BuildRequires: postgresql-devel >= 8.0.0
BuildRequires: python
BuildRequires: readline-devel
%if %{with_systemd}
%{?systemd_requires}
BuildRequires: systemd
%else
PreReq: %insserv_prereq %fillup_prereq
%endif
BuildRoot: %{_tmppath}/%{name}-%{version}-build
%description
SLURM is an open source, fault-tolerant, and highly
scalable cluster management and job scheduling system for Linux clusters
containing up to 65,536 nodes. Components include machine status,
partition management, job management, scheduling and accounting modules.
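# Illustrative minimal slurm.conf for a single test node; the hostname
# "node01" and all values below are placeholders, not shipped defaults:
#   ControlMachine=node01
#   NodeName=node01 CPUs=4 State=UNKNOWN
#   PartitionName=normal Nodes=node01 Default=YES State=UP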
%package doc
Summary: Documentation for SLURM
Group: Documentation/Clustering/Computing
%description doc
Documentation (HTML) for the SLURM cluster management software.
%package -n perl-slurm
Summary: Perl API to SLURM
Group: Development/Languages/Perl
Requires: slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description -n perl-slurm
Perl API package for SLURM. This package provides a convenient interface
to SLURM from Perl.
%package -n %{libslurm}
Summary: Libraries for slurm
Group: System/Libraries
%description -n %{libslurm}
This package contains the library needed to run programs dynamically linked
with slurm.
%package devel
Summary: Development package for SLURM
Group: Development/Libraries/C and C++
Requires: %{libslurm} = %{version}
Requires: slurm = %{version}
%description devel
Development package for SLURM. This package includes the header files
and libraries for the SLURM API.
%package auth-none
Summary: SLURM auth NULL implementation (no authentication)
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%description auth-none
This package contains the SLURM NULL authentication module.
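# Selecting this module is a one-line slurm.conf setting (sketch; suitable
# only for trusted test environments, since requests are not authenticated):
#   AuthType=auth/none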
%package munge
Summary: SLURM authentication and crypto implementation using Munge
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
Requires: munge
BuildRequires: munge-devel
Obsoletes: slurm-auth-munge < %{version}
Provides: slurm-auth-munge = %{version}
%description munge
This package contains the SLURM authentication module for Chris Dunlap's Munge.
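# Sketch of the slurm.conf settings that select these plugins, assuming a
# munged daemon runs on every node:
#   AuthType=auth/munge
#   CryptoType=crypto/munge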
%package sview
Summary: SLURM graphical interface
Group: Productivity/Clustering/Computing
%description sview
sview is a graphical user interface to get and update state information for
jobs, partitions, and nodes managed by SLURM.
%package sched-wiki
Summary: SLURM plugin for the Maui or Moab scheduler wiki interface
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%description sched-wiki
This package contains the SLURM plugin for the Maui or Moab scheduler wiki interface.
%package slurmdbd
Summary: SLURM database daemon
Group: Productivity/Clustering/Computing
Requires: slurm-plugins = %{version}
%if %{with_systemd}
%{?systemd_requires}
%else
PreReq: %insserv_prereq %fillup_prereq
%endif
%description slurmdbd
The SLURM database daemon provides accounting of jobs in a database.
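# Illustrative slurmdbd.conf fragment for MySQL-backed accounting; host,
# user, and password are placeholders:
#   DbdHost=localhost
#   StorageType=accounting_storage/mysql
#   StorageUser=slurm
#   StoragePass=secret
# slurm.conf then points accounting at the daemon:
#   AccountingStorageType=accounting_storage/slurmdbd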
%package plugins
Summary: SLURM plugins (loadable shared objects)
Group: Productivity/Clustering/Computing
%description plugins
This package contains the SLURM plugins (loadable shared objects).
%package torque
Summary: Torque/PBS wrappers for transition from Torque/PBS to SLURM
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
Provides: torque-client
%description torque
Torque wrapper scripts that help migrate from Torque/PBS to SLURM.
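# Example: existing Torque job scripts can typically be reused unchanged
# (illustrative; "job.sh" is a placeholder):
#   qsub job.sh    # wrapper translates the submission to sbatch
#   qstat          # wrapper reports queue state via SLURM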
%package slurmdb-direct
Summary: Wrappers to write directly to the slurmdb
Group: Productivity/Clustering/Computing
Requires: perl-slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description slurmdb-direct
This package contains the wrappers to write directly to the slurmdb.
%package sjstat
Summary: Perl tool to print SLURM job state information
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
%if 0%{?suse_version} < 1140
Requires: perl = %{perl_version}
%else
%{perl_requires}
%endif
%description sjstat
This package contains the Perl tool to print SLURM job state information.
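# Example (sketch): invoked without arguments, the tool prints a summary of
# the node pools and running jobs:
#   sjstat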
%package pam_slurm
Summary: PAM module for restricting access to compute nodes via SLURM
Group: Productivity/Clustering/Computing
Requires: slurm = %{version}
BuildRequires: pam-devel
%description pam_slurm
This module restricts access to compute nodes in a cluster where the Simple
Linux Utility for Resource Management (SLURM) is in use. Access is granted
to root, any user with a SLURM-launched job currently running on the node,
or any user who has allocated resources on the node according to the SLURM
database.
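# Typical (illustrative) deployment: restrict SSH logins on compute nodes by
# adding a line like the following to /etc/pam.d/sshd:
#   account  required  pam_slurm.so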
%prep
%setup -q -n %{name}-%{name}-%{ver_exp}
%patch0 -p1
%patch1 -p1
%patch2 -p1
chmod 0644 doc/html/*.{gif,jpg}
%build
%configure --enable-shared \
--disable-static \
--without-rpath \
--sysconfdir=%{_sysconfdir}/%{name}
make %{?_smp_mflags}
%install
%makeinstall
make install-contrib DESTDIR=$RPM_BUILD_ROOT
%if %{with_systemd}
mkdir -p %{buildroot}%{_unitdir}
install -p -m644 %{S:1} %{S:2} %{buildroot}%{_unitdir}
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurm
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmdbd
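# SUSE convention: rcslurm/rcslurmdbd act as service shortcuts; under systemd
# they resolve through /usr/sbin/service to the corresponding unit files.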
%else
install -D -m755 etc/init.d.slurm $RPM_BUILD_ROOT%{_initrddir}/slurm
install -D -m755 etc/init.d.slurmdbd $RPM_BUILD_ROOT%{_initrddir}/slurmdbd
ln -sf %{_initrddir}/slurm %{buildroot}%{_sbindir}/rcslurm
ln -sf %{_initrddir}/slurmdbd %{buildroot}%{_sbindir}/rcslurmdbd
%endif
install -D -m644 etc/slurm.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.conf
install -D -m644 etc/slurmdbd.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurmdbd.conf
install -D -m644 etc/cgroup.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup.conf
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_common
install -D -m644 etc/cgroup_allowed_devices_file.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
install -D -m755 etc/slurm.epilog.clean $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.epilog.clean
install -D -m755 contribs/sjstat $RPM_BUILD_ROOT%{_bindir}/sjstat
# Delete unpackaged files:
rm -rf $RPM_BUILD_ROOT/%{_libdir}/slurm/*.{a,la} \
$RPM_BUILD_ROOT/%{_libdir}/*.la \
$RPM_BUILD_ROOT/%_lib/security/*.la \
$RPM_BUILD_ROOT/%{_mandir}/man5/bluegene*
rm -f $RPM_BUILD_ROOT%{_mandir}/man1/srun_cr* \
$RPM_BUILD_ROOT%{_bindir}/srun_cr \
$RPM_BUILD_ROOT%{_libexecdir}/slurm/cr_*
mkdir -p $RPM_BUILD_ROOT%{perl_vendorarch}
mv $RPM_BUILD_ROOT%{perl_sitearch}/* $RPM_BUILD_ROOT%{perl_vendorarch}
%perl_process_packlist
rm doc/html/shtml2html.py doc/html/Makefile*
# rpmlint reports wrong end of line for those files
sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qrerun
sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%fdupes -s $RPM_BUILD_ROOT
%if %{with_systemd}
%pre
%service_add_pre slurm.service
%endif
%post
%if %{with_systemd}
%service_add_post slurm.service
%else
%fillup_and_insserv slurm
%endif
%preun
%if %{with_systemd}
%service_del_preun slurm.service
%else
%stop_on_removal slurm
%endif
%postun
%if %{with_systemd}
%service_del_postun slurm.service
%else
%restart_on_update slurm
%insserv_cleanup
%endif
%if %{with_systemd}
%pre slurmdbd
%service_add_pre slurmdbd.service
%endif
%post slurmdbd
%if %{with_systemd}
%service_add_post slurmdbd.service
%else
%fillup_and_insserv slurmdbd
%endif
%preun slurmdbd
%if %{with_systemd}
%service_del_preun slurmdbd.service
%else
%stop_on_removal slurmdbd
%endif
%postun slurmdbd
%if %{with_systemd}
%service_del_postun slurmdbd.service
%else
%restart_on_update slurmdbd
%insserv_cleanup
%endif
%post -n %{libslurm} -p /sbin/ldconfig
%postun -n %{libslurm} -p /sbin/ldconfig
%files
%defattr(-,root,root)
%doc AUTHORS NEWS RELEASE_NOTES DISCLAIMER COPYING
%doc doc/html
%{_bindir}/generate_pbs_nodefile
%{_bindir}/sacct
%{_bindir}/sacctmgr
%{_bindir}/salloc
%{_bindir}/sattach
%{_bindir}/sbatch
%{_bindir}/sbcast
%{_bindir}/scancel
%{_bindir}/scontrol
%{_bindir}/sdiag
%{_bindir}/sgather
%{_bindir}/sinfo
%{_bindir}/sjobexitmod
%{_bindir}/sprio
%{_bindir}/squeue
%{_bindir}/sreport
%{_bindir}/srun
%{_bindir}/smap
%{_bindir}/sshare
%{_bindir}/sstat
%{_bindir}/strigger
%{?have_netloc: %{_bindir}/netloc_to_topology}
%{_sbindir}/slurmctld
%{_sbindir}/slurmd
%{_sbindir}/slurmstepd
%{_mandir}/man1/sacct.1*
%{_mandir}/man1/sacctmgr.1*
%{_mandir}/man1/salloc.1*
%{_mandir}/man1/sattach.1*
%{_mandir}/man1/sbatch.1*
%{_mandir}/man1/sbcast.1*
%{_mandir}/man1/scancel.1*
%{_mandir}/man1/scontrol.1*
%{_mandir}/man1/sdiag.1.*
%{_mandir}/man1/sgather.1.*
%{_mandir}/man1/sinfo.1*
%{_mandir}/man1/slurm.1*
%{_mandir}/man1/smap.1*
%{_mandir}/man1/sprio.1*
%{_mandir}/man1/squeue.1*
%{_mandir}/man1/sreport.1*
%{_mandir}/man1/srun.1*
%{_mandir}/man1/sshare.1*
%{_mandir}/man1/sstat.1*
%{_mandir}/man1/strigger.1*
%{_mandir}/man1/sh5util.1*
%{_mandir}/man5/acct_gather.conf.*
%{_mandir}/man5/burst_buffer.conf.*
%{_mandir}/man5/ext_sensors.conf.*
%{_mandir}/man5/slurm.*
%{_mandir}/man5/cgroup.*
%{_mandir}/man5/cray.*
%{_mandir}/man5/gres.*
%{_mandir}/man5/nonstop.conf.5.*
%{_mandir}/man5/topology.*
%{_mandir}/man8/slurmctld.*
%{_mandir}/man8/slurmd.*
%{_mandir}/man8/slurmstepd*
%{_mandir}/man8/spank*
%dir %{_libdir}/slurm/src
%dir %{_sysconfdir}/%{name}
%config(noreplace) %{_sysconfdir}/%{name}/slurm.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
%config(noreplace) %{_sysconfdir}/%{name}/slurm.epilog.clean
%dir %{_sysconfdir}/%{name}/cgroup
%config(noreplace) %{_sysconfdir}/%{name}/cgroup/release_common
%if %{with_systemd}
%{_unitdir}/slurm.service
%else
%{_initrddir}/slurm
%endif
%{_sbindir}/rcslurm
%files doc
%defattr(-,root,root)
%dir %{_datadir}/doc/%{name}-%{vers_t %{version}}
%{_datadir}/doc/%{name}-%{vers_t %{version}}/*
%files -n %{libslurm}
%defattr(-,root,root)
%{_libdir}/*.so.*
%files devel
%defattr(-,root,root)
%{_prefix}/include/slurm
%{_libdir}/libpmi.so
%{_libdir}/libpmi2.so
%{_libdir}/libslurm.so
%{_libdir}/libslurmdb.so
%{_libdir}/slurm/src/*
%{_mandir}/man3/slurm_*
%files sview
%defattr(-,root,root)
%{_bindir}/sview
%{_mandir}/man1/sview.1*
%files sched-wiki
%defattr(-,root,root)
%{_libdir}/slurm/sched_wiki*.so
%{_mandir}/man5/wiki.*
%files auth-none
%defattr(-,root,root)
%{_libdir}/slurm/auth_none.so
%files munge
%defattr(-,root,root)
%{_libdir}/slurm/auth_munge.so
%{_libdir}/slurm/crypto_munge.so
%files -n perl-slurm
%defattr(-,root,root)
%{perl_vendorarch}/Slurm.pm
%{perl_vendorarch}/Slurm
%{perl_vendorarch}/auto/Slurm
%{perl_vendorarch}/Slurmdb.pm
%{perl_vendorarch}/auto/Slurmdb
%{_mandir}/man3/Slurm*.3pm.*
%if 0%{?suse_version} <= 1110
/var/adm/perl-modules/slurm
%endif
%files slurmdbd
%defattr(-,root,root)
%{_sbindir}/slurmdbd
%{_mandir}/man5/slurmdbd.*
%{_mandir}/man8/slurmdbd.*
%config(noreplace) %{_sysconfdir}/%{name}/slurmdbd.conf
%if %{with_systemd}
%config %{_unitdir}/slurmdbd.service
%else
%{_initrddir}/slurmdbd
%endif
%{_sbindir}/rcslurmdbd
%files plugins
%defattr(-,root,root)
%dir %{_libdir}/slurm
%{_libdir}/slurm/accounting_storage_filetxt.so
%{_libdir}/slurm/accounting_storage_none.so
%{_libdir}/slurm/accounting_storage_slurmdbd.so
%{_libdir}/slurm/acct_gather_energy_none.so
%{_libdir}/slurm/acct_gather_energy_rapl.so
%{_libdir}/slurm/acct_gather_energy_cray.so
%{_libdir}/slurm/acct_gather_energy_ibmaem.so
%{_libdir}/slurm/acct_gather_filesystem_lustre.so
%{_libdir}/slurm/acct_gather_filesystem_none.so
%{_libdir}/slurm/acct_gather_infiniband_none.so
%{_libdir}/slurm/acct_gather_profile_none.so
%{_libdir}/slurm/burst_buffer_generic.so
%{_libdir}/slurm/checkpoint_none.so
%{_libdir}/slurm/checkpoint_ompi.so
%{_libdir}/slurm/core_spec_cray.so
%{_libdir}/slurm/core_spec_none.so
%{_libdir}/slurm/ext_sensors_none.so
%{_libdir}/slurm/jobacct_gather_aix.so
%{_libdir}/slurm/jobacct_gather_linux.so
%{_libdir}/slurm/jobacct_gather_none.so
%{_libdir}/slurm/job_container_cncu.so
%{_libdir}/slurm/job_container_none.so
%{_libdir}/slurm/jobcomp_none.so
%{_libdir}/slurm/jobcomp_filetxt.so
%{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_submit_cray.so
%{_libdir}/slurm/job_submit_pbs.so
%{_libdir}/slurm/job_submit_require_timelimit.so
%{_libdir}/slurm/job_submit_throttle.so
%{_libdir}/slurm/layouts_power_cpufreq.so
%{_libdir}/slurm/layouts_power_default.so
%{_libdir}/slurm/layouts_unit_default.so
%{_libdir}/slurm/mpi_lam.so
%{_libdir}/slurm/mpi_mpich1_p4.so
%{_libdir}/slurm/mpi_mpich1_shmem.so
%{_libdir}/slurm/mpi_mpichgm.so
%{_libdir}/slurm/mpi_mpichmx.so
%{_libdir}/slurm/mpi_mvapich.so
%{_libdir}/slurm/mpi_none.so
%{_libdir}/slurm/mpi_openmpi.so
%{_libdir}/slurm/power_none.so
%{_libdir}/slurm/preempt_job_prio.so
%{_libdir}/slurm/preempt_none.so
%{_libdir}/slurm/preempt_partition_prio.so
%{_libdir}/slurm/preempt_qos.so
%{_libdir}/slurm/priority_basic.so
%{_libdir}/slurm/proctrack_pgid.so
%{_libdir}/slurm/proctrack_linuxproc.so
%{_libdir}/slurm/route_default.so
%{_libdir}/slurm/route_topology.so
%{_libdir}/slurm/sched_backfill.so
%{_libdir}/slurm/sched_builtin.so
%{_libdir}/slurm/sched_hold.so
%{_libdir}/slurm/select_alps.so
%{_libdir}/slurm/select_cons_res.so
%{_libdir}/slurm/select_linear.so
%{_libdir}/slurm/slurmctld_nonstop.so
%{_libdir}/slurm/switch_cray.so
%{_libdir}/slurm/switch_generic.so
%{_libdir}/slurm/switch_none.so
%{_libdir}/slurm/spank_pbs.so
%{_libdir}/slurm/task_cray.so
%{_libdir}/slurm/task_none.so
%{_libdir}/slurm/topology_3d_torus.so
%{_libdir}/slurm/topology_hypercube.so
%{_libdir}/slurm/topology_none.so
%{_libdir}/slurm/topology_tree.so
%{_libdir}/slurm/accounting_storage_mysql.so
%{_libdir}/slurm/crypto_openssl.so
%{_libdir}/slurm/jobcomp_mysql.so
%{_libdir}/slurm/task_affinity.so
%{_libdir}/slurm/gres_gpu.so
%{_libdir}/slurm/gres_mic.so
%{_libdir}/slurm/gres_nic.so
%{_libdir}/slurm/job_submit_all_partitions.so
%{_libdir}/slurm/job_submit_cnode.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
%{_libdir}/slurm/job_submit_partition.so
%{_libdir}/slurm/jobacct_gather_cgroup.so
%{_libdir}/slurm/launch_slurm.so
%{_libdir}/slurm/mpi_pmi2.so
%{_libdir}/slurm/proctrack_cgroup.so
%{_libdir}/slurm/priority_multifactor.so
%{_libdir}/slurm/select_bluegene.so
%{_libdir}/slurm/select_cray.so
%{_libdir}/slurm/select_serial.so
%{_libdir}/slurm/task_cgroup.so
%{_libdir}/slurm/topology_node_rank.so
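# Torque/PBS compatibility layer: wrapper commands that translate PBS-style
# invocations into their Slurm equivalents (e.g. "qsub job.sh" is expected to
# submit via sbatch under the hood).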
%files torque
%defattr(-,root,root)
%{_bindir}/pbsnodes
%{_bindir}/qalter
%{_bindir}/qdel
%{_bindir}/qhold
%{_bindir}/qrls
%{_bindir}/qrerun
%{_bindir}/qstat
%{_bindir}/qsub
%{_bindir}/mpiexec
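# Tools for writing accounting data directly to SlurmDBD; moab_2_slurmdb is
# intended to import Moab scheduler records, with its database settings taken
# from config.slurmdb.pl.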
%files slurmdb-direct
%defattr(-,root,root)
%config(noreplace) %{perl_vendorarch}/config.slurmdb.pl
%{_sbindir}/moab_2_slurmdb
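# sjstat: Perl utility wrapping sinfo/squeue to summarize job and pool status.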
%files sjstat
%defattr(-,root,root)
%{_bindir}/sjstat
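# PAM modules that restrict node logins to users with an active Slurm job;
# pam_slurm_adopt additionally adopts incoming connections into the job's
# cgroup. A typical (site-specific) /etc/pam.d/sshd entry would be:
#   account required pam_slurm.so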
%files pam_slurm
%defattr(-,root,root)
/%{_lib}/security/pam_slurm.so
/%{_lib}/security/pam_slurm_adopt.so
%changelog