Accepting request 879660 from network:cluster

OBS-URL: https://build.opensuse.org/request/show/879660
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/slurm?expand=0&rev=57
Dominique Leuenberger 2021-03-17 19:16:54 +00:00 committed by Git OBS Bridge
commit a4d0f3eef7
4 changed files with 85 additions and 35 deletions


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c4899d337279c7ba3caf23d8333f8f0a250ba18ed4b46489bbbb68ed1fb1350
size 6540096

slurm-20.11.5.tar.bz2 Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e9729ed63c5fbd2dd7b99e9148eb6f73e0261006cbb9b09c37f28ce206d5801
size 6552357


@@ -1,3 +1,58 @@
-------------------------------------------------------------------
Wed Mar 17 08:55:58 UTC 2021 - Christian Goll <cgoll@suse.com>
- Update to 20.11.5:
- New features:
* New job_container/tmpfs plugin developed by NERSC that can be used to
create per-job filesystem namespaces. Documentation and configuration
can be found in the respective man page.
- Bug fixes:
* Fix main scheduler bug where bf_hetjob_prio truncates SchedulerParameters.
* Fix sacct not displaying UserCPU, SystemCPU and TotalCPU for large times.
* scrontab - fix to return the correct index for a bad #SCRON option.
* scrontab - fix memory leak when invalid option found in #SCRON line.
* Add errno for when a user requests multiple partitions and they are using
partition based associations.
* Fix issue where a job could run in a wrong partition when using
EnforcePartLimits=any and partition based associations.
* Remove possible deadlock when adding associations/wckeys in multiple
threads.
* When using PrologFlags=alloc make sure the correct Slurm version is set
in the credential.
* When sending a job a warning signal make sure we always send SIGCONT
beforehand.
* Fix issue where a batch job would continue running if a prolog failed on a
node that wasn't the batch host and requeuing was disabled.
* Fix issue where sometimes salloc/srun wouldn't get a message about a prolog
failure in the job's stdout.
* Requeue or kill job on a prolog failure when PrologFlags is not set.
* Fix race condition causing node reboots to get requeued before
ResumeTimeout expires.
* Preserve node boot_req_time on reconfigure.
* Preserve node power_save_req_time on reconfigure.
* Fix node reboots being queued and issued multiple times and preventing the
reboot to time out.
* Fix run_command to exit correctly if track_script kills the calling thread.
* Only requeue a job when the PrologSlurmctld returns nonzero.
* When a job is signaled with SIGKILL make sure we flush all
prologs/setup scripts.
* Handle burst buffer scripts if the job is canceled while stage_in is
happening.
* When shutting down the slurmctld make note to ignore error message when
we have to kill a prolog/setup script we are tracking.
* scrontab - add support for the --open-mode option.
* acct_gather_profile/influxdb - avoid segfault on plugin shutdown if setup
has not completed successfully.
* Reduce delay in starting salloc allocations when running with prologs.
* Alter AllocNodes check to work if the allocating node's domain doesn't
match the slurmctld's. This restores the pre-20.11 behavior.
* Fix slurmctld segfault if jobs from a prior version had the now-removed
INVALID_DEPEND state flag set and were allowed to run in 20.11.
* Add job_container/tmpfs plugin to give a method to provide a private /tmp
per job.
* Set the correct core affinity when using AutoDetect.
* slurmrestd - mark "environment" as required for job submissions in schema.
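  The job_container/tmpfs plugin listed above is enabled in slurm.conf and
  configured in job_container.conf; a minimal sketch (node name and path are
  illustrative examples, see the job_container.conf man page for the
  authoritative options):

```
# slurm.conf: select the tmpfs job container plugin
JobContainerType=job_container/tmpfs

# /etc/slurm/job_container.conf: per-job private /tmp backed by a
# node-local directory (NodeName and BasePath values are examples)
NodeName=linux[1-4] BasePath=/var/spool/slurmd/jobcontainer
```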
-------------------------------------------------------------------
Tue Feb 23 16:24:16 UTC 2021 - Christian Goll <cgoll@suse.com>


@@ -18,7 +18,7 @@
# Check file META in sources: update so_version to (API_CURRENT - API_AGE)
%define so_version 36
-%define ver 20.11.4
+%define ver 20.11.5
%define _ver _20_11
%define dl_ver %{ver}
# so-version is 0 and seems to be stable
@@ -104,7 +104,7 @@ ExclusiveArch: do_not_build
%ifarch x86_64
%define have_libnuma 1
%else
%ifarch %{ix86}
%if 0%{?sle_version} >= 120200
%define have_libnuma 1
%endif
@@ -127,7 +127,7 @@ Version: %{ver}
Release: 0
Summary: Simple Linux Utility for Resource Management
License: SUSE-GPL-2.0-with-openssl-exception
Group: Productivity/Clustering/Computing
URL: https://www.schedmd.com
Source: https://download.schedmd.com/slurm/%{pname}-%{dl_ver}.tar.bz2
Source1: slurm-rpmlintrc
@@ -146,8 +146,8 @@ Recommends: (%{name}-munge = %version if munge)
Recommends: %{name}-munge = %version
%endif
Requires(pre): %{name}-node = %{version}
-Recommends: %{name}-doc = %{version}
Recommends: %{name}-config-man = %{version}
+Recommends: %{name}-doc = %{version}
BuildRequires: autoconf
BuildRequires: automake
BuildRequires: coreutils
@@ -197,11 +197,11 @@ BuildRequires: rrdtool-devel
BuildRequires: dejagnu
BuildRequires: pkgconfig(systemd)
%else
Requires(post): %insserv_prereq %fillup_prereq
%endif
BuildRoot: %{_tmppath}/%{name}-%{version}-build
Obsoletes: slurm-sched-wiki < %{version}
Obsoletes: slurmdb-direct < %{version}
%description
SLURM is a fault-tolerant scalable cluster management and job
@ -264,7 +264,6 @@ Conflicts: libslurm
This package contains the library needed to run programs dynamically linked This package contains the library needed to run programs dynamically linked
with SLURM. with SLURM.
%package -n libpmi%{pmi_so}%{?upgrade:%{_ver}} %package -n libpmi%{pmi_so}%{?upgrade:%{_ver}}
Summary: SLURM PMI Library Summary: SLURM PMI Library
Group: System/Libraries Group: System/Libraries
@@ -288,7 +287,7 @@ slurmstepd process.
%package devel
Summary: Development package for SLURM
Group: Development/Libraries/C and C++
Requires: %{libslurm} = %{version}
Requires: %{name} = %{version}
Requires: libpmi%{pmi_so} = %{version}
@ -308,7 +307,6 @@ Requires: %{name} = %{version}
%description auth-none %description auth-none
This package cobtains the SLURM NULL authentication module. This package cobtains the SLURM NULL authentication module.
%package munge %package munge
Summary: SLURM authentication and crypto implementation using Munge Summary: SLURM authentication and crypto implementation using Munge
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
@ -333,7 +331,6 @@ Group: Productivity/Clustering/Computing
sview is a graphical user interface to get and update state information for sview is a graphical user interface to get and update state information for
jobs, partitions, and nodes managed by SLURM. jobs, partitions, and nodes managed by SLURM.
%package slurmdbd %package slurmdbd
Summary: SLURM database daemon Summary: SLURM database daemon
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
@ -351,17 +348,16 @@ Recommends: %{name}-munge = %version
%if 0%{?with_systemd} %if 0%{?with_systemd}
%{?systemd_ordering} %{?systemd_ordering}
%else %else
Requires(post): %insserv_prereq %fillup_prereq Requires(post): %insserv_prereq %fillup_prereq
%endif %endif
Obsoletes: slurm-sched-wiki < %{version} Obsoletes: slurm-sched-wiki < %{version}
Obsoletes: slurmdb-direct < %{version} Obsoletes: slurmdb-direct < %{version}
%{?upgrade:Provides: %{pname}-slurmdbd = %{version}} %{?upgrade:Provides: %{pname}-slurmdbd = %{version}}
%{?upgrade:Conflicts: %{pname}-slurmdb} %{?upgrade:Conflicts: %{pname}-slurmdb}
%description slurmdbd %description slurmdbd
The SLURM database daemon provides accounting of jobs in a database. The SLURM database daemon provides accounting of jobs in a database.
%package sql %package sql
Summary: Slurm SQL support Summary: Slurm SQL support
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
@ -371,7 +367,6 @@ Group: Productivity/Clustering/Computing
%description sql %description sql
Contains interfaces to MySQL for use by SLURM. Contains interfaces to MySQL for use by SLURM.
%package plugins %package plugins
Summary: SLURM plugins (loadable shared objects) Summary: SLURM plugins (loadable shared objects)
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
@ -419,7 +414,6 @@ Mail program used directly by the SLURM daemons. On completion of a job,
it waits for accounting information to be available and includes that it waits for accounting information to be available and includes that
information in the email body. information in the email body.
%package sjstat %package sjstat
Summary: Perl tool to print SLURM job state information Summary: Perl tool to print SLURM job state information
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
@ -463,7 +457,7 @@ through Lua.
%package rest %package rest
Summary: Slurm REST API Interface Summary: Slurm REST API Interface
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
Requires: %{name}-config = %{version} Requires: %{name}-config = %{version}
%if 0%{?have_http_parser} %if 0%{?have_http_parser}
BuildRequires: http-parser-devel BuildRequires: http-parser-devel
@ -480,7 +474,7 @@ Recommends: %{name}-munge = %version
This package provides the interface to SLURM via REST API. This package provides the interface to SLURM via REST API.
%package node %package node
Summary: Minimal slurm node Summary: Minimal slurm node
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
Requires: %{name}-config = %{version} Requires: %{name}-config = %{version}
Requires: %{name}-plugins = %{version} Requires: %{name}-plugins = %{version}
@ -492,7 +486,7 @@ Recommends: %{name}-munge = %version
%if 0%{?with_systemd} %if 0%{?with_systemd}
%{?systemd_ordering} %{?systemd_ordering}
%else %else
Requires(post): %insserv_prereq %fillup_prereq Requires(post): %insserv_prereq %fillup_prereq
%endif %endif
%{?upgrade:Provides: %{pname}-node = %{version}} %{?upgrade:Provides: %{pname}-node = %{version}}
%{?upgrade:Conflicts: %{pname}-node} %{?upgrade:Conflicts: %{pname}-node}
@ -533,8 +527,8 @@ Summary: Store accounting data in hdf5
Group: Productivity/Clustering/Computing Group: Productivity/Clustering/Computing
%description hdf5 %description hdf5
Plugin to store accounting in the hdf5 file format. This plugin has to be Plugin to store accounting in the hdf5 file format. This plugin has to be
activated in the slurm configuration. Includes also utility the program activated in the slurm configuration. Includes also utility the program
sh5utils to merge this hdf5 files or extract data from them. sh5utils to merge this hdf5 files or extract data from them.
%package cray %package cray
@ -545,7 +539,6 @@ Group: Productivity/Clustering/Computing
Plugins for specific cray hardware, includes power and knl node management. Plugins for specific cray hardware, includes power and knl node management.
Contains also cray specific documentation. Contains also cray specific documentation.
%prep %prep
%setup -q -n %{pname}-%{dl_ver} %setup -q -n %{pname}-%{dl_ver}
%patch0 -p1 %patch0 -p1
@ -629,7 +622,7 @@ sed -i -e '/^ControlMachine=/i# Ordered List of Control Nodes' \
-e 's#BackupController=.*#SlurmctldHost=linux1(10.0.10.21)#' \ -e 's#BackupController=.*#SlurmctldHost=linux1(10.0.10.21)#' \
-e '/.*ControlAddr=.*/d' \ -e '/.*ControlAddr=.*/d' \
-e '/.*BackupAddr=.*/d' %{buildroot}/%{_sysconfdir}/%{pname}/slurm.conf -e '/.*BackupAddr=.*/d' %{buildroot}/%{_sysconfdir}/%{pname}/slurm.conf
cat >>%{buildroot}/%{_sysconfdir}/%{pname}/slurm.conf <<EOF cat >>%{buildroot}/%{_sysconfdir}/%{pname}/slurm.conf <<EOF
# SUSE default configuration # SUSE default configuration
PropagateResourceLimitsExcept=MEMLOCK PropagateResourceLimitsExcept=MEMLOCK
NodeName=linux State=UNKNOWN NodeName=linux State=UNKNOWN
@ -686,8 +679,8 @@ libdir=%{_libdir}
Cflags: -I\${includedir} Cflags: -I\${includedir}
Libs: -L\${libdir} -lslurm Libs: -L\${libdir} -lslurm
Description: Slurm API Description: Slurm API
Name: %{pname} Name: %{pname}
Version: %{version} Version: %{version}
EOF EOF
# Enable rotation of log files # Enable rotation of log files
@ -706,7 +699,7 @@ cat <<EOF > %{buildroot}/%{_sysconfdir}/logrotate.d/${service}.conf
copytruncate copytruncate
postrotate postrotate
pgrep ${service} && killall -SIGUSR2 ${service} || exit 0 pgrep ${service} && killall -SIGUSR2 ${service} || exit 0
endscript endscript
} }
EOF EOF
done done
@ -728,7 +721,7 @@ Alias /slurm/ "/usr/share/doc/slurm-%{ver}/html/"
EOF EOF
cat > %{buildroot}/%{_sysconfdir}/%{pname}/nss_slurm.conf <<EOF cat > %{buildroot}/%{_sysconfdir}/%{pname}/nss_slurm.conf <<EOF
## Optional config for libnss_slurm ## Optional config for libnss_slurm
## Specify if different from default ## Specify if different from default
# SlurmdSpoolDir /var/spool/slurmd # SlurmdSpoolDir /var/spool/slurmd
## Specify if does not match hostname ## Specify if does not match hostname
# NodeName myname # NodeName myname
@ -826,7 +819,7 @@ rm -f %{buildroot}/%{_mandir}/man8/slurmrestd.*
%else %else
%restart_on_update slurmd %restart_on_update slurmd
%insserv_cleanup %insserv_cleanup
%endif %endif
%pre config %pre config
%define slurmdir %{_sysconfdir}/slurm %define slurmdir %{_sysconfdir}/slurm
@ -870,7 +863,7 @@ exit 0
str = string.gsub(str, '%%s+$', '') str = string.gsub(str, '%%s+$', '')
str = string.gsub(str, '[\\n\\r]+', ' ') str = string.gsub(str, '[\\n\\r]+', ' ')
if str == "active" then if str == "active" then
local file = io.open("/run/%{1}.rst","w"); file:close() local file = io.open("/run/%{1}.rst","w"); file:close()
end end
end end
} }
@ -880,7 +873,7 @@ exit 0
# Do NOT delete the line breaks in the macro definition: they help # Do NOT delete the line breaks in the macro definition: they help
# to cope with different versions of the %%_restart_on_update. # to cope with different versions of the %%_restart_on_update.
} }
%define _res_update() %{?with_systemd: %define _res_update() %{?with_systemd:
%{expand:%%_restart_on_update %{?*}} %{expand:%%_restart_on_update %{?*}}
} }
@ -906,9 +899,9 @@ exit 0
%_rest slurmdbd %_rest slurmdbd
%if 0%{?sle_version} > 120200 || 0%{?suse_version} > 1320 %if 0%{?sle_version} > 120200 || 0%{?suse_version} > 1320
%define my_license %license %define my_license %license
%else %else
%define my_license %doc %define my_license %doc
%endif %endif
%files %files
@ -1093,6 +1086,7 @@ exit 0
%{_libdir}/slurm/jobcomp_script.so %{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_container_cncu.so %{_libdir}/slurm/job_container_cncu.so
%{_libdir}/slurm/job_container_none.so %{_libdir}/slurm/job_container_none.so
%{_libdir}/slurm/job_container_tmpfs.so
%{_libdir}/slurm/job_submit_all_partitions.so %{_libdir}/slurm/job_submit_all_partitions.so
%{_libdir}/slurm/job_submit_defaults.so %{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so %{_libdir}/slurm/job_submit_logging.so
@ -1236,6 +1230,7 @@ exit 0
%{_mandir}/man5/nonstop.conf.5.* %{_mandir}/man5/nonstop.conf.5.*
%{_mandir}/man5/topology.* %{_mandir}/man5/topology.*
%{_mandir}/man5/knl.conf.5.* %{_mandir}/man5/knl.conf.5.*
%{_mandir}/man5/job_container.conf.5.*
%if 0%{?have_hdf5} %if 0%{?have_hdf5}
%files hdf5 %files hdf5