Accepting request 879660 from network:cluster

OBS-URL: https://build.opensuse.org/request/show/879660
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/slurm?expand=0&rev=57
Dominique Leuenberger 2021-03-17 19:16:54 +00:00 committed by Git OBS Bridge
commit a4d0f3eef7
4 changed files with 85 additions and 35 deletions

slurm-20.11.4.tar.bz2

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c4899d337279c7ba3caf23d8333f8f0a250ba18ed4b46489bbbb68ed1fb1350
size 6540096

slurm-20.11.5.tar.bz2

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e9729ed63c5fbd2dd7b99e9148eb6f73e0261006cbb9b09c37f28ce206d5801
size 6552357

slurm.changes

@@ -1,3 +1,58 @@
-------------------------------------------------------------------
Wed Mar 17 08:55:58 UTC 2021 - Christian Goll <cgoll@suse.com>
- Update to 20.11.5:
- New features:
* New job_container/tmpfs plugin developed by NERSC that can be used to
create per-job filesystem namespaces. Documentation and configuration
can be found in the respective man page (see the configuration sketch below).
- Bug fixes:
* Fix main scheduler bug where bf_hetjob_prio truncates SchedulerParameters.
* Fix sacct not displaying UserCPU, SystemCPU and TotalCPU for large times.
* scrontab - fix to return the correct index for a bad #SCRON option.
* scrontab - fix memory leak when invalid option found in #SCRON line.
* Add errno for when a user requests multiple partitions and they are using
partition-based associations.
* Fix issue where a job could run in the wrong partition when using
EnforcePartLimits=any and partition-based associations.
* Remove possible deadlock when adding associations/wckeys in multiple
threads.
* When using PrologFlags=alloc make sure the correct Slurm version is set
in the credential.
* When sending a job a warning signal make sure we always send SIGCONT
beforehand.
* Fix issue where a batch job would continue running if a prolog failed on a
node that wasn't the batch host and requeuing was disabled.
* Fix issue where sometimes salloc/srun wouldn't get a message about a prolog
failure in the job's stdout.
* Requeue or kill job on a prolog failure when PrologFlags is not set.
* Fix race condition causing node reboots to get requeued before
ResumeTimeout expires.
* Preserve node boot_req_time on reconfigure.
* Preserve node power_save_req_time on reconfigure.
* Fix node reboots being queued and issued multiple times and preventing the
reboot from timing out.
* Fix run_command to exit correctly if track_script kills the calling thread.
* Only requeue a job when the PrologSlurmctld returns nonzero.
* When a job is signaled with SIGKILL make sure we flush all
prologs/setup scripts.
* Handle burst buffer scripts if the job is canceled while stage_in is
happening.
* When shutting down the slurmctld, ignore the error message raised when a
tracked prolog/setup script has to be killed.
* scrontab - add support for the --open-mode option (see the example entry below).
* acct_gather_profile/influxdb - avoid segfault on plugin shutdown if setup
has not completed successfully.
* Reduce delay in starting salloc allocations when running with prologs.
* Alter AllocNodes check to work if the allocating node's domain doesn't
match the slurmctld's. This restores the pre-20.11 behavior.
* Fix slurmctld segfault if jobs from a prior version had the now-removed
INVALID_DEPEND state flag set and were allowed to run in 20.11.
* Add job_container/tmpfs plugin to give a method to provide a private /tmp
per job.
* Set the correct core affinity when using AutoDetect.
* slurmrestd - mark "environment" as required for job submissions in schema.
-------------------------------------------------------------------
Tue Feb 23 16:24:16 UTC 2021 - Christian Goll <cgoll@suse.com>

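The new job_container/tmpfs plugin noted in the changelog above is enabled
through slurm.conf plus a job_container.conf file. A minimal sketch, assuming
a node-local base directory (the path below is an example, not part of this
update; consult the job_container.conf man page shipped with this package):

    # slurm.conf (relevant lines only)
    JobContainerType=job_container/tmpfs
    PrologFlags=contain           # required so each job is placed in its namespace

    # job_container.conf
    BasePath=/var/tmp/slurm-jobs  # node-local directory backing the private /tmp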
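The scrontab --open-mode fix listed above applies to #SCRON option lines. A
hypothetical entry (schedule, output path, and script are invented for
illustration) could look like:

    # edit with: scrontab -e
    #SCRON --open-mode=append
    #SCRON --output=/home/user/logs/nightly.out
    0 3 * * * /home/user/bin/nightly-report.sh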
slurm.spec

@@ -18,7 +18,7 @@
# Check file META in sources: update so_version to (API_CURRENT - API_AGE)
%define so_version 36
%define ver 20.11.4
%define ver 20.11.5
%define _ver _20_11
%define dl_ver %{ver}
# so-version is 0 and seems to be stable
@@ -146,8 +146,8 @@ Recommends: (%{name}-munge = %version if munge)
Recommends: %{name}-munge = %version
%endif
Requires(pre): %{name}-node = %{version}
Recommends: %{name}-doc = %{version}
Recommends: %{name}-config-man = %{version}
Recommends: %{name}-doc = %{version}
BuildRequires: autoconf
BuildRequires: automake
BuildRequires: coreutils
@@ -264,7 +264,6 @@ Conflicts: libslurm
This package contains the library needed to run programs dynamically linked
with SLURM.
%package -n libpmi%{pmi_so}%{?upgrade:%{_ver}}
Summary: SLURM PMI Library
Group: System/Libraries
@@ -308,7 +307,6 @@ Requires: %{name} = %{version}
%description auth-none
This package contains the SLURM NULL authentication module.
%package munge
Summary: SLURM authentication and crypto implementation using Munge
Group: Productivity/Clustering/Computing
@@ -333,7 +331,6 @@ Group: Productivity/Clustering/Computing
sview is a graphical user interface to get and update state information for
jobs, partitions, and nodes managed by SLURM.
%package slurmdbd
Summary: SLURM database daemon
Group: Productivity/Clustering/Computing
@@ -361,7 +358,6 @@ Obsoletes: slurmdb-direct < %{version}
%description slurmdbd
The SLURM database daemon provides accounting of jobs in a database.
%package sql
Summary: Slurm SQL support
Group: Productivity/Clustering/Computing
@@ -371,7 +367,6 @@ Group: Productivity/Clustering/Computing
%description sql
Contains interfaces to MySQL for use by SLURM.
%package plugins
Summary: SLURM plugins (loadable shared objects)
Group: Productivity/Clustering/Computing
@@ -419,7 +414,6 @@ Mail program used directly by the SLURM daemons. On completion of a job,
it waits for accounting information to be available and includes that
information in the email body.
%package sjstat
Summary: Perl tool to print SLURM job state information
Group: Productivity/Clustering/Computing
@@ -545,7 +539,6 @@ Group: Productivity/Clustering/Computing
Plugins for specific Cray hardware, including power and KNL node management.
Also contains Cray-specific documentation.
%prep
%setup -q -n %{pname}-%{dl_ver}
%patch0 -p1
@@ -1093,6 +1086,7 @@ exit 0
%{_libdir}/slurm/jobcomp_script.so
%{_libdir}/slurm/job_container_cncu.so
%{_libdir}/slurm/job_container_none.so
%{_libdir}/slurm/job_container_tmpfs.so
%{_libdir}/slurm/job_submit_all_partitions.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
@@ -1236,6 +1230,7 @@ exit 0
%{_mandir}/man5/nonstop.conf.5.*
%{_mandir}/man5/topology.*
%{_mandir}/man5/knl.conf.5.*
%{_mandir}/man5/job_container.conf.5.*
%if 0%{?have_hdf5}
%files hdf5