Accepting request 441490 from home:eeich:branches:network:cluster

- Fix build with and without OHPC_BUILD define.
- Fix build for systemd and non-systemd.

- Updated to 16-05-5 - equivalent to OpenHPC 1.2.
  * Fix issue with resizing jobs and limits not being tracked correctly.
  * BGQ - Remove redeclaration of job_read_lock.
  * BGQ - Tighter locks around structures when nodes/cables change state.
  * Make it possible to change CPUsPerTask with scontrol.
  * Make it so scontrol update part qos= will take away a partition QOS from
    a partition (see the first example after this change list).
  * Backfill scheduling properly synchronized with Cray Node Health Check.
    Prior logic could result in highest priority job getting improperly
    postponed.
  * Make it so daemons also support TopologyParam=NoInAddrAny.
  * If scancel is operating on a large number of jobs and RPC responses from
    the slurmctld daemon are slow, introduce a delay in sending the cancel job
    requests from scancel to reduce the load on slurmctld.
  * Remove redundant logic when updating a job's task count.
  * MySQL - Fix querying jobs with reservations when the IDs have rolled.
  * Perl - Fix use of uninitialized variable in slurm_job_step_get_pids.
  * Launch batch job requesting --reboot after the boot completes.
  * Do not attempt to power down a node which has never responded if the
    slurmctld daemon restarts without state.
  * Fix for possible slurmstepd segfault on invalid user ID.
  * MySQL - Fix for possible race condition when archiving multiple clusters
    at the same time.
  * Add logic so that slurmstepd can be launched under valgrind.
  * Increase buffer size to read /proc/*/stat files.
  * Remove the SchedulerParameters option of "assoc_limit_continue", making it
    the default value. Add option of "assoc_limit_stop". If "assoc_limit_stop"
    is set and a job cannot start due to association limits, then do not attempt
    to initiate any lower priority jobs in that partition. Setting this can
    decrease system throughput and utilization, but avoids potentially starving
    larger jobs that would otherwise be blocked from launching indefinitely
    (a slurm.conf sketch follows this list).
  * Update a node's socket and cores per socket counts as needed after a node
    boot to reflect configuration changes which can occur on KNL processors.
    Note that the node's total core count must not change, only the distribution
    of cores across varying socket counts (KNL NUMA nodes treated as sockets by
    Slurm).
  * Rename partition configuration from "Shared" to "OverSubscribe". Rename
    salloc, sbatch, srun option from "--shared" to "--oversubscribe". The old
    options will continue to function. Output field names also changed in
    scontrol, sinfo, squeue and sview (example below this list).
  * Add SLURM_UMASK environment variable to user job.
  * knl_conf: Added new configuration parameter of CapmcPollFreq.
  * Clean up two minor Coverity warnings.
  * Make it so the TRES units in a job's formatted string are converted the
    same way they are in a step.
  * Correct partition's MaxCPUsPerNode enforcement when nodes are shared by
    multiple partitions.
  * node_feature/knl_cray - Prevent slurmctld GRES errors for "hbm" references.
  * Display thread name instead of thread id and remove process name in stderr
    logging for "thread_id" LogTimeFormat.
  * Log IP address of bad incoming message to slurmctld.
  * If a user requests tasks, nodes, and ntasks-per-node, and
    tasks-per-node/nodes != tasks, print a warning and ignore ntasks-per-node
    (illustrated below this list).
  * Release CPU "owner" file locks.
  * Update seff to fix warnings with ncpus, and list slurm-perlapi dependency
    in spec file.
  * Allow QOS timelimit to override partition timelimit when EnforcePartLimits
    is set to all/any (sketch after this list).
  * Make it so qsub will do a "basename" on a wrapped command for the output
    and error files.
  * Prevent job stuck in configuring state if slurmctld daemon restarted while
    PrologSlurmctld is running. Also re-issue burst_buffer/pre-load operation
    as needed.
  * Move test for job wait reason value of BurstBufferResources and
    BurstBufferStageIn later in the scheduling logic.
  * Document which srun options apply to only job, only step, or job and step
    allocations.
  * Use more compatible function to get thread name (>= 2.6.11).
  * Make it so the extern step uses a reverse tree when cleaning up.
  * If extern step doesn't get added into the proctrack plugin make sure the
    sleep is killed.
  * Add web links to Slurm Diamond Collectors (from Harvard University) and
    collectd (from EDF).
  * Add job_submit plugin for the "reboot" field.
  * Make some more Slurm constants (INFINITE, NO_VAL64, etc.) available to
    job_submit/lua plugins.
  * Pass a task ID of -1 into spank_task_post_fork for the extern step.
  * MySQL - Slightly better logic if a job completion comes in with an end time
    of 0.
  * If the task/cgroup plugin is configured with ConstrainRAMSpace=yes, set the
    soft memory limit to the allocated memory limit (previously no soft limit
    was set).
  * Streamline when schedule() is called when running with message aggregation
    after a batch script completes.
  * Fix incorrect casting when [un]packing derived_ec on slurmdb_job_rec_t.
  * Document that persistent burst buffers cannot be created or destroyed using
    the salloc or srun --bb options (see the sbatch sketch after this list).
  * Add support for setting the SLURM_JOB_ACCOUNT, SLURM_JOB_QOS and
    SLURM_JOB_RESERVATION environment variables for the salloc command (example
    after this list). Document the same environment variables for the salloc,
    sbatch and srun commands in their man pages.
  * Fix issue where sacctmgr load cluster.cfg wouldn't load associations
    that had a partition in them.
  * Don't return the extern step from sstat by default.
  * In sstat print 'extern' instead of 4294967295 for the extern step.
  * Make advanced reservations work properly with core specialization.
  * slurmstepd modified to pre-load all relevant plugins at startup to avoid
    the possibility of modified plugins later resulting in inconsistent API
    or data structures and a failure of slurmstepd.
  * Export functions from parse_time.c in libslurm.so.
  * Export unit convert functions from slurm_protocol_api.c in libslurm.so.
  * Fix scancel to allow multiple steps from a job to be cancelled at once.
  * Update and expand upgrade guide (in Quick Start Administrator web page).
  * burst_buffer/cray: Requeue, but do not hold a job which fails the pre_run
    operation.
  * Ensure reported expected job start time is not in the past for pending jobs.
  * Add support for PMIx v2.
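
  First example (for the "scontrol update part qos=" item above): attaching a
  partition QOS and then taking it away again. A minimal sketch; the partition
  and QOS names are invented:

    scontrol update PartitionName=normal QOS=lowprio   # attach a partition QOS
    scontrol update PartitionName=normal QOS=          # empty value removes it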
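
  The slurm.conf sketch for "assoc_limit_stop": a single SchedulerParameters
  line enables the behavior described above (assoc_limit_continue is now the
  default, so it no longer needs to be listed):

    SchedulerParameters=assoc_limit_stop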
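
  The oversubscribe example: old and new spellings of the renamed partition
  keyword and command-line option (the old forms keep working; the node list
  is invented):

    # slurm.conf partition definition, old vs. new keyword:
    #   PartitionName=normal Nodes=c[1-4] Shared=YES
    #   PartitionName=normal Nodes=c[1-4] OverSubscribe=YES
    srun --oversubscribe -n 4 hostname   # was: srun --shared -n 4 hostname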
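
  The ntasks-per-node warning, illustrated with a deliberately inconsistent
  request: 2 nodes at 4 tasks per node cannot match -n 5, so the
  --ntasks-per-node value is ignored and a warning is printed:

    srun -N 2 -n 5 --ntasks-per-node=4 hostname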
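
  The QOS-over-partition-timelimit sketch: with EnforcePartLimits set to ALL
  (or ANY) in slurm.conf, a QOS wall-clock limit above the partition's MaxTime
  now takes precedence. The QOS name and limit below are invented:

    # slurm.conf:
    EnforcePartLimits=ALL
    # on the accounting side, a QOS with a longer wall-clock limit:
    sacctmgr modify qos where name=long set MaxWall=2-00:00:00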
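
  The sbatch sketch for persistent burst buffers: they can be created from a
  batch script directive, but not via salloc/srun --bb. Cray burst_buffer
  syntax; the buffer name, capacity and application are invented:

    #!/bin/bash
    #SBATCH -N 1
    #BB create_persistent name=mydata capacity=100GB access=striped type=scratch
    srun my_app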
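
  The salloc example: the three environment variables are now set inside the
  allocation; the account, QOS and reservation names are invented:

    salloc --account=proj --qos=normal --reservation=maint \
        bash -c 'echo $SLURM_JOB_ACCOUNT $SLURM_JOB_QOS $SLURM_JOB_RESERVATION'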

OBS-URL: https://build.opensuse.org/request/show/441490
OBS-URL: https://build.opensuse.org/package/show/network:cluster/slurm?expand=0&rev=12
Corot Sebastien 2016-11-24 22:01:51 +00:00 committed by Git OBS Bridge
parent 3a8ac1a69b
commit 7bac92b6f9
6 changed files with 361 additions and 47 deletions

@ -0,0 +1,45 @@
From: Egbert Eich <eich@freedesktop.org>
Date: Sun Oct 16 13:10:39 2016 +0200
Subject: plugins/cgroup: Fix slurmd for new API in hwloc-2.0
Git-repo: https://github.com/SchedMD/slurm
Git-commit: 018eee7d8dee1f769477263a891948e5bca8f738
References:
The API of hwloc has changed considerably for version 2.0.
For a summary check:
https://github.com/open-mpi/hwloc/wiki/Upgrading-to-v2.0-API
Test for the API version to support both the old and new API.
Signed-off-by: Egbert Eich <eich@freedesktop.org>
Signed-off-by: Egbert Eich <eich@suse.de>
---
src/plugins/task/cgroup/task_cgroup_cpuset.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/src/plugins/task/cgroup/task_cgroup_cpuset.c b/src/plugins/task/cgroup/task_cgroup_cpuset.c
index 9c41ea4..94a4b09 100644
--- a/src/plugins/task/cgroup/task_cgroup_cpuset.c
+++ b/src/plugins/task/cgroup/task_cgroup_cpuset.c
@@ -641,8 +641,23 @@ static int _get_cpuinfo(uint32_t *nsockets, uint32_t *ncores,
/* parse full system info */
hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM);
/* ignores cache, misc */
+#if HWLOC_API_VERSION < 0x00020000
hwloc_topology_ignore_type (topology, HWLOC_OBJ_CACHE);
hwloc_topology_ignore_type (topology, HWLOC_OBJ_MISC);
+#else
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L1CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L2CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L3CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L4CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L5CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_MISC,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+#endif
/* load topology */
if (hwloc_topology_load(topology)) {
error("%s: hwloc_topology_load() failed", __func__);

@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:710a6d60c31b1627e7d102cf1aba0fd6aca3d16688c54d7203e0d5486819b1e6
size 9077914

slurm-16-05-5-1.tar.gz (new file)

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d3c30c1683fd207dda22f4078e038d110fa5bce133828fbd8e1ae6317f2ad38
size 8582827

@ -1,3 +1,123 @@
-------------------------------------------------------------------
Tue Nov 22 21:42:04 UTC 2016 - eich@suse.com
- Fix build with and without OHPC_BUILD define.
- Fix build for systemd and non-systemd.
-------------------------------------------------------------------
Fri Nov 4 20:15:47 UTC 2016 - eich@suse.com
- Updated to 16-05-5 - equivalent to OpenHPC 1.2.
* Fix issue with resizing jobs and limits not being tracked correctly.
* BGQ - Remove redeclaration of job_read_lock.
* BGQ - Tighter locks around structures when nodes/cables change state.
* Make it possible to change CPUsPerTask with scontrol.
* Make it so scontrol update part qos= will take away a partition QOS from
a partition.
* Backfill scheduling properly synchronized with Cray Node Health Check.
Prior logic could result in highest priority job getting improperly
postponed.
* Make it so daemons also support TopologyParam=NoInAddrAny.
* If scancel is operating on a large number of jobs and RPC responses from
the slurmctld daemon are slow, introduce a delay in sending the cancel job
requests from scancel to reduce the load on slurmctld.
* Remove redundant logic when updating a job's task count.
* MySQL - Fix querying jobs with reservations when the IDs have rolled.
* Perl - Fix use of uninitialized variable in slurm_job_step_get_pids.
* Launch batch job requesting --reboot after the boot completes.
* Do not attempt to power down a node which has never responded if the
slurmctld daemon restarts without state.
* Fix for possible slurmstepd segfault on invalid user ID.
* MySQL - Fix for possible race condition when archiving multiple clusters
at the same time.
* Add logic so that slurmstepd can be launched under valgrind.
* Increase buffer size to read /proc/*/stat files.
* Remove the SchedulerParameters option of "assoc_limit_continue", making it
the default value. Add option of "assoc_limit_stop". If "assoc_limit_stop"
is set and a job cannot start due to association limits, then do not attempt
to initiate any lower priority jobs in that partition. Setting this can
decrease system throughput and utilization, but avoids potentially starving
larger jobs that would otherwise be blocked from launching indefinitely.
* Update a node's socket and cores per socket counts as needed after a node
boot to reflect configuration changes which can occur on KNL processors.
Note that the node's total core count must not change, only the distribution
of cores across varying socket counts (KNL NUMA nodes treated as sockets by
Slurm).
* Rename partition configuration from "Shared" to "OverSubscribe". Rename
salloc, sbatch, srun option from "--shared" to "--oversubscribe". The old
options will continue to function. Output field names also changed in
scontrol, sinfo, squeue and sview.
* Add SLURM_UMASK environment variable to user job.
* knl_conf: Added new configuration parameter of CapmcPollFreq.
* Clean up two minor Coverity warnings.
* Make it so the TRES units in a job's formatted string are converted the
same way they are in a step.
* Correct partition's MaxCPUsPerNode enforcement when nodes are shared by
multiple partitions.
* node_feature/knl_cray - Prevent slurmctld GRES errors for "hbm" references.
* Display thread name instead of thread id and remove process name in stderr
logging for "thread_id" LogTimeFormat.
* Log IP address of bad incoming message to slurmctld.
* If a user requests tasks, nodes, and ntasks-per-node, and
tasks-per-node/nodes != tasks, print a warning and ignore ntasks-per-node.
* Release CPU "owner" file locks.
* Update seff to fix warnings with ncpus, and list slurm-perlapi dependency
in spec file.
* Allow QOS timelimit to override partition timelimit when EnforcePartLimits
is set to all/any.
* Make it so qsub will do a "basename" on a wrapped command for the output
and error files.
* Prevent job stuck in configuring state if slurmctld daemon restarted while
PrologSlurmctld is running. Also re-issue burst_buffer/pre-load operation
as needed.
* Move test for job wait reason value of BurstBufferResources and
BurstBufferStageIn later in the scheduling logic.
* Document which srun options apply to only job, only step, or job and step
allocations.
* Use more compatible function to get thread name (>= 2.6.11).
* Make it so the extern step uses a reverse tree when cleaning up.
* If extern step doesn't get added into the proctrack plugin make sure the
sleep is killed.
* Add web links to Slurm Diamond Collectors (from Harvard University) and
collectd (from EDF).
* Add job_submit plugin for the "reboot" field.
* Make some more Slurm constants (INFINITE, NO_VAL64, etc.) available to
job_submit/lua plugins.
* Pass a task ID of -1 into spank_task_post_fork for the extern step.
* MySQL - Slightly better logic if a job completion comes in with an end time
of 0.
* If the task/cgroup plugin is configured with ConstrainRAMSpace=yes, set the
soft memory limit to the allocated memory limit (previously no soft limit
was set).
* Streamline when schedule() is called when running with message aggregation
after a batch script completes.
* Fix incorrect casting when [un]packing derived_ec on slurmdb_job_rec_t.
* Document that persistent burst buffers cannot be created or destroyed using
the salloc or srun --bb options.
* Add support for setting the SLURM_JOB_ACCOUNT, SLURM_JOB_QOS and
SLURM_JOB_RESERVATION environment variables for the salloc command.
Document the same environment variables for the salloc, sbatch and srun
commands in their man pages.
* Fix issue where sacctmgr load cluster.cfg wouldn't load associations
that had a partition in them.
* Don't return the extern step from sstat by default.
* In sstat print 'extern' instead of 4294967295 for the extern step.
* Make advanced reservations work properly with core specialization.
* slurmstepd modified to pre-load all relevant plugins at startup to avoid
the possibility of modified plugins later resulting in inconsistent API
or data structures and a failure of slurmstepd.
* Export functions from parse_time.c in libslurm.so.
* Export unit convert functions from slurm_protocol_api.c in libslurm.so.
* Fix scancel to allow multiple steps from a job to be cancelled at once.
* Update and expand upgrade guide (in Quick Start Administrator web page).
* burst_buffer/cray: Requeue, but do not hold a job which fails the pre_run
operation.
* Ensure reported expected job start time is not in the past for pending jobs.
* Add support for PMIx v2.
Required for FATE#316379.
-------------------------------------------------------------------
Mon Oct 17 13:25:52 UTC 2016 - eich@suse.com

@ -20,23 +20,21 @@
%define vers_f() %(%trans)
%define vers_t() %(%trunc)
%if 0%{?suse_version} >= 1220
%if 0%{?suse_version} >= 1220 || 0%{?sle_version} >= 120000
%define with_systemd 1
%else
%define with_systemd 0
%endif
%if 0%{suse_version} >= 1310
%define have_netloc 1
%endif
%define libslurm libslurm29
%define ver_exp 15-08-7-1
%define ver_exp 16-05-5-1
Name: slurm
Version: %{vers_f %ver_exp}
Release: 0
Summary: Simple Linux Utility for Resource Management
License: GPL-3.0
License: SUSE-GPL-2.0-with-openssl-exception
Group: Productivity/Clustering/Computing
Url: https://computing.llnl.gov/linux/slurm/
Source: https://github.com/SchedMD/slurm/archive/%{name}-%{ver_exp}.tar.gz
@ -44,7 +42,8 @@ Source1: slurm.service
Source2: slurmdbd.service
Patch0: slurm-2.4.4-rpath.patch
Patch1: slurm-2.4.4-init.patch
Patch2: slurmd-Fix-for-newer-API-versions.patch
Patch2: slurmd-Fix-slurmd-for-new-API-in-hwloc-2.0.patch
Patch3: plugins-cgroup-Fix-slurmd-for-new-API-in-hwloc-2.0.patch
Requires: slurm-plugins = %{version}
BuildRequires: fdupes
BuildRequires: gcc-c++
@ -62,7 +61,7 @@ BuildRequires: pkgconfig
BuildRequires: postgresql-devel >= 8.0.0
BuildRequires: python
BuildRequires: readline-devel
%if %{with_systemd}
%if 0%{?with_systemd}
%{?systemd_requires}
BuildRequires: systemd
%else
@ -161,7 +160,7 @@ This package contains the SLURM plugin for the Maui or Moab scheduler wiki inter
Summary: SLURM database daemon
Group: Productivity/Clustering/Computing
Requires: slurm-plugins = %{version}
%if %{with_systemd}
%if 0%{?with_systemd}
%{?systemd_requires}
%else
PreReq: %insserv_prereq %fillup_prereq
@ -187,6 +186,23 @@ Provides: torque-client
%description torque
Torque wrapper scripts used for helping migrate from Torque/PBS to SLURM.
%package openlava
Summary: OpenLava/LSF wrappers for transition from OpenLava/LSF to Slurm
Group: Development/System
Requires: slurm-perlapi
%package seff
Summary: Mail tool that includes job statistics in user notification email
Group: Development/System
Requires: slurm-perlapi
%description seff
Mail program used directly by the Slurm daemons. On completion of a job, it
waits for the job's accounting information to be available and includes that
information in the email body.
%description openlava
OpenLava wrapper scripts used for helping migrate from OpenLava/LSF to Slurm.
%package slurmdb-direct
Summary: Wrappers to write directly to the slurmdb
@ -233,6 +249,7 @@ or any user who has allocated resources on the node according to the SLURM
%patch0 -p1
%patch1 -p1
%patch2 -p1
%patch3 -p1
chmod 0644 doc/html/*.{gif,jpg}
%build
@ -244,13 +261,17 @@ make %{?_smp_mflags}
%install
%makeinstall
make install-contrib DESTDIR=$RPM_BUILD_ROOT
make install-contrib DESTDIR=$RPM_BUILD_ROOT PERL_MM_PARAMS="INSTALLDIRS=vendor"
rm -f $RPM_BUILD_ROOT/%{_sysconfdir}/slurm.conf.template
rm -f $RPM_BUILD_ROOT/%{_sbindir}/slurmconfgen.py
%if %{with_systemd}
%if 0%{?with_systemd}
mkdir -p %{buildroot}%{_unitdir}
install -p -m644 %{S:1} %{S:2} %{buildroot}%{_unitdir}
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurm
install -p -m644 etc/slurmd.service etc/slurmdbd.service etc/slurmctld.service %{buildroot}%{_unitdir}
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmd
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmdbd
ln -s /usr/sbin/service %{buildroot}%{_sbindir}/rcslurmctld
%else
install -D -m755 etc/init.d.slurm $RPM_BUILD_ROOT%{_initrddir}/slurm
install -D -m755 etc/init.d.slurmdbd $RPM_BUILD_ROOT%{_initrddir}/slurmdbd
@ -258,13 +279,37 @@ ln -sf %{_initrddir}/slurm %{buildroot}%{_sbindir}/rcslurm
ln -sf %{_initrddir}/slurmdbd %{buildroot}%{_sbindir}/rcslurmdbd
%endif
install -D -m644 etc/slurm.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.conf
install -D -m644 etc/slurm.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.conf%{?OHPC_BUILD:.example}
install -D -m644 etc/slurmdbd.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurmdbd.conf
install -D -m644 etc/cgroup.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup.conf
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_common
install -D -m644 etc/cgroup_allowed_devices_file.conf.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
install -D -m755 etc/slurm.epilog.clean $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/slurm.epilog.clean
install -D -m755 contribs/sjstat $RPM_BUILD_ROOT%{_bindir}/sjstat
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_common.example
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_freezer
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_cpuset
install -D -m755 etc/cgroup.release_common.example $RPM_BUILD_ROOT%{_sysconfdir}/%{name}/cgroup/release_memory
install -D -m644 etc/slurmdbd.conf.example ${RPM_BUILD_ROOT}%{_sysconfdir}/%{name}/slurmdbd.conf.example
install -D -m755 etc/slurm.epilog.clean ${RPM_BUILD_ROOT}%{_sysconfdir}/%{name}/slurm.epilog.clean
install -D -m755 contribs/sgather/sgather ${RPM_BUILD_ROOT}%{_bindir}/sgather
install -D -m755 contribs/sjstat ${RPM_BUILD_ROOT}%{_bindir}/sjstat
%if 0%{?OHPC_BUILD}
# 6/16/15 karl.w.schulz@intel.com - do not package Slurm's version of libpmi with OpenHPC.
rm -f $RPM_BUILD_ROOT/%{_libdir}/libpmi*
rm -f $RPM_BUILD_ROOT/%{_libdir}/mpi_pmi2*
# 9/8/14 karl.w.schulz@intel.com - provide starting config file
head -n -2 $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf.example | grep -v ReturnToService > $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "# OpenHPC default configuration" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "PropagateResourceLimitsExcept=MEMLOCK" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "SlurmdLogFile=/var/log/slurm.log" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "SlurmctldLogFile=/var/log/slurmctld.log" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "Epilog=/etc/slurm/slurm.epilog.clean" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "NodeName=c[1-4] Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
echo "PartitionName=normal Nodes=c[1-4] Default=YES MaxTime=24:00:00 State=UP" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
# 6/3/16 nirmalasrjn@gmail.com - Adding ReturnToService Directive to starting config file (note removal of variable during above creation)
echo "ReturnToService=1" >> $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.conf
# 9/17/14 karl.w.schulz@intel.com - Add option to drop VM cache during epilog
sed -i '/^# No other SLURM jobs,/i \\n# Drop clean caches (OpenHPC)\necho 3 > /proc/sys/vm/drop_caches\n\n#' $RPM_BUILD_ROOT/%{_sysconfdir}/%{name}/slurm.epilog.clean
%endif
# Delete unpackaged files:
rm -rf $RPM_BUILD_ROOT/%{_libdir}/slurm/*.{a,la} \
@ -276,66 +321,103 @@ rm -f $RPM_BUILD_ROOT%{_mandir}/man1/srun_cr* \
$RPM_BUILD_ROOT%{_bindir}/srun_cr \
$RPM_BUILD_ROOT%{_libexecdir}/slurm/cr_*
mkdir -p $RPM_BUILD_ROOT%{perl_vendorarch}
mv $RPM_BUILD_ROOT%{perl_sitearch}/* $RPM_BUILD_ROOT%{perl_vendorarch}
%perl_process_packlist
# Delete unpackaged files:
test -s $RPM_BUILD_ROOT/%{_perldir}/auto/Slurm/Slurm.bs ||
rm -f $RPM_BUILD_ROOT/%{_perldir}/auto/Slurm/Slurm.bs
test -s $RPM_BUILD_ROOT/%{_perldir}/auto/Slurmdb/Slurmdb.bs ||
rm -f $RPM_BUILD_ROOT/%{_perldir}/auto/Slurmdb/Slurmdb.bs
rm doc/html/shtml2html.py doc/html/Makefile*
%{__rm} -f %{buildroot}/%{perl_archlib}/perllocal.pod
%{__rm} -f %{buildroot}/%{perl_vendorarch}/auto/Slurm/.packlist
%{__rm} -f %{buildroot}/%{perl_vendorarch}/auto/Slurmdb/.packlist
%{__mv} %{buildroot}/%{perl_sitearch}/config.slurmdb.pl %{buildroot}/%{perl_vendorarch}
# Build man pages that are generated directly by the tools
rm -f $RPM_BUILD_ROOT/%{_mandir}/man1/sjobexitmod.1
${RPM_BUILD_ROOT}%{_bindir}/sjobexitmod --roff > $RPM_BUILD_ROOT/%{_mandir}/man1/sjobexitmod.1
rm -f $RPM_BUILD_ROOT/%{_mandir}/man1/sjstat.1
${RPM_BUILD_ROOT}%{_bindir}/sjstat --roff > $RPM_BUILD_ROOT/%{_mandir}/man1/sjstat.1
# rpmlint reports wrong end of line for those files
sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qrerun
sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
mkdir -p $RPM_BUILD_ROOT/etc/ld.so.conf.d
echo '%{_libdir}
%{_libdir}/slurm' > $RPM_BUILD_ROOT/etc/ld.so.conf.d/slurm.conf
chmod 644 $RPM_BUILD_ROOT/etc/ld.so.conf.d/slurm.conf
# Make pkg-config file
mkdir -p $RPM_BUILD_ROOT/%{_libdir}/pkgconfig
cat > $RPM_BUILD_ROOT/%{_libdir}/pkgconfig/slurm.pc <<EOF
includedir=%{_prefix}/include
libdir=%{_libdir}
Cflags: -I\${includedir}
Libs: -L\${libdir} -lslurm
Description: Slurm API
Name: %{pname}
Version: %{version}
EOF
%fdupes -s $RPM_BUILD_ROOT
%if %{with_systemd}
%if 0%{?with_systemd}
%pre
%service_add_pre slurm.service
%service_add_pre slurmd.service
%service_add_pre slurmctld.service
%endif
%post
%if %{with_systemd}
%service_add_post slurm.service
%if 0%{?with_systemd}
%service_add_post slurmd.service
%service_add_post slurmctld.service
%else
%fillup_and_insserv slurm
%endif
%preun
%if %{with_systemd}
%service_del_preun slurm.service
%if 0%{?with_systemd}
%service_del_preun slurmd.service
%service_del_preun slurmctld.service
%else
%stop_on_removal slurm
%stop_on_removal slurmd
%endif
%postun
%if %{with_systemd}
%service_del_postun slurm.service
%if 0%{?with_systemd}
%service_del_postun slurmd.service
%service_del_postun slurmctld.service
%else
%restart_on_update slurm
%restart_on_update slurmd
%insserv_cleanup
%endif
%if %{with_systemd}
%if 0%{?with_systemd}
%pre slurmdbd
%service_add_pre slurmdbd.service
%endif
%post slurmdbd
%if %{with_systemd}
%if 0%{?with_systemd}
%service_add_post slurmdbd.service
%else
%fillup_and_insserv slurmdbd
%endif
%preun slurmdbd
%if %{with_systemd}
%if 0%{?with_systemd}
%service_del_preun slurmdbd.service
%else
%stop_on_removal slurmdbd
%endif
%postun slurmdbd
%if %{with_systemd}
%if 0%{?with_systemd}
%service_del_postun slurmdbd.service
%else
%restart_on_update slurmdbd
@ -396,6 +478,8 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%{_mandir}/man1/sstat.1*
%{_mandir}/man1/strigger.1*
%{_mandir}/man1/sh5util.1*
%{_mandir}/man1/sjobexitmod.1.*
%{_mandir}/man1/sjstat.1.*
%{_mandir}/man5/acct_gather.conf.*
%{_mandir}/man5/burst_buffer.conf.*
%{_mandir}/man5/ext_sensors.conf.*
@ -405,6 +489,7 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%{_mandir}/man5/gres.*
%{_mandir}/man5/nonstop.conf.5.*
%{_mandir}/man5/topology.*
%{_mandir}/man5/knl.conf.5.*
%{_mandir}/man8/slurmctld.*
%{_mandir}/man8/slurmd.*
%{_mandir}/man8/slurmstepd*
@ -412,17 +497,33 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%dir %{_libdir}/slurm/src
%dir %{_sysconfdir}/%{name}
%config(noreplace) %{_sysconfdir}/%{name}/slurm.conf
%{?OHPC_BUILD:%config %{_sysconfdir}/%{name}/slurm.conf.example}
%config(noreplace) %{_sysconfdir}/%{name}/cgroup.conf
%config(noreplace) %{_sysconfdir}/%{name}/cgroup_allowed_devices_file.conf
%config(noreplace) %{_sysconfdir}/%{name}/slurm.epilog.clean
%dir %{_sysconfdir}/%{name}/cgroup
%config(noreplace) %{_sysconfdir}/%{name}/cgroup/release_common
%if %{with_systemd}
%{_unitdir}/slurm.service
%config(noreplace) %{_sysconfdir}/%{name}/cgroup/release_*
%if 0%{?with_systemd}
%{_unitdir}/slurmd.service
%{_unitdir}/slurmctld.service
%{_sbindir}/rcslurmd
%else
%{_initrddir}/slurm
%endif
%{_sbindir}/rcslurm
%endif
%{?with_systemd:%{_sbindir}/rcslurmctld}
%files openlava
%defattr(-,root,root)
%{_bindir}/bjobs
%{_bindir}/bkill
%{_bindir}/bsub
%{_bindir}/lsid
%files seff
%defattr(-,root,root)
%{_bindir}/seff
%{_bindir}/smail
%files doc
%defattr(-,root,root)
@ -436,12 +537,13 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%files devel
%defattr(-,root,root)
%{_prefix}/include/slurm
%{_libdir}/libpmi.so
%{_libdir}/libpmi2.so
%{!?OHPC_BUILD:%{_libdir}/libpmi.so}
%{!?OHPC_BUILD:%{_libdir}/libpmi2.so}
%{_libdir}/libslurm.so
%{_libdir}/libslurmdb.so
%{_libdir}/slurm/src/*
%{_mandir}/man3/slurm_*
%{_libdir}/pkgconfig/slurm.pc
%files sview
%defattr(-,root,root)
@ -470,9 +572,6 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%{perl_vendorarch}/Slurmdb.pm
%{perl_vendorarch}/auto/Slurmdb
%{_mandir}/man3/Slurm*.3pm.*
%if 0%{?suse_version} <= 1110
/var/adm/perl-modules/slurm
%endif
%files slurmdbd
%defattr(-,root,root)
@ -480,7 +579,8 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%{_mandir}/man5/slurmdbd.*
%{_mandir}/man8/slurmdbd.*
%config(noreplace) %{_sysconfdir}/%{name}/slurmdbd.conf
%if %{with_systemd}
%{_sysconfdir}/%{name}/slurmdbd.conf.example
%if 0%{?with_systemd}
%config %{_unitdir}/slurmdbd.service
%else
%{_initrddir}/slurmdbd
@ -489,6 +589,7 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%files plugins
%defattr(-,root,root)
%{_sysconfdir}/ld.so.conf.d/slurm.conf
%dir %{_libdir}/slurm
%{_libdir}/slurm/accounting_storage_filetxt.so
%{_libdir}/slurm/accounting_storage_none.so
@ -565,7 +666,7 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%{_libdir}/slurm/gres_mic.so
%{_libdir}/slurm/gres_nic.so
%{_libdir}/slurm/job_submit_all_partitions.so
%{_libdir}/slurm/job_submit_cnode.so
#%%{_libdir}/slurm/job_submit_cnode.so
%{_libdir}/slurm/job_submit_defaults.so
%{_libdir}/slurm/job_submit_logging.so
%{_libdir}/slurm/job_submit_partition.so
@ -579,6 +680,9 @@ sed -i 's/\r$//' $RPM_BUILD_ROOT%{_bindir}/qalter
%{_libdir}/slurm/select_serial.so
%{_libdir}/slurm/task_cgroup.so
%{_libdir}/slurm/topology_node_rank.so
%{_libdir}/slurm/mcs_group.so
%{_libdir}/slurm/mcs_none.so
%{_libdir}/slurm/mcs_user.so
%files torque
%defattr(-,root,root)

@ -0,0 +1,45 @@
From: Egbert Eich <eich@suse.com>
Date: Sun Oct 16 09:07:46 2016 +0200
Subject: slurmd: Fix slurmd for new API in hwloc-2.0
Git-repo: https://github.com/SchedMD/slurm
Git-commit: 2e431ed7fdf7a57c7ce1b5f3d3a8bbedaf94a51d
References:
The API of hwloc has changed considerably for version 2.0.
For a summary check:
https://github.com/open-mpi/hwloc/wiki/Upgrading-to-v2.0-API
Test for the API version to support both the old and new API.
Signed-off-by: Egbert Eich <eich@suse.com>
Signed-off-by: Egbert Eich <eich@suse.de>
---
src/slurmd/common/xcpuinfo.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/src/slurmd/common/xcpuinfo.c b/src/slurmd/common/xcpuinfo.c
index 4eec6cb..22a47d5 100644
--- a/src/slurmd/common/xcpuinfo.c
+++ b/src/slurmd/common/xcpuinfo.c
@@ -212,8 +212,23 @@ get_cpuinfo(uint16_t *p_cpus, uint16_t *p_boards,
hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM);
/* ignores cache, misc */
+#if HWLOC_API_VERSION < 0x00020000
hwloc_topology_ignore_type (topology, HWLOC_OBJ_CACHE);
hwloc_topology_ignore_type (topology, HWLOC_OBJ_MISC);
+#else
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L1CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L2CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L3CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L4CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_L5CACHE,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+ hwloc_topology_set_type_filter(topology,HWLOC_OBJ_MISC,
+ HWLOC_TYPE_FILTER_KEEP_NONE);
+#endif
/* load topology */
debug2("hwloc_topology_load");