Accepting request 441490 from home:eeich:branches:network:cluster
- Fix build with and without OHCP_BUILD define.
- Fix build for systemd and non-systemd.
- Updated to 16.05.5, equivalent to OpenHPC 1.2.
* Fix issue with resizing jobs and limits not being tracked correctly.
* BGQ - Remove redeclaration of job_read_lock.
* BGQ - Tighter locks around structures when nodes/cables change state.
* Make it possible to change CPUsPerTask with scontrol.
* Make it so scontrol update part qos= will take away a partition QOS from
a partition.
* Backfill scheduling properly synchronized with Cray Node Health Check.
Prior logic could result in highest priority job getting improperly
postponed.
* Make it so daemons also support TopologyParam=NoInAddrAny.
* If scancel is operating on large number of jobs and RPC responses from
slurmctld daemon are slow then introduce a delay in sending the cancel job
requests from scancel in order to reduce load on slurmctld.
* Remove redundant logic when updating a job's task count.
* MySQL - Fix querying jobs with reservations when the id's have rolled.
* Perl - Fix use of uninitialized variable in slurm_job_step_get_pids.
* Launch batch job requesting --reboot after the boot completes.
* Do not attempt to power down a node which has never responded if the
slurmctld daemon restarts without state.
* Fix for possible slurmstepd segfault on invalid user ID.
* MySQL - Fix for possible race condition when archiving multiple clusters
at the same time.
* Add logic so that slurmstepd can be launched under valgrind.
* Increase buffer size to read /proc/*/stat files.
* Remove the SchedulerParameters option of "assoc_limit_continue", making it
the default value. Add option of "assoc_limit_stop". If "assoc_limit_stop"
is set and a job cannot start due to association limits, then do not attempt
to initiate any lower priority jobs in that partition. Setting this can
decrease system throughput and utilization, but avoid potentially starving
larger jobs by preventing them from launching indefinitely.
* Update a node's socket and cores per socket counts as needed after a node
boot to reflect configuration changes which can occur on KNL processors.
Note that the node's total core count must not change, only the distribution
of cores across varying socket counts (KNL NUMA nodes treated as sockets by
Slurm).
* Rename partition configuration from "Shared" to "OverSubscribe". Rename
salloc, sbatch, srun option from "--shared" to "--oversubscribe". The old
options will continue to function. Output field names also changed in
scontrol, sinfo, squeue and sview.
* Add SLURM_UMASK environment variable to user job.
* knl_conf: Added new configuration parameter of CapmcPollFreq.
* Cleanup two minor Coverity warnings.
* Make it so the tres units in a job's formatted string are converted like
they are in a step.
* Correct partition's MaxCPUsPerNode enforcement when nodes are shared by
multiple partitions.
* node_feature/knl_cray - Prevent slurmctld GRES errors for "hbm" references.
* Display thread name instead of thread id and remove process name in stderr
logging for "thread_id" LogTimeFormat.
* Log IP address of bad incoming message to slurmctld.
* If a user requests tasks, nodes and ntasks-per-node, and
tasks/nodes != ntasks-per-node, print a warning and ignore ntasks-per-node.
* Release CPU "owner" file locks.
* Update seff to fix warnings with ncpus, and list slurm-perlapi dependency
in spec file.
* Allow QOS timelimit to override partition timelimit when EnforcePartLimits
is set to all/any.
* Make it so qsub will do a "basename" on a wrapped command for the output
and error files.
* Prevent job stuck in configuring state if slurmctld daemon restarted while
PrologSlurmctld is running. Also re-issue burst_buffer/pre-load operation
as needed.
* Move test for job wait reason value of BurstBufferResources and
BurstBufferStageIn later in the scheduling logic.
* Document which srun options apply to only job, only step, or job and step
allocations.
* Use more compatible function to get thread name (>= 2.6.11).
* Make it so the extern step uses a reverse tree when cleaning up.
* If extern step doesn't get added into the proctrack plugin make sure the
sleep is killed.
* Add web links to Slurm Diamond Collectors (from Harvard University) and
collectd (from EDF).
* Add job_submit plugin for the "reboot" field.
* Make some more Slurm constants (INFINITE, NO_VAL64, etc.) available to
job_submit/lua plugins.
* Send in a -1 for a taskid into spank_task_post_fork for the extern_step.
* MySQL - Slightly better logic if a job completion comes in with an end time
of 0.
* If the task/cgroup plugin is configured with ConstrainRAMSpace=yes, then set
the soft memory limit to the allocated memory limit (previously no soft limit
was set).
* Streamline when schedule() is called when running with message aggregation
on batch script completes.
* Fix incorrect casting when [un]packing derived_ec on slurmdb_job_rec_t.
* Document that persistent burst buffers can not be created or destroyed using
the salloc or srun --bb options.
* Add support for setting the SLURM_JOB_ACCOUNT, SLURM_JOB_QOS and
SLURM_JOB_RESERVATION environment variables for the salloc command.
Document the same environment variables for the salloc, sbatch and srun
commands in their man pages.
* Fix issue where sacctmgr load cluster.cfg wouldn't load associations
that had a partition in them.
* Don't return the extern step from sstat by default.
* In sstat print 'extern' instead of 4294967295 for the extern step.
* Make advanced reservations work properly with core specialization.
* slurmstepd modified to pre-load all relevant plugins at startup to avoid
the possibility of modified plugins later resulting in inconsistent API
or data structures and a failure of slurmstepd.
* Export functions from parse_time.c in libslurm.so.
* Export unit convert functions from slurm_protocol_api.c in libslurm.so.
* Fix scancel to allow multiple steps from a job to be cancelled at once.
* Update and expand upgrade guide (in Quick Start Administrator web page).
* burst_buffer/cray: Requeue, but do not hold a job which fails the pre_run
operation.
* Ensure reported expected job start time is not in the past for pending jobs.
* Add support for PMIx v2.
OBS-URL: https://build.opensuse.org/request/show/441490
OBS-URL: https://build.opensuse.org/package/show/network:cluster/slurm?expand=0&rev=12
2016-11-24 23:01:51 +01:00
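The "assoc_limit_stop" entry in the list above corresponds to a one-line
scheduler setting. A minimal slurm.conf sketch, showing only the parameter
named in that entry (all other configuration omitted):

```
# slurm.conf (sketch): when a job is held back by association limits,
# do not start lower-priority jobs in the same partition
SchedulerParameters=assoc_limit_stop
```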
From: Egbert Eich <eich@suse.com>
Date: Sun Oct 16 09:07:46 2016 +0200
Subject: slurmd: Fix slurmd for new API in hwloc-2.0
Git-repo: https://github.com/SchedMD/slurm
Git-commit: 2e431ed7fdf7a57c7ce1b5f3d3a8bbedaf94a51d
References:

The API of hwloc has changed considerably for version 2.0.
For a summary check:
https://github.com/open-mpi/hwloc/wiki/Upgrading-to-v2.0-API

Test for the API version to support both the old and new API.

Signed-off-by: Egbert Eich <eich@suse.com>
Signed-off-by: Egbert Eich <eich@suse.de>
---
 src/slurmd/common/xcpuinfo.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/src/slurmd/common/xcpuinfo.c b/src/slurmd/common/xcpuinfo.c
index 4eec6cb..22a47d5 100644
--- a/src/slurmd/common/xcpuinfo.c
+++ b/src/slurmd/common/xcpuinfo.c
@@ -212,8 +212,23 @@ get_cpuinfo(uint16_t *p_cpus, uint16_t *p_boards,
 	hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM);
 
 	/* ignores cache, misc */
+#if HWLOC_API_VERSION < 0x00020000
 	hwloc_topology_ignore_type (topology, HWLOC_OBJ_CACHE);
 	hwloc_topology_ignore_type (topology, HWLOC_OBJ_MISC);
+#else
+	hwloc_topology_set_type_filter(topology, HWLOC_OBJ_L1CACHE,
+				       HWLOC_TYPE_FILTER_KEEP_NONE);
+	hwloc_topology_set_type_filter(topology, HWLOC_OBJ_L2CACHE,
+				       HWLOC_TYPE_FILTER_KEEP_NONE);
+	hwloc_topology_set_type_filter(topology, HWLOC_OBJ_L3CACHE,
+				       HWLOC_TYPE_FILTER_KEEP_NONE);
+	hwloc_topology_set_type_filter(topology, HWLOC_OBJ_L4CACHE,
+				       HWLOC_TYPE_FILTER_KEEP_NONE);
+	hwloc_topology_set_type_filter(topology, HWLOC_OBJ_L5CACHE,
+				       HWLOC_TYPE_FILTER_KEEP_NONE);
+	hwloc_topology_set_type_filter(topology, HWLOC_OBJ_MISC,
+				       HWLOC_TYPE_FILTER_KEEP_NONE);
+#endif
 
 	/* load topology */
 	debug2("hwloc_topology_load");