- version 2.5.7
* Fix linking to the select/cray plugin so it does not give a
  warning about an undefined variable.
* Add missing symbols to xlator.h.
* Avoid placing pending jobs in AdminHold state due to backfill
scheduler interactions with advanced reservation.
* Accounting - Compute averages per task rather than per CPU.
* POE - Correct logic to support poe option "-euidevice sn_all"
and "-euidevice sn_single".
* Accounting - Fix minor initialization error.
* POE - Correct logic to support the srun network instance count
  with POE.
* POE - With the srun --launch-cmd option, report proper task
count when the --cpus-per-task option is used without the
--ntasks option.
* POE - Fix logic binding tasks to CPUs.
* sview - Fix race condition where new information could slip
  past the node tab unnoticed.
* Accounting - Fix an invalid memory read when slurmctld sends
  job start data to slurmdbd.
* If a prolog or epilog failure occurs, drain the node rather
than setting it down and killing all of its jobs.
* Priority/multifactor - Avoid underflow in half-life calculation.
* POE - Pack a missing variable to allow fanout to more than 32
  nodes.
* Prevent clearing reason field for pending jobs. This bug was
introduced in v2.5.5 (see "Reject job at submit time ...").
* BGQ - Fix issue with preemption on sub-block jobs where a job
would kill all preemptable jobs on the midplane instead of just
the ones it needed to.
OBS-URL: https://build.opensuse.org/request/show/177944
OBS-URL: https://build.opensuse.org/package/show/network:cluster/slurm?expand=0&rev=3
- version 2.5.4
* Support for Intel® Many Integrated Core (MIC) processors.
* User control over CPU frequency of each job step.
* Recording power usage information for each job.
* Advanced reservation of cores rather than whole nodes.
* Integration with IBM's Parallel Environment including POE (Parallel Operating Environment) and NRT (Network Resource Table) API.
* Highly optimized throughput for serial jobs in a new "select/serial" plugin.
* CPU load information is available for each node.
* Configurable number of CPUs available to jobs in each SLURM partition, which provides a mechanism to reserve CPUs for use with GPUs (see the sketch after this list).
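A minimal sketch of how the last two features above might be used,
assuming a hypothetical "gpu" partition on nodes tux[01-04] (the
partition name, node names and CPU count are illustrative only, not
taken from this changelog):
  # slurm.conf: cap the CPUs jobs may use on each node of this
  # partition, keeping the remaining CPUs free to drive the GPUs
  PartitionName=gpu Nodes=tux[01-04] MaxCPUsPerNode=14
  # report the CPU load recorded for each node (%O = CPULoad)
  sinfo -o "%N %O"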
OBS-URL: https://build.opensuse.org/request/show/163479
OBS-URL: https://build.opensuse.org/package/show/network:cluster/slurm?expand=0&rev=2