xen/579730e6-remove-buggy-initial-placement-algorithm.patch
Charles Arnold a89d75605e
- bsc#970135 - new virtualization project clock test randomly fails
  on Xen
  576001df-x86-time-use-local-stamp-in-TSC-calibration-fast-path.patch
  5769106e-x86-generate-assembler-equates-for-synthesized.patch
  57a1e603-x86-time-adjust-local-system-time-initialization.patch
  57a1e64c-x86-time-introduce-and-use-rdtsc_ordered.patch
  57a2f6ac-x86-time-calibrate-TSC-against-platform-timer.patch
- bsc#991934 - xen hypervisor crash in csched_acct
  57973099-have-schedulers-revise-initial-placement.patch
  579730e6-remove-buggy-initial-placement-algorithm.patch
- bsc#988675 - VUL-0: CVE-2016-6258: xen: x86: Privilege escalation
  in PV guests (XSA-182)
  57976073-x86-remove-unsafe-bits-from-mod_lN_entry-fastpath.patch
- bsc#988676 - VUL-0: CVE-2016-6259: xen: x86: Missing SMAP
  whitelisting in 32-bit exception / event delivery (XSA-183)
  57976078-x86-avoid-SMAP-violation-in-compat_create_bounce_frame.patch
- Upstream patches from Jan
  57a30261-x86-support-newer-Intel-CPU-models.patch

- bsc#985503 - vif-route broken
  vif-route.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=445
2016-08-04 19:26:11 +00:00

References: bsc#991934
# Commit d5438accceecc8172db2d37d98b695eb8bc43afc
# Date 2016-07-26 10:44:06 +0100
# Author George Dunlap <george.dunlap@citrix.com>
# Committer George Dunlap <george.dunlap@citrix.com>
xen: Remove buggy initial placement algorithm

The initial placement algorithm sometimes picks cpus outside of the
mask it's given, does a lot of unnecessary bitmasking, does its own
separate load calculation, and completely ignores vcpu hard and soft
affinities. Just get rid of it and rely on the schedulers to do
initial placement.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
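
To make the out-of-mask failure concrete, here is a rough standalone
model of the removed helper (hypothetical userspace code, not part of
the upstream patch; first_cpu() and the plain-bitmask layout are
simplified stand-ins for Xen's cpumask API):

/*
 * Hypothetical reduction of default_vcpu0_location(): bit i of an
 * unsigned int stands for CPU i.
 */
#include <stdio.h>

#define NR_CPUS 4

static unsigned int first_cpu(unsigned int mask)
{
    unsigned int i;

    for ( i = 0; i < NR_CPUS; i++ )
        if ( mask & (1u << i) )
            return i;
    return NR_CPUS;
}

int main(void)
{
    /* Caller offers CPUs 2 and 3 only; {0,1} and {2,3} are HT pairs. */
    unsigned int online = 0x0c;
    unsigned int sibling[NR_CPUS] = { 0x03, 0x03, 0x0c, 0x0c };
    unsigned int cnt[NR_CPUS] = { 0, 0, 5, 5 }; /* vcpus per CPU */
    unsigned int exclude, cpu, i;

    /* cpu is seeded from CPU 0's siblings; "online" is never consulted. */
    exclude = sibling[0];
    cpu = first_cpu(exclude);              /* cpu = 0              */
    i = first_cpu(exclude & ~(1u << cpu)); /* favour the sibling   */
    if ( i < NR_CPUS )
        cpu = i;                           /* cpu = 1: offline CPU */

    for ( i = 0; i < NR_CPUS; i++ )
    {
        if ( !(online & (1u << i)) || (exclude & (1u << i)) )
            continue;
        /* Skip primary hyperthreads that have a sibling. */
        if ( i == first_cpu(sibling[i]) && (sibling[i] & ~(1u << i)) )
            continue;
        exclude |= sibling[i];
        if ( cnt[i] <= cnt[cpu] ) /* 5 <= 0 is false, so never taken */
            cpu = i;
    }

    printf("picked cpu %u\n", cpu); /* prints 1, outside the 0x0c mask */
    return 0;
}

With these inputs the model returns CPU 1 even though the caller only
offered CPUs 2 and 3: the seed comes from CPU 0's sibling mask, and the
tie-break cnt[i] <= cnt[cpu] can never fire because the out-of-mask
seed has a zero vcpu count.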
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -217,54 +217,6 @@ void getdomaininfo(struct domain *d, str
     memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t));
 }
 
-static unsigned int default_vcpu0_location(cpumask_t *online)
-{
-    struct domain *d;
-    struct vcpu *v;
-    unsigned int i, cpu, nr_cpus, *cnt;
-    cpumask_t cpu_exclude_map;
-
-    /* Do an initial CPU placement. Pick the least-populated CPU. */
-    nr_cpus = cpumask_last(&cpu_online_map) + 1;
-    cnt = xzalloc_array(unsigned int, nr_cpus);
-    if ( cnt )
-    {
-        rcu_read_lock(&domlist_read_lock);
-        for_each_domain ( d )
-            for_each_vcpu ( d, v )
-                if ( !(v->pause_flags & VPF_down)
-                     && ((cpu = v->processor) < nr_cpus) )
-                    cnt[cpu]++;
-        rcu_read_unlock(&domlist_read_lock);
-    }
-
-    /*
-     * If we're on a HT system, we only auto-allocate to a non-primary HT. We
-     * favour high numbered CPUs in the event of a tie.
-     */
-    cpumask_copy(&cpu_exclude_map, per_cpu(cpu_sibling_mask, 0));
-    cpu = cpumask_first(&cpu_exclude_map);
-    i = cpumask_next(cpu, &cpu_exclude_map);
-    if ( i < nr_cpu_ids )
-        cpu = i;
-    for_each_cpu(i, online)
-    {
-        if ( cpumask_test_cpu(i, &cpu_exclude_map) )
-            continue;
-        if ( (i == cpumask_first(per_cpu(cpu_sibling_mask, i))) &&
-             (cpumask_next(i, per_cpu(cpu_sibling_mask, i)) < nr_cpu_ids) )
-            continue;
-        cpumask_or(&cpu_exclude_map, &cpu_exclude_map,
-                   per_cpu(cpu_sibling_mask, i));
-        if ( !cnt || cnt[i] <= cnt[cpu] )
-            cpu = i;
-    }
-
-    xfree(cnt);
-
-    return cpu;
-}
-
 bool_t domctl_lock_acquire(void)
 {
     /*
@@ -691,7 +643,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
                 continue;
 
             cpu = (i == 0) ?
-                default_vcpu0_location(online) :
+                cpumask_any(online) :
                 cpumask_cycle(d->vcpu[i-1]->processor, online);
 
             if ( alloc_vcpu(d, i, cpu) == NULL )
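
After the change every vcpu is placed the same way: vcpu0 gets an
arbitrary online CPU from cpumask_any(), and each later vcpu takes the
online CPU following its predecessor via cpumask_cycle(), leaving the
real decision to the schedulers (see
57973099-have-schedulers-revise-initial-placement.patch above). A rough
standalone model of that seeding, assuming bitmask CPUs and a
deterministic cycle_cpu() in place of Xen's cpumask helpers (the real
cpumask_any() picks pseudo-randomly):

#include <stdio.h>

#define NR_CPUS 8

/* Next set bit after "prev" in "mask", wrapping around. */
static unsigned int cycle_cpu(unsigned int prev, unsigned int mask)
{
    unsigned int i;

    for ( i = 1; i <= NR_CPUS; i++ )
    {
        unsigned int c = (prev + i) % NR_CPUS;

        if ( mask & (1u << c) )
            return c;
    }
    return NR_CPUS; /* empty mask */
}

int main(void)
{
    unsigned int online = 0x2d; /* CPUs 0, 2, 3 and 5 are online */
    unsigned int processor[4];
    unsigned int i;

    for ( i = 0; i < 4; i++ )
    {
        /* vcpu0: any online CPU (modelled as the first); others cycle. */
        processor[i] = (i == 0)
            ? cycle_cpu(NR_CPUS - 1, online)
            : cycle_cpu(processor[i - 1], online);
        printf("vcpu%u -> cpu%u\n", i, processor[i]);
    }
    return 0;
}

With online = {0,2,3,5} this prints vcpu0->cpu0, vcpu1->cpu2,
vcpu2->cpu3, vcpu3->cpu5; any imbalance left by this simple seeding is
for the schedulers to correct once the vcpus run.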