9621add6e3
support qcow2, so blktap is needed to support domains with 'tap:qcow2'
  disk configurations.
  modified tmp-initscript-modprobe.patch
- bnc#809203 - xen.efi isn't signed with SUSE Secure Boot key
  xen.spec
- Fix adding managed PCI device to an inactive domain
  modified xen-managed-pci-device.patch
- bnc#805094 - xen hot plug attach/detach fails
  modified blktap-pv-cdrom.patch
- bnc#802690 - domain locking can prevent a live migration from completing
  modified xend-domain-lock.patch
- bnc#797014 - no way to control live migrations
  26675-tools-xentoollog_update_tty_detection_in_stdiostream_progress.patch
  xen.migrate.tools-xc_print_messages_from_xc_save_with_xc_report.patch
  xen.migrate.tools-xc_document_printf_calls_in_xc_restore.patch
  xen.migrate.tools-xc_rework_xc_save.cswitch_qemu_logdirty.patch
  xen.migrate.tools_set_migration_constraints_from_cmdline.patch
  xen.migrate.tools_add_xm_migrate_--log_progress_option.patch
- Upstream patches from Jan
  26585-x86-mm-Take-the-p2m-lock-even-in-shadow-mode.patch
  26595-x86-nhvm-properly-clean-up-after-failure-to-set-up-all-vCPU-s.patch
  26601-honor-ACPI-v4-FADT-flags.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=232
# Commit e6a6fd63652814e5c36a0016c082032f798ced1f
# Date 2013-03-04 10:17:52 +0100
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
SEDF: avoid gathering vCPU-s on pCPU0

The introduction of vcpu_force_reschedule() in 14320:215b799fa181 was
incompatible with the SEDF scheduler: Any vCPU using
VCPUOP_stop_periodic_timer (e.g. any vCPU of half way modern PV Linux
guests) ends up on pCPU0 after that call. Obviously, running all PV
guests' (and namely Dom0's) vCPU-s on pCPU0 causes problems for those
guests rather sooner than later.

So the main thing that was clearly wrong (and bogus from the beginning)
was the use of cpumask_first() in sedf_pick_cpu(). It is being replaced
by a construct that prefers to put back the vCPU on the pCPU that it
got launched on.

However, there's one more glitch: When reducing the affinity of a vCPU
temporarily, and then widening it again to a set that includes the pCPU
that the vCPU was last running on, the generic scheduler code would not
force a migration of that vCPU, and hence it would forever stay on the
pCPU it last ran on. Since that can again create a load imbalance, the
SEDF scheduler wants a migration to happen regardless of it being
apparently unnecessary.

Of course, an alternative to checking for SEDF explicitly in
vcpu_set_affinity() would be to introduce a flags field in struct
scheduler, and have SEDF set a "always-migrate-on-affinity-change"
flag.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
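
[Editor's note] For readers unfamiliar with the cpumask helpers, here is a
minimal standalone sketch (toy code, not part of this patch or of Xen) of what
the sedf_pick_cpu() change achieves: cpumask_first() always returns the lowest
set bit, so every vCPU is placed on pCPU0, whereas cycling from an offset
derived from the vCPU id spreads vCPUs over the online-and-affine pCPUs. The
cpumask_t typedef and the helpers below are simplified stand-ins merely named
after the Xen API, and the example mask 0xF (pCPUs 0-3) is an arbitrary
assumption.

/* Toy model, not Xen code: a cpumask is a plain bitmask of pCPUs. */
#include <stdio.h>

typedef unsigned long cpumask_t;              /* bit i set => pCPU i usable */

static int cpumask_weight(cpumask_t m)        /* number of set bits */
{
    return __builtin_popcountl(m);
}

static int cpumask_first(cpumask_t m)         /* lowest set bit (old behaviour) */
{
    return __builtin_ctzl(m);
}

static int cpumask_cycle(int n, cpumask_t m)  /* next set bit after n, wrapping */
{
    for ( int i = 1; i <= (int)(8 * sizeof(m)); i++ )
    {
        int cpu = (n + i) % (int)(8 * sizeof(m));
        if ( m & (1UL << cpu) )
            return cpu;
    }
    return -1;                                /* empty mask */
}

int main(void)
{
    cpumask_t online_affinity = 0xF;          /* example: pCPUs 0-3 online and affine */

    for ( int vcpu_id = 0; vcpu_id < 6; vcpu_id++ )
        printf("vCPU%d: cpumask_first -> pCPU%d, cpumask_cycle -> pCPU%d\n",
               vcpu_id,
               cpumask_first(online_affinity),
               cpumask_cycle(vcpu_id % cpumask_weight(online_affinity) - 1,
                             online_affinity));
    return 0;
}

With the old pick every vCPU in the loop lands on pCPU0; with the new
expression vCPU0..vCPU5 are distributed round-robin over pCPUs 0-3.
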
--- a/xen/common/sched_sedf.c
+++ b/xen/common/sched_sedf.c
@@ -396,7 +396,8 @@ static int sedf_pick_cpu(const struct sc
 
     online = cpupool_scheduler_cpumask(v->domain->cpupool);
     cpumask_and(&online_affinity, v->cpu_affinity, online);
-    return cpumask_first(&online_affinity);
+    return cpumask_cycle(v->vcpu_id % cpumask_weight(&online_affinity) - 1,
+                         &online_affinity);
 }
 
 /*
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -611,7 +611,8 @@ int vcpu_set_affinity(struct vcpu *v, co
     vcpu_schedule_lock_irq(v);
 
     cpumask_copy(v->cpu_affinity, affinity);
-    if ( !cpumask_test_cpu(v->processor, v->cpu_affinity) )
+    if ( VCPU2OP(v)->sched_id == XEN_SCHEDULER_SEDF ||
+         !cpumask_test_cpu(v->processor, v->cpu_affinity) )
         set_bit(_VPF_migrating, &v->pause_flags);
 
     vcpu_schedule_unlock_irq(v);
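
[Editor's note] Similarly, a toy illustration (again not Xen code) of the
vcpu_set_affinity() hunk above: with the old condition, a vCPU whose affinity
is narrowed to its current pCPU and later widened again is never flagged for
migration, because its current pCPU is still in the new mask; the SEDF-specific
check requests a migration regardless. The numeric value used for
XEN_SCHEDULER_SEDF and the helper names below are assumptions for this
standalone example.

/* Toy model of the vcpu_set_affinity() condition change; not Xen code. */
#include <stdbool.h>
#include <stdio.h>

#define XEN_SCHEDULER_SEDF 4   /* assumed value for this standalone example */

/* Old check: migrate only if the current pCPU dropped out of the affinity. */
static bool old_wants_migration(int processor, unsigned long affinity)
{
    return !(affinity & (1UL << processor));
}

/* New check: SEDF always requests a migration so load can be re-balanced. */
static bool new_wants_migration(int sched_id, int processor, unsigned long affinity)
{
    return sched_id == XEN_SCHEDULER_SEDF ||
           !(affinity & (1UL << processor));
}

int main(void)
{
    /* vCPU currently on pCPU3; its affinity was narrowed to {3} and is now
     * widened back to {0,1,2,3}. */
    int processor = 3;
    unsigned long widened_affinity = 0xF;

    printf("old condition -> migrate=%d, new condition (SEDF) -> migrate=%d\n",
           old_wants_migration(processor, widened_affinity),
           new_wants_migration(XEN_SCHEDULER_SEDF, processor, widened_affinity));
    return 0;
}

The old condition prints 0 (no migration requested, so the vCPU stays on pCPU3
indefinitely), the new one prints 1, matching the load-imbalance argument in
the description above.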