  sles10. Patch pygrub to get the kernel and initrd from the image.
  pygrub-boot-legacy-sles.patch
- bnc#842515 - VUL-0: CVE-2013-4375: XSA-71: xen: qemu disk backend
  (qdisk) resource leak
  CVE-2013-4375-xsa71.patch
- Upstream patches from Jan
  52496bea-x86-properly-handle-hvm_copy_from_guest_-phys-virt-errors.patch
  (Replaces CVE-2013-4355-xsa63.patch)
  52496c11-x86-mm-shadow-Fix-initialization-of-PV-shadow-L4-tables.patch
  (Replaces CVE-2013-4356-xsa64.patch)
  52496c32-x86-properly-set-up-fbld-emulation-operand-address.patch
  (Replaces CVE-2013-4361-xsa66.patch)
  52497c6c-x86-don-t-blindly-create-L3-tables-for-the-direct-map.patch
  524e971b-x86-idle-Fix-get_cpu_idle_time-s-interaction-with-offline-pcpus.patch
  524e9762-x86-percpu-Force-INVALID_PERCPU_AREA-to-non-canonical.patch
  524e983e-Nested-VMX-check-VMX-capability-before-read-VMX-related-MSRs.patch
  524e98b1-Nested-VMX-fix-IA32_VMX_CR4_FIXED1-msr-emulation.patch
  524e9dc0-xsm-forbid-PV-guest-console-reads.patch
  5256a979-x86-check-segment-descriptor-read-result-in-64-bit-OUTS-emulation.patch
  5256be57-libxl-fix-vif-rate-parsing.patch
  5256be84-tools-ocaml-fix-erroneous-free-of-cpumap-in-stub_xc_vcpu_getaffinity.patch
  5256be92-libxl-fix-out-of-memory-error-handling-in-libxl_list_cpupool.patch
  5257a89a-x86-correct-LDT-checks.patch
  5257a8e7-x86-add-address-validity-check-to-guest_map_l1e.patch
  5257a944-x86-check-for-canonical-address-before-doing-page-walks.patch
  525b95f4-scheduler-adjust-internal-locking-interface.patch
  525b9617-sched-fix-race-between-sched_move_domain-and-vcpu_wake.patch
  525e69e8-credit-unpause-parked-vcpu-before-destroying-it.patch
  525faf5e-x86-print-relevant-tail-part-of-filename-for-warnings-and-crashes.patch
- bnc#840196 - L3: MTU size on Dom0 gets reset when booting DomU

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=276

# Commit ef55257bc81204e34691f1c2aa9e01f2d0768bdd
# Date 2013-10-14 08:58:31 +0200
# Author David Vrabel <david.vrabel@citrix.com>
# Committer Jan Beulich <jbeulich@suse.com>
sched: fix race between sched_move_domain() and vcpu_wake()

From: David Vrabel <david.vrabel@citrix.com>

sched_move_domain() changes v->processor for all the domain's VCPUs.
If another domain, softirq etc. triggers a simultaneous call to
vcpu_wake() (e.g., by setting an event channel as pending), then
vcpu_wake() may lock one schedule lock and try to unlock another.

vcpu_schedule_lock() attempts to handle this but only does so for the
window between reading the schedule_lock from the per-CPU data and the
spin_lock() call.  This does not help with sched_move_domain()
changing v->processor between the calls to vcpu_schedule_lock() and
vcpu_schedule_unlock().
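
The window in question comes from the retry pattern used by the locking
helpers, roughly of the following shape (a simplified sketch; the exact
per-CPU field and helper names in the Xen sources are approximated here):

    static inline spinlock_t *vcpu_schedule_lock(struct vcpu *v)
    {
        spinlock_t *lock;

        for ( ; ; )
        {
            /* Sample the schedule lock currently associated with v->processor. */
            lock = per_cpu(schedule_data, v->processor).schedule_lock;
            spin_lock(lock);
            /*
             * Re-check: only a change between the read above and this point
             * is caught; nothing after the helper returns is protected.
             */
            if ( likely(lock == per_cpu(schedule_data, v->processor).schedule_lock) )
                return lock;
            spin_unlock(lock);
        }
    }

Nothing in this pattern prevents v->processor from changing after the helper
has returned, which is exactly what sched_move_domain() does.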

Fix the race by taking the schedule_lock for v->processor in
sched_move_domain().

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

Use vcpu_schedule_lock_irq() (which now returns the lock) to properly
retry the locking should the to-be-used lock have changed in the course
of acquiring it (issue pointed out by George Dunlap).
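
In the caller this boils down to the pattern used in the hunk below: keep the
pointer the helper returns and release exactly that lock (sketch only):

    spinlock_t *lock = vcpu_schedule_lock_irq(v);  /* lock actually acquired */

    v->processor = new_p;                          /* move VCPU to its new pCPU */
    spin_unlock_irq(lock);                         /* release that same lock */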

Add a comment explaining the state after the v->processor adjustment.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>

--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -276,6 +276,8 @@ int sched_move_domain(struct domain *d,
     new_p = cpumask_first(c->cpu_valid);
     for_each_vcpu ( d, v )
     {
+        spinlock_t *lock;
+
         vcpudata = v->sched_priv;
 
         migrate_timer(&v->periodic_timer, new_p);
@@ -283,7 +285,16 @@ int sched_move_domain(struct domain *d,
         migrate_timer(&v->poll_timer, new_p);
 
         cpumask_setall(v->cpu_affinity);
+
+        lock = vcpu_schedule_lock_irq(v);
         v->processor = new_p;
+        /*
+         * With v->processor modified we must not
+         * - make any further changes assuming we hold the scheduler lock,
+         * - use vcpu_schedule_unlock_irq().
+         */
+        spin_unlock_irq(lock);
+
         v->sched_priv = vcpu_priv[v->vcpu_id];
         evtchn_move_pirqs(v);
 
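
Note on the unlock in the hunk above: once v->processor has been updated, an
unlock helper keyed off v->processor would presumably check against (or
re-derive) the new CPU's schedule lock rather than the one actually held,
hence the plain spin_unlock_irq() on the saved lock pointer.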