xen/52932418-x86-xsave-fix-nonlazy-state-handling.patch
Charles Arnold a11c33863f - Upstream patches from Jan
  5281fad4-numa-sched-leave-node-affinity-alone-if-not-in-auto-mode.patch
  52820823-nested-SVM-adjust-guest-handling-of-structure-mappings.patch
  52820863-VMX-don-t-crash-processing-d-debug-key.patch
  5282492f-x86-eliminate-has_arch_mmios.patch
  52864df2-credit-Update-other-parameters-when-setting-tslice_ms.patch
  52864f30-fix-leaking-of-v-cpu_affinity_saved-on-domain-destruction.patch
  5289d225-nested-VMX-don-t-ignore-mapping-errors.patch
  528a0eb0-x86-consider-modules-when-cutting-off-memory.patch
  528f606c-x86-hvm-reset-TSC-to-0-after-domain-resume-from-S3.patch
  528f609c-x86-crash-disable-the-watchdog-NMIs-on-the-crashing-cpu.patch
  52932418-x86-xsave-fix-nonlazy-state-handling.patch

- Add missing Requires on pciutils to the xend-tools package

- bnc#851749 - Xen service file does not call xend properly
  xend.service 

- bnc#851386 - VUL-0: xen: XSA-78: Insufficient TLB flushing in
  VT-d (iommu) code
  528a0e5b-TLB-flushing-in-dma_pte_clear_one.patch

- bnc#849667 - VUL-0: xen: XSA-74: Lock order reversal between
  page_alloc_lock and mm_rwlock
  CVE-2013-4553-xsa74.patch
- bnc#849665 - VUL-0: CVE-2013-4551: xen: XSA-75: Host crash due to
  guest VMX instruction execution
  52809208-nested-VMX-VMLANUCH-VMRESUME-emulation-must-check-permission-1st.patch
- bnc#849668 - VUL-0: xen: XSA-76: Hypercalls exposed to privilege
  rings 1 and 2 of HVM guests

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=279
2013-11-26 20:18:36 +00:00


# Commit 7d8b5dd98463524686bdee8b973b53c00c232122
# Date 2013-11-25 11:19:04 +0100
# Author Liu Jinsong <jinsong.liu@intel.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/xsave: fix nonlazy state handling

Nonlazy xstates should be xsaved each time vcpu_save_fpu() runs.
Accesses to nonlazy xstates do not trigger a #NM exception, so such
state must be restored whenever the vcpu is scheduled in and saved
whenever it is scheduled out.

Currently this bug affects the AMD LWP feature, and it would later
affect the Intel MPX feature. With the fix both LWP and MPX work fine.

Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>

Furthermore, during restore we also need to set nonlazy_xstate_used
according to the incoming accumulated XCR0.

Also adjust the changes to i387.c such that there won't be a pointless
clts()/stts() pair.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1146,6 +1146,8 @@ long arch_do_domctl(
         {
             v->arch.xcr0 = _xcr0;
             v->arch.xcr0_accum = _xcr0_accum;
+            if ( _xcr0_accum & XSTATE_NONLAZY )
+                v->arch.nonlazy_xstate_used = 1;
             memcpy(v->arch.xsave_area, _xsave_area,
                    evc->size - 2 * sizeof(uint64_t));
         }
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1073,6 +1073,8 @@ static int hvm_load_cpu_xsave_states(str
 
     v->arch.xcr0 = ctxt->xcr0;
     v->arch.xcr0_accum = ctxt->xcr0_accum;
+    if ( ctxt->xcr0_accum & XSTATE_NONLAZY )
+        v->arch.nonlazy_xstate_used = 1;
     memcpy(v->arch.xsave_area, &ctxt->save_area,
            desc->length - offsetof(struct hvm_hw_cpu_xsave, save_area));
 
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -120,11 +120,22 @@ static inline void fpu_frstor(struct vcp
 /*******************************/
 /*      FPU Save Functions     */
 /*******************************/
+
+static inline uint64_t vcpu_xsave_mask(const struct vcpu *v)
+{
+    if ( v->fpu_dirtied )
+        return v->arch.nonlazy_xstate_used ? XSTATE_ALL : XSTATE_LAZY;
+
+    return v->arch.nonlazy_xstate_used ? XSTATE_NONLAZY : 0;
+}
+
 /* Save x87 extended state */
 static inline void fpu_xsave(struct vcpu *v)
 {
     bool_t ok;
+    uint64_t mask = vcpu_xsave_mask(v);
 
+    ASSERT(mask);
     ASSERT(v->arch.xsave_area);
     /*
      * XCR0 normally represents what guest OS set. In case of Xen itself,
@@ -132,7 +143,7 @@ static inline void fpu_xsave(struct vcpu
      */
     ok = set_xcr0(v->arch.xcr0_accum | XSTATE_FP_SSE);
     ASSERT(ok);
-    xsave(v, v->arch.nonlazy_xstate_used ? XSTATE_ALL : XSTATE_LAZY);
+    xsave(v, mask);
     ok = set_xcr0(v->arch.xcr0 ?: XSTATE_FP_SSE);
     ASSERT(ok);
 }
@@ -263,7 +274,7 @@ void vcpu_restore_fpu_lazy(struct vcpu *
  */
 void vcpu_save_fpu(struct vcpu *v)
 {
-    if ( !v->fpu_dirtied )
+    if ( !v->fpu_dirtied && !v->arch.nonlazy_xstate_used )
         return;
 
     ASSERT(!is_idle_vcpu(v));