xen/53cfddaf-x86-mem_event-validate-the-response-vcpu_id-before-acting-on-it.patch
Charles Arnold b94eda4466
- Upstream patches from Jan
  5347b524-evtchn-eliminate-64k-ports-limitation.patch
  53aac342-x86-HVM-consolidate-and-sanitize-CR4-guest-reserved-bit-determination.patch
  53b16cd4-VT-d-ATS-correct-and-clean-up-dev_invalidate_iotlb.patch
  53b56de1-properly-reference-count-DOMCTL_-un-pausedomain-hypercalls.patch
  53cfdcc7-avoid-crash-when-doing-shutdown-with-active-cpupools.patch
  53cfddaf-x86-mem_event-validate-the-response-vcpu_id-before-acting-on-it.patch
  53cfdde4-x86-mem_event-prevent-underflow-of-vcpu-pause-counts.patch

- bnc#886801 - xl vncviewer: The first domu can be accessed by any id
  53c9151b-Fix-xl-vncviewer-accesses-port-0-by-any-invalid-domid.patch

- Upstream pygrub bug fix
  5370e03b-pygrub-fix-error-handling-if-no-valid-partitions-are-found.patch

- Fix pygrub to handle old 32-bit VMs
  pygrub-boot-legacy-sles.patch (Mike Latimer)

- Remove the xen-vmresync utility.  It is an old PlateSpin Orchestrate
  utility that should never have been included in the Xen package.
  Updated xen.spec

- Rework xen-destroy utility included in xen-utils
  bnc#885292 and bnc#886063
  Updated xen-utils-0.1.tar.bz2

- bnc#886063 - Xen monitor fails (xl list --long output different
  from xm list --long output)
- bnc#885292 - VirtualDomain: pid_status does not know how to check
  status on SLE12

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=322
2014-07-24 19:43:18 +00:00

# Commit ee75480b3c8856db9ef1aa45418f35ec0d78989d
# Date 2014-07-23 18:07:11 +0200
# Author Andrew Cooper <andrew.cooper3@citrix.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/mem_event: validate the response vcpu_id before acting on it
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Tested-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -596,11 +596,20 @@ int mem_sharing_sharing_resume(struct do
     /* Get all requests off the ring */
     while ( mem_event_get_response(d, &d->mem_event->share, &rsp) )
     {
+        struct vcpu *v;
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
         /* Unpause domain/vcpu */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+            vcpu_unpause(v);
     }
 
     return 0;
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1228,8 +1228,17 @@ void p2m_mem_paging_resume(struct domain
     /* Pull all responses off the ring */
     while( mem_event_get_response(d, &d->mem_event->paging, &rsp) )
     {
+        struct vcpu *v;
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
         /* Fix p2m entry if the page was not dropped */
         if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
         {
@@ -1248,7 +1257,7 @@ void p2m_mem_paging_resume(struct domain
         }
         /* Unpause domain */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+            vcpu_unpause(v);
     }
 }
@@ -1356,11 +1365,20 @@ void p2m_mem_access_resume(struct domain
     /* Pull all responses off the ring */
     while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
     {
+        struct vcpu *v;
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
         /* Unpause domain */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+            vcpu_unpause(v);
     }
 }
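
For readers outside the Xen tree, here is a minimal standalone sketch of
the bounds-check pattern the patch applies to each hunk above.  The
struct domain and struct vcpu definitions below are simplified stand-ins
for illustration only (the real ones live in xen/include/xen/sched.h),
and get_response_vcpu is a hypothetical helper, not a function the patch
adds.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the Xen structures; illustration only. */
struct vcpu { int vcpu_id; };
struct domain {
    unsigned int max_vcpus;
    struct vcpu **vcpu;    /* max_vcpus slots; a slot may be NULL */
};

/* Mirror the check the patch adds: reject ids at or beyond
 * d->max_vcpus (out-of-bounds array read) and ids whose slot was
 * never allocated (NULL pointer dereference). */
static struct vcpu *get_response_vcpu(const struct domain *d,
                                      unsigned int vcpu_id)
{
    if ( (vcpu_id >= d->max_vcpus) || !d->vcpu[vcpu_id] )
        return NULL;
    return d->vcpu[vcpu_id];
}

int main(void)
{
    struct vcpu v0 = { .vcpu_id = 0 };
    struct vcpu *slots[2] = { &v0, NULL };  /* slot 1 never allocated */
    struct domain d = { .max_vcpus = 2, .vcpu = slots };

    printf("id 0 -> %p\n", (void *)get_response_vcpu(&d, 0)); /* valid */
    printf("id 1 -> %p\n", (void *)get_response_vcpu(&d, 1)); /* NULL  */
    printf("id 7 -> %p\n", (void *)get_response_vcpu(&d, 7)); /* NULL  */
    return 0;
}

Note that the patch handles an invalid id with "continue" rather than
aborting the resume loop, so one bad response on the ring does not block
the remaining responses from being processed.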