- Update to changeset 20900 RC2+ for sle11-sp1 beta4.

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=29
Charles Arnold 2010-02-05 23:33:58 +00:00 committed by Git OBS Bridge
parent f196fa2c00
commit 6c7e8be7db
6 changed files with 83 additions and 9 deletions


@@ -31,6 +31,7 @@ optional packages are also installed:
vm-install (Optional, to install VMs)
python-gtk (Optional, to install VMs graphically)
virt-manager (Optional, to manage VMs graphically)
virt-viewer (Optional, to view VMs outside virt-manager)
tightvnc (Optional, to view VMs outside virt-manager)
Additional packages:
@@ -328,7 +329,7 @@ documentation for workarounds.
Networking
----------
Your virtual machines become much more useful if your can reach them via the
Your virtual machines become much more useful if you can reach them via the
network. Starting with openSUSE11.1 and SLE11, networking in domain 0 is
configured and managed via YaST. The yast2-networking module can be used
to create and manage bridged networks. During initial installation, a bridged

shadow.patch (new file, 66 lines)

@@ -0,0 +1,66 @@
In domain_create, we previously reserved 1M of memory for domain creation (as
described in the xend comment), and this amount SHOULD NOT depend on the vcpu
number. Later, shadow_mem_control() adjusts the shadow size to 256 pages per
vcpu (plus some other values related to the guest memory size...). Therefore
C/S 20389, which raised the 1M to 4M to accommodate more vcpus, is wrong. I'm
sorry for that.

The following is why 1M is currently not enough for a large number of vcpus
and, as mentioned, crashes Xen.

Each time sh_set_allocation() is called, it checks whether the pages reported
by shadow_min_acceptable_pages() have been allocated and, if not, allocates
them; that amounts to 128 pages per vcpu. But before d->max_vcpu is set, no
guest vcpu has been initialized, so shadow_min_acceptable_pages() always
returns 0. As a result, domain_create only allocates the 1M of shadow memory
and does not satisfy the 128 pages per vcpu needed by alloc_vcpu().

Vcpu allocation is done in the XEN_DOMCTL_max_vcpus hypercall, but at that
point shadow_mem_control() has not yet been called, so all of those vcpus are
allocated out of the pre-allocated 1M of shadow memory. That is a bug: as the
vcpu count grows, 1M is no longer enough and Xen crashes. C/S 20389 exposes
this issue.

So the right sequence should be: after d->max_vcpu is set and before
alloc_vcpu(), call sh_set_allocation() to satisfy the 128 pages per vcpu. The
following patch does this. Does it work for you? Thanks!
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Index: xen-4.0.0-testing/xen/arch/x86/mm/shadow/common.c
===================================================================
--- xen-4.0.0-testing.orig/xen/arch/x86/mm/shadow/common.c
+++ xen-4.0.0-testing/xen/arch/x86/mm/shadow/common.c
@@ -41,6 +41,9 @@
DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
+static unsigned int sh_set_allocation(struct domain *d,
+ unsigned int pages,
+ int *preempted);
/* Set up the shadow-specific parts of a domain struct at start of day.
* Called for every domain from arch_domain_create() */
void shadow_domain_init(struct domain *d, unsigned int domcr_flags)
@@ -82,6 +85,12 @@ void shadow_vcpu_init(struct vcpu *v)
}
#endif
+ if ( !is_idle_domain(v->domain) )
+ {
+ shadow_lock(v->domain);
+ sh_set_allocation(v->domain, 128, NULL);
+ shadow_unlock(v->domain);
+ }
v->arch.paging.mode = &SHADOW_INTERNAL_NAME(sh_paging_mode, 3);
}
@@ -3100,7 +3109,7 @@ int shadow_enable(struct domain *d, u32
{
unsigned int r;
shadow_lock(d);
- r = sh_set_allocation(d, 1024, NULL); /* Use at least 4MB */
+ r = sh_set_allocation(d, 256, NULL); /* Use at least 1MB */
if ( r != 0 )
{
sh_set_allocation(d, 0, NULL);
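
As a rough illustration of the arithmetic in the patch description above, here
is a minimal standalone sketch (plain C, not Xen code; the 4 KiB page size, the
1M/256-page pre-allocation, and the 128-pages-per-vcpu minimum come from the
patch, everything else is illustrative):

/* sketch.c -- hypothetical example, not part of the patch */
#include <stdio.h>

#define PAGE_SIZE       4096u                        /* x86 page size in bytes */
#define PREALLOC_PAGES  (1024u * 1024u / PAGE_SIZE)  /* 1M pre-allocation = 256 pages */
#define PAGES_PER_VCPU  128u                         /* minimum shadow pages per vcpu */

int main(void)
{
    unsigned int vcpus;
    for (vcpus = 1; vcpus <= 8; vcpus++) {
        unsigned int needed = vcpus * PAGES_PER_VCPU;
        printf("%u vcpu(s): need %3u pages, pre-allocated %u -> %s\n",
               vcpus, needed, PREALLOC_PAGES,
               needed <= PREALLOC_PAGES ? "ok" : "insufficient");
    }
    return 0;
}

With three or more vcpus the per-vcpu minimum already exceeds the 256
pre-allocated pages, which is why the patch grows the allocation from
shadow_vcpu_init() rather than relying on the initial reservation.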


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4947d275a04f0a6ce9b6c027c84281f03611ef2fb6d81f8d0175d2f7c72b7619
size 23218651
oid sha256:fadb3f78dfaf163464c6fcfed57a1f76a6d7cc2f65771bc9886800afdbb528bb
size 23224505


@@ -9,9 +9,9 @@ Index: xen-4.0.0-testing/Config.mk
-CONFIG_QEMU ?= $(QEMU_REMOTE)
+CONFIG_QEMU ?= ioemu-remote
QEMU_TAG := xen-4.0.0-rc2
#QEMU_TAG ?= a0066d08514ecfec34c717c7184250e95519f39c
@@ -164,9 +164,9 @@ CONFIG_OCAML_XENSTORED ?= n
QEMU_TAG ?= 575ed1016f6fba1c6a6cd32a828cb468bdee96bb
# Mon Feb 1 16:33:52 2010 +0000
@@ -163,9 +163,9 @@ CONFIG_OCAML_XENSTORED ?= n
# Optional components
XENSTAT_XENTOP ?= y
VTPM_TOOLS ?= n


@@ -1,3 +1,8 @@
-------------------------------------------------------------------
Fri Feb 5 08:16:39 MST 2010 - carnold@novell.com
- Update to changeset 20900 RC2+ for sle11-sp1 beta4.
-------------------------------------------------------------------
Fri Jan 29 09:22:46 MST 2010 - carnold@novell.com


@@ -1,5 +1,5 @@
#
# spec file for package xen (Version 4.0.0_20873_01)
# spec file for package xen (Version 4.0.0_20900_01)
#
# Copyright (c) 2009 SUSE LINUX Products GmbH, Nuernberg, Germany.
#
@@ -22,7 +22,7 @@ Name: xen
ExclusiveArch: %ix86 x86_64
%define xvers 4.0
%define xvermaj 4
%define changeset 20873
%define changeset 20900
%define xen_build_dir xen-4.0.0-testing
%define with_kmp 1
BuildRequires: LibVNCServer-devel SDL-devel automake bin86 curl-devel dev86 graphviz latex2html libjpeg-devel libxml2-devel ncurses-devel openssl openssl-devel pciutils-devel python-devel texinfo transfig
@@ -37,7 +37,7 @@ BuildRequires: glibc-32bit glibc-devel-32bit
%if %{?with_kmp}0
BuildRequires: kernel-source kernel-syms module-init-tools xorg-x11
%endif
Version: 4.0.0_20873_01
Version: 4.0.0_20900_01
Release: 1
License: GPL v2 only
Group: System/Kernel
@@ -146,6 +146,7 @@ Patch424: ioemu-7615-qcow2-fix-alloc_cluster_link_l2.patch
Patch425: ioemu-bdrv-open-CACHE_WB.patch
Patch426: xen-ioemu-hvm-pv-support.diff
Patch427: qemu-dm-segfault.patch
Patch428: shadow.patch
# Jim's domain lock patch
Patch450: xend-domain-lock.patch
# Hypervisor and PV driver Patches
@@ -571,6 +572,7 @@ Authors:
%patch425 -p1
%patch426 -p1
%patch427 -p1
%patch428 -p1
%patch450 -p1
%patch500 -p1
%patch501 -p1