xen/shadow.patch (forked from pool/xen)

In domain_create, we previously reserved 1MB of memory for domain creation
(as described in the xend comment), and this memory SHOULD NOT be related to
the number of vcpus. Later, shadow_mem_control() adjusts the shadow size to
256 pages per vcpu (plus some other values related to the guest memory
size...). Therefore C/S 20389, which changed the 1MB to 4MB to accommodate
more vcpus, is wrong. I'm sorry for that.
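For concreteness (with 4KiB pages, 1MB is 256 shadow pages and the 4MB from
C/S 20389 is 1024 pages), the sizing described above looks roughly like the
sketch below. The helper name is made up and the guest-memory term is only
indicative of what shadow_mem_control() adds, not its exact formula:

    /* Illustrative shadow-pool arithmetic only, assuming 4KiB pages. */
    #define SHADOW_PREALLOC_PAGES (1024 * 1024 / 4096)  /* 1MB == 256 pages */

    /* Hypothetical helper mirroring the later shadow_mem_control() sizing:
     * 256 shadow pages per vcpu plus something proportional to guest memory
     * (the factor of 2 pages per MB here is an assumption). */
    static unsigned int rough_shadow_target(unsigned int nr_vcpus,
                                            unsigned long guest_mb)
    {
        return nr_vcpus * 256 + guest_mb * 2;
    }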
The following is the reason why 1MB currently does not work for a large
number of vcpus and, as mentioned, causes Xen to crash.
Each time sh_set_allocation() is called, it checks whether the minimum given
by shadow_min_acceptable_pages() has already been allocated and, if not,
allocates it; that minimum is 128 pages per vcpu. But before d->max_vcpu is
set, no guest vcpu has been initialized, so shadow_min_acceptable_pages()
always returns 0. Therefore domain_create only allocates the 1MB of shadow
memory, and the 128 pages per vcpu needed by alloc_vcpu() are never
satisfied.
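A simplified sketch of that check (not the verbatim source, but it matches
the behaviour described above):

    /* Simplified: the minimum shadow allocation is 128 pages per
     * *existing* vcpu. */
    static unsigned int shadow_min_acceptable_pages(struct domain *d)
    {
        unsigned int vcpu_count = 0;
        struct vcpu *v;

        /* Before XEN_DOMCTL_max_vcpus populates d->vcpu[], this loop
         * counts nothing... */
        for_each_vcpu(d, v)
            vcpu_count++;

        return vcpu_count * 128;  /* ...so the minimum comes out as 0. */
    }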
As we know, vcpu allocation is done in the XEN_DOMCTL_max_vcpus hypercall.
However, at that point we have not yet called shadow_mem_control() and are
still using the pre-allocated 1MB of shadow memory to allocate all of those
vcpus, which is a BUG. Therefore, when the vcpu count grows, 1MB is not
enough and Xen crashes. C/S 20389 exposes this issue.
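In other words, the ordering in that hypercall path is roughly the following
(a condensed sketch, not the literal domctl handler; the exact field and
argument names may differ):

    /* Condensed sketch of the problematic ordering. */
    d->max_vcpus = max;              /* vcpu count is only known from here */
    for ( i = 0; i < max; i++ )
    {
        /* Each alloc_vcpu() still draws on the 1MB pre-allocated shadow
         * pool; shadow_mem_control() has not run yet, so a large 'max'
         * exhausts it and crashes Xen. */
        if ( d->vcpu[i] == NULL && alloc_vcpu(d, i, 0) == NULL )
            break;
    }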
So I think the right approach is: after d->max_vcpu is set and before
alloc_vcpu(), call sh_set_allocation() to satisfy the 128 pages per vcpu.
The following patch does this. Does it work for you? Thanks!
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Index: xen-4.0.0-testing/xen/arch/x86/mm/shadow/common.c
===================================================================
--- xen-4.0.0-testing.orig/xen/arch/x86/mm/shadow/common.c
+++ xen-4.0.0-testing/xen/arch/x86/mm/shadow/common.c
@@ -41,6 +41,9 @@
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
+static unsigned int sh_set_allocation(struct domain *d,
+                                      unsigned int pages,
+                                      int *preempted);
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called for every domain from arch_domain_create() */
 void shadow_domain_init(struct domain *d, unsigned int domcr_flags)
@@ -82,6 +85,12 @@ void shadow_vcpu_init(struct vcpu *v)
     }
 #endif
+    if ( !is_idle_domain(v->domain) )
+    {
+        shadow_lock(v->domain);
+        sh_set_allocation(v->domain, 128, NULL);
+        shadow_unlock(v->domain);
+    }
     v->arch.paging.mode = &SHADOW_INTERNAL_NAME(sh_paging_mode, 3);
 }
@@ -3102,7 +3111,7 @@ int shadow_enable(struct domain *d, u32
     {
         unsigned int r;
         shadow_lock(d);
-        r = sh_set_allocation(d, 1024, NULL); /* Use at least 4MB */
+        r = sh_set_allocation(d, 256, NULL);  /* Use at least 1MB */
         if ( r != 0 )
         {
             sh_set_allocation(d, 0, NULL);