README for the Xen packages
===========================

This file contains SUSE-specific instructions and suggestions for using Xen.

For more in-depth documentation on using Xen on SUSE, consult the
virtualization chapter in the SLES or SUSE Linux manual, or read up-to-date
virtualization information at
https://www.suse.com/documentation/sles11/singlehtml/book_xen/book_xen.html

For more complete documentation on Xen itself, please install the xen-doc-html
package and read the documentation installed into /usr/share/doc/packages/xen/.


About
-----
Xen allows you to run multiple virtual machines on a single physical machine.

See the Xen homepage for more information:
http://www.xenproject.org/

If you want to use Xen, you need to install the Xen hypervisor and a number of
supporting packages. During the initial SUSE installation (or when installing
from YaST) check-mark the "Xen Virtual Machine Host Server" pattern. If,
instead, you wish to install Xen manually later, click on the "Install
Hypervisor and Tools" icon in YaST.

If you want to install and manage VMs graphically, be sure to install a
graphical desktop environment like KDE or GNOME. The following optional
packages are needed to manage VMs graphically. Note that "Install Hypervisor
and Tools" installs all the packages below:
  virt-install (Optional, to install VMs)
  virt-manager (Optional, to manage VMs graphically)
  virt-viewer  (Optional, to view VMs outside virt-manager)
  vm-install   (Optional, to install VMs with xl only)

You then need to reboot your machine. Instead of booting a normal Linux
kernel, you will boot the Xen hypervisor and a slightly changed Linux kernel.
This Linux kernel runs in the first virtual machine and will drive most of
your hardware.

This approach is called paravirtualization, since it is a partial
virtualization (the Linux kernel needs to be changed slightly, to make the
virtualization easier). It results in very good performance (consult
http://www.cl.cam.ac.uk/research/srg/netos/xen/performance.html) but has the
downside that unmodified operating systems are not supported. However, new
hardware features (e.g., Intel VT and AMD-V) are overcoming this limitation.


Terminology
-----------
The Xen open-source community has a number of terms that you should be
familiar with.

A "domain" is Xen's term for a virtual machine.

"Domain 0" is the first virtual machine. It can control all other virtual
machines. It also (usually) controls the physical hardware. A kernel used in
domain 0 may sometimes be referred to as a dom0 kernel.

"Domain U" is any virtual machine other than domain 0. The "U" indicates it
is unprivileged (that is, it cannot control other domains). A kernel used in
an unprivileged domain may be referred to as a domU kernel.

SUSE documentation will use the more industry-standard term "virtual
machine", or "VM", rather than "domain" where possible. To that end,
domain 0 will be called the "virtual machine server", since it is essentially
the server on which the other VMs run. All other domains are simply "virtual
machines".

The acronym "HVM" refers to a hardware-assisted virtual machine. These are
VMs that have not been modified (e.g., Windows) and therefore need hardware
support such as Intel VT or AMD-V to run on Xen.


Kernels
-------
Xen supports two kinds of kernels: a privileged kernel (which boots the
machine, controls other VMs, and usually controls all your physical hardware)
and unprivileged kernels (which can't control other VMs, and usually don't need
drivers for physical hardware). The privileged kernel boots first (as the VM
server); an unprivileged kernel is used in all subsequent VMs.

The VM server takes control of the boot process after Xen has initialized the
CPU and the memory. This VM contains a privileged kernel and all the hardware
drivers.

For the other virtual machines, you usually don't need the hardware drivers.
(It is possible to hide a PCI device from the VM server and re-assign it to
another VM for direct access, but that is a more advanced topic.) Instead you
use virtual network and block device drivers in the unprivileged VMs to access
the physical network and block drivers in the VM server.

For simplicity, SUSE ships a single Xen-enabled Linux kernel, rather than
separate privileged and unprivileged kernels. As most of the hardware drivers
are modules anyway, using this kernel as an unprivileged kernel has very
little extra overhead.

The kernel is contained in the kernel-xen package, which you need to install to
use Xen.


Booting
-------
If you installed Xen during the initial SUSE installation, or installed one
of the kernel-xen* packages later, a "XEN" option should exist in your Grub
bootloader. Select it to boot SUSE on top of Xen.

If you want to add additional entries, or modify the existing ones, you may
run the YaST2 Boot Loader program.

Once you have booted this configuration successfully, you are running Xen with
a privileged kernel on top of it.


Xen Boot Parameters
-------------------
Normally, xen.gz requires no parameters. However, in special cases (such as
debugging or a dedicated VM server) you may wish to pass it parameters.

To add parameters to xen.gz, edit the /etc/default/grub file and add the
following line:
   GRUB_CMDLINE_XEN_DEFAULT="<parameters>"
The parameters must be valid options for xen.gz (the hypervisor). After
editing this file, you must first run 'grub2-mkconfig -o /boot/grub2/grub.cfg'
and then reboot for the changes to take effect.
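
For example, to increase the hypervisor's log verbosity (any other valid
xen.gz option could be substituted here):
   GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all"
   grub2-mkconfig -o /boot/grub2/grub.cfg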

For more information on how to add options to the hypervisor, see the sections
below titled "Dom0 Memory Ballooning" and "Troubleshooting".

For a more complete discussion of possible parameters, see the user
documentation in the xen-doc-html package.


Creating a VM with virt-install
-------------------------------
The virt-install program (part of the virt-install package, and accessible
through YaST's Control Center) is the recommended way to create VMs. It
handles creating both the VM's libvirt XML definition and its disk(s), and
it can help install any operating system, not just SUSE. virt-install has both
a command-line-only mode and a graphical wizard mode that may be used to define
and start VM installations.
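
As an illustration, a paravirtual installation could be started from the
command line with something like the following (a sketch only; option names
such as --memory vary between virt-install versions, and the installation
source URL is a placeholder):
   virt-install --connect xen:/// --paravirt \
     --name sles-vm --memory 1024 --vcpus 2 \
     --disk /var/lib/xen/images/sles-vm.img,size=8 \
     --network bridge=br0 \
     --location http://example.com/path/to/install/tree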

virt-install may be launched from the virt-manager VM management tool. Start
virt-manager either from the YaST Control Center or from the command line.
The installation icon on the main virt-manager screen may be selected to
begin the virt-install installation wizard.

Using virt-install or virt-manager requires the libvirt packages to be
installed, and the libvirt daemon must be running on the host unless you are
managing a remote host.

Each VM needs to have its own root filesystem. The root filesystem can live
on a block device (e.g., a hard disk partition, or an LVM2 or EVMS volume) or
in a file that holds the filesystem image.

VMs can share filesystems, such as /usr or /opt, that are mounted read-only
from _all_ VMs. Never try to share a filesystem that is mounted read-write;
filesystem corruption will result. For sharing writable data between VMs, use
NFS or another networked or cluster filesystem.

When defining the virtual network adapter(s), we recommend using a static MAC
for the VM rather than allowing Xen to randomly select one each time the VM
boots. (See "Network Troubleshooting" below.) The Xen Project has been
allocated a range of MAC addresses with the OUI of 00-16-3E. By using MACs
from this range you can be sure they will not conflict with any physical
adapters. An example is shown below.
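
For example, with virt-install a static MAC from this range can be passed as
part of the network option (the address below is only an illustration; choose
one that is unique on your network):
   --network bridge=br0,mac=00:16:3e:01:02:03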

When the VM shuts down (because the installation -- or at least the first
stage of it -- is done), the wizard finalizes the VM's configuration and
restarts the VM.

The creation of VMs can be automated; read the virt-install man page for more
details. The installation of an OS within the VM can be automated if the OS
supports it.


Creating a VM with vm-install
-----------------------------
The vm-install program is also provided to create VMs. Like virt-install,
this optional program handles creating both the VM's libvirt XML definition
and disk(s). It also creates a legacy configuration file for use with 'xl'.
It can help install any operating system, not just SUSE.

From the command line, run "vm-install". If the DISPLAY environment variable
is set and the supporting packages (python-gtk) are installed, a graphical
wizard will start. Otherwise, a text wizard will start. If vm-install is
started with the '--use-xl' flag, it will not require libvirt or attempt
to communicate with libvirt when creating a VM; instead it will use only the
'xl' toolstack to start VM installations.

Once you have the VM configured, click "OK". The wizard will then create a
configuration file for the VM and a disk image. The disk image will live
in /var/lib/xen/images, and the corresponding configuration file will live
in /etc/xen/vm. The operating system's installation program will then run
within the VM.

When the VM shuts down (because the installation -- or at least the first
stage of it -- is done), the wizard finalizes the VM's configuration and
restarts the VM.

The creation of VMs can be automated; read the vm-install man page for more
details. The installation of an OS within the VM can be automated if the OS
supports it.


Creating a VM Manually
----------------------
If you create a VM manually (as opposed to using virt-install, which is the
recommended way), you will need to create a disk (or reuse an existing one)
and a configuration file.

If you are using a disk or disk image that already has an operating system
installed and you want the VM to run in paravirtual mode, you will probably
need to replace its kernel with a Xen-enabled kernel.

The kernel and ramdisk used to bootstrap the VM must match any kernel modules
that might be present in the VM's disk. It is possible to manually copy the
kernel and ramdisk from the VM's disk (for example, after updating the kernel
within that VM) to the VM server's filesystem. However, an easier (and less
error-prone) method is to use /usr/lib/grub2/x86_64-xen/grub.xen as the VM
kernel. When the new VM is started, grub.xen reads the grub configuration
from the VM's disk and selects the configured kernel and ramdisk, which are
then used to bootstrap the new VM.

Next, make a copy of one of the example configuration files in
/etc/xen/examples/ and modify it to suit your needs. You will need to change
(at the very least) the "name" and "disk" parameters.
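
A minimal configuration might look like the following sketch (names, sizes,
and paths are illustrative; see the examples for the full range of options):
   name="sles-vm"
   kernel="/usr/lib/grub2/x86_64-xen/grub.xen"
   memory=1024
   vcpus=2
   disk=['/var/lib/xen/images/sles-vm.img,raw,xvda,w']
   vif=['mac=00:16:3e:01:02:03,bridge=br0']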


Managing Virtual Machines
-------------------------
VMs can be managed from the command line using 'virsh' or from virt-manager.

VMs created by virt-install or vm-install (without vm-install's --use-xl flag)
will automatically be defined in libvirt. VMs defined in libvirt may be managed
with virt-manager or from the command line using the 'virsh' command. However,
if you copy a VM from another machine and manually create a VM XML configuration
file, you will need to import it into libvirt with a command like:
   virsh define <path to>/my-vm.xml
This imports the configuration into libvirt (and therefore virt-manager also
becomes aware of it).

Now to start the VM:
   virsh start my-vm
or start it from virt-manager's graphical menu.

Have a look at running VMs with "virsh list". Attach to a VM's text console
with "virsh console <vm-name>". Attaching to multiple VM consoles is most
conveniently done with the terminal multiplexer "screen".

Have a look at the other virsh commands by typing "virsh help". Note that most
virsh commands must be run as root.
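
A few other frequently used virsh subcommands:
   virsh shutdown my-vm     (ask the VM to shut down cleanly)
   virsh destroy my-vm      (immediately power off the VM)
   virsh dumpxml my-vm      (print the VM's libvirt XML definition)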


Changes in the Xen VM Management Toolstack
------------------------------------------
With SUSE Linux Enterprise Server 12, the way VMs are managed has changed
compared with older SLES versions. Users familiar with the 'xm' command
and the xend management daemon will notice that these are absent. The xm/xend
toolstack has been replaced with the xl toolstack, which is intended to remain
backwards compatible with existing xm domain configuration files. Most 'xm'
commands can simply be replaced with 'xl', as in the examples below. One
significant difference is that xl does not support the concept of Managed
Domains: the xl command can only modify running VMs, and once a VM is shut
down, no state is preserved other than what is saved in the configuration
file used to start the VM. If you need Managed Domains, you are encouraged
to use libvirt and its tools to create and modify VMs. These tools include
the command-line tool 'virsh' and the graphical tools virt-manager and
virt-install.
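
For example:
   xm list                       becomes   xl list
   xm create /etc/xen/vm/my-vm   becomes   xl create /etc/xen/vm/my-vm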

Warning: Using xl commands to modify libvirt-managed domains will result in
errors when virsh or virt-manager is used. Please use only virsh or
virt-manager to manage libvirt-managed domains. If you are not using
libvirt-managed domains, then xl commands are the correct way to modify
running domains.


Using the Mouse via VNC in Fully Virtual Mode
---------------------------------------------
In a fully virtualized VM, the mouse may be emulated as a PS/2 mouse, USB
mouse, or USB tablet. The virt-install tool selects the best emulation that is
known to be automatically detected and supported by the operating system.

However, when accessing some fully virtualized operating systems via VNC, the
mouse may be difficult to control if the VM is emulating a PS/2 mouse. PS/2
provides mouse deltas, but VNC only provides absolute coordinates. In such
cases, you may want to manually switch the operating system and VM to use a
USB tablet.

Emulation of a SummaSketch graphics tablet is provided for this reason. To
use the Summa emulation, you will need to configure your fully virtualized OS.
Note that the virtual tablet is connected to the second virtual serial port
(/dev/ttyS1 or COM2).

Most Linux distributions ship with the appropriate drivers, which only need to
be configured. To configure gpm, edit /etc/sysconfig/mouse and add these lines:
   MOUSETYPE="summa"
   XMOUSETYPE="SUMMA"
   DEVICE=/dev/ttyS1
The format and location of your configuration file could vary depending upon
your Linux distribution. The goal is to run the gpm daemon as follows:
   gpm -t summa -m /dev/ttyS1
X also needs to be configured to use the Summa emulation. Add the following
stanza to /etc/X11/xorg.conf, or use your distribution's tools to add these
settings:
   Section "InputDevice"
     Identifier "Mouse0"
     Driver "summa"
     Option "Device" "/dev/ttyS1"
     Option "InputFashion" "Tablet"
     Option "Mode" "Absolute"
     Option "Name" "EasyPen"
     Option "Compatible" "True"
     Option "Protocol" "Auto"
     Option "SendCoreEvents" "on"
     Option "Vendor" "GENIUS"
   EndSection
After making these changes, restart gpm and X.


HVM Console in Fully Virtual Mode
---------------------------------
When running a VM in fully virtual mode, a special console is available that
provides some additional ways to control the VM. Press Ctrl-Alt-2 to access
the console; press Ctrl-Alt-1 to return to the VM. While at the console,
type "help" for help.

The two most important commands are "send-key" and "change". The "send-key"
command allows you to send any key sequence to the VM, including sequences
that might otherwise be intercepted by your local window manager.

The "change" command allows the target of a block device to be changed; for
example, use it to change from one CD ISO to another. Some versions of Xen
have this command disabled for security reasons. Consult the online
documentation for workarounds.
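
For example, to send a key chord and then switch the emulated CD-ROM to a
different ISO image (a sketch; device names and command spellings can differ
between QEMU versions -- some spell the first command "sendkey"):
   send-key ctrl-alt-f2
   change ide1-cd0 /var/lib/xen/images/other-media.iso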


Networking
----------
Your virtual machines become much more useful if you can reach them via the
network. Starting with openSUSE 11.1 and SLE 11, networking in domain 0 is
configured and managed via YaST. The yast2-networking module can be used
to create and manage bridged networks. During initial installation, a bridged
networking proposal will be presented if the "Xen Virtual Machine Host Server"
pattern is selected. The proposal will also be presented if you install Xen
after initial installation using the "Install Hypervisor and Tools" module in
YaST.

The default proposal creates a virtual bridge in domain 0 for each active
ethernet device, enslaving the device to the bridge. Consider a machine
containing two ethernet devices (eth0 and eth1), both with active carriers:
YaST will create br0 and br1, enslaving eth0 and eth1 respectively. The
result can be inspected as shown below.
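
For example, in domain 0:
   brctl show          (list bridges and their enslaved interfaces)
   ip addr show br0    (show addresses assigned to the first bridge)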

VMs get a virtual network interface (e.g. eth0), which is visible in domain 0
as vifN.0 and connected to the bridge. This means that if you set up an IP
address in the VMs belonging to the same subnet as br0 in your domain 0,
you'll be able to communicate not only with the other VMs on that bridge, but
also with domain 0 and with the external network. If you have a DHCP server
running in your network, your VMs should succeed in getting an IP address.

Be aware that this may have unwanted security implications. You may want to
opt for routing instead of bridging, so you can set up firewalling rules in
domain 0.

Please read about the network configuration in the Xen manual. You can set up
bridging or routing for other interfaces also.

For debugging, here's what happens on bootup of a domU:
- xenstored saves the device setup in xenstore
- domU is created
- vifN.0 shows up in domain 0 and a hotplug event is triggered
- hotplug is /sbin/udev; udev looks at /etc/udev/rules.d/40-xen.rules and
  calls /etc/xen/scripts/vif-bridge online
- vif-bridge sets the vifN.0 device up and enslaves it to the bridge
- eth0 shows up in domU (hotplug event triggered)
Similar things happen for block devices, except that /etc/xen/scripts/block is
called.

It is not recommended to use ifplugd or NetworkManager for managing the
interfaces if you use bridging mode. Use routing with NAT or proxy-arp
in that case. You also need to do that if you want to send out packets
over wireless; you can't bridge Xen "ethernet" packets into 802.11 packets.


Network Troubleshooting
-----------------------
First ensure the VM server is configured correctly and can access the network.

Do not use ifplugd or NetworkManager; neither is bridge-aware.

Specify a static virtual MAC in the VM's configuration file. Random MACs can
be problematic, since with each boot of the VM it appears that some hardware
has been removed (the previous random MAC) and new hardware is present (the
new random MAC). This can cause network configuration files (which were
written for the old MAC) to not be matched up with the new virtual hardware.

In the VM's filesystem, ensure the ifcfg-eth* files are named appropriately.
For example, if you do decide to use a randomly selected MAC for the VM, the
ifcfg-eth* file must not include the MAC in its name; name it generically
("ifcfg-eth0") instead. If you use a static virtual MAC for the VM, be sure
that is reflected in the file's name.


Thread-Local Storage
--------------------
For some time now, the glibc thread library (NPTL) has used a shortcut to
access thread-local variables at a negative segment offset from the segment
selector GS instead of reading the linear address from the TCB (offset 0).
Unfortunately, this optimization has been made the default by the glibc and
gcc maintainers, as it saves one indirection. For Xen this is bad: the access
to these variables will trap, and Xen will need to use some tricks to make the
access work. It does work, but it's very slow.

SUSE Linux 9.1 and SLES 9 predate this change, and thus are not
affected. SUSE Linux 9.2 and 9.3 are affected. For SUSE Linux 10.x and SLES
10, we have disabled negative segment references in gcc and glibc, and so
these are not affected. Other non-SUSE Linux distributions may be affected.

For affected distributions, one way to work around the problem is to rename
the /lib/tls directory, so the pre-i686 version gets used, where no such
tricks are done. An example LSB-compliant init script which automates these
steps is installed at /usr/share/doc/packages/xen/boot.xen. This script
renames /lib/tls when running on Xen, and restores it when not running on Xen.
Modify this script to work with your specific distribution.
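
The manual equivalent is a simple rename (the .disabled name is only an
illustration):
   mv /lib/tls /lib/tls.disabled      (before running on Xen)
   mv /lib/tls.disabled /lib/tls      (when back on bare metal)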

Mono has a similar problem, but this has been fixed in SUSE Linux 10.1 and
SLES 10. Older or non-SUSE versions of Mono may see a performance impact.


Security
--------
Domain 0 has control over all domains. This means that care should be taken to
keep domain 0 safe; ideally, strip it down so that as little as possible runs
there, preferably with no local users except for the system administrator.
Most commands in domain 0 can only be performed as root, but this protection
scheme only has moderate security and might be defeated. If domain 0 is
compromised, all other domains are compromised as well.

To allow relocation of VMs (migration), the receiving machine listens on TCP
port 8002. You may want to put firewall rules in place in domain 0 to
restrict this port to machines you trust. Relocating VMs with sensitive data
is not a good idea in untrusted networks, since the data is not encrypted in
transit.
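
As a sketch, iptables rules in domain 0 restricting relocation to a single
trusted host (192.0.2.10 is a placeholder address) could look like:
   iptables -A INPUT -p tcp --dport 8002 -s 192.0.2.10 -j ACCEPT
   iptables -A INPUT -p tcp --dport 8002 -j DROP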

The memory protections for the domUs are effective; so far no way to break out
of a virtual machine is known. A VM is an effective jail.


Limitations
-----------
When booting, Linux reserves data structures matching the amount of RAM found.
This has the side effect that you can't dynamically grow the memory beyond
what the kernel was booted with. You can, however, trick a domU's Linux into
preparing for a larger amount of RAM by passing the mem= boot parameter, as
shown below.
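
For example, to let a domU later grow to 4 GB, boot its kernel with mem=4G;
in an xl configuration file this can be appended to the kernel command line
with the extra= option (a sketch):
   extra="mem=4G"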

The export of virtual hard disks from files in Xen can be handled via the
loopback driver (although in Xen >= 3.0.4, this can be replaced by the
"blktap" user-space driver). If you are still using loopback, it is
possible to run out of loopback devices, as by default only 64 are supported.
You can change this by inserting:
   options loop max_loop=128
into /etc/modprobe.conf.local in domain 0.


Upgrading the Host Operating System
-----------------------------------
When upgrading the host operating system from one major release to another
(for example, SLES 11 to SLES 12 or openSUSE 12.3 to openSUSE 13.1), or when
applying a service pack (for example, upgrading SLES 11 SP2 to SLES 11 SP3),
all running VMs must be shut down before the upgrade process is begun.

On SLES 11 and openSUSE 12 you are using the xm/xend toolstack.
After upgrading to SLES 12 or a newer openSUSE version, this toolstack will be
replaced with the xl toolstack. The xl toolstack does not support Managed
Domains. If you wish to continue using Managed Domains, you must switch to
libvirt and its command-line interface 'virsh'. You may also use
virt-manager as a GUI interface to libvirt. After upgrading the host, but
before you can begin using libvirt on VMs that were previously managed by
xm/xend, you must run the conversion tool /usr/sbin/xen2libvirt on all
such VMs.

For example, to convert all domains previously managed by xend:
   xen2libvirt -r /var/lib/xend/domains/

Now 'virsh list --all' will show your previously xend-managed domains
being managed by libvirt. Run 'xen2libvirt -h' to see additional options for
this tool.


Memory Ballooning in VMs
------------------------
Setting a VM's maximum memory value greater than its initial memory value
requires support for memory ballooning in the VM's operating system. Modern
SLES and openSUSE guests have this capability built in. Windows installation
media does not support memory ballooning, so you must first install the VM
without memory ballooning (maxmem equal to initial memory). After the
installation, the Virtual Machine Driver Pack (vmdp) must be installed. After
this, the VM's maxmem value may be increased. A reboot of the VM is required
for this change to take effect.
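
In an xl configuration file the two values look like this (sizes in MB; the
numbers are illustrative):
   memory=2048
   maxmem=4096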


Dom0 Memory Ballooning
----------------------
It is strongly recommended that you dedicate a fixed amount of RAM to dom0
rather than relying on dom0 auto-ballooning. Doing so will ensure your dom0
has enough resources to operate well and will improve startup times for your
VMs. The amount of RAM dedicated to dom0 should never be less than the
recommended minimum amount for running your SUSE distribution in native mode.
The actual amount of RAM needed for dom0 depends on several factors, including
how much physical RAM is on the host, the number of physical CPUs, and the
number of VMs running simultaneously, where each VM has a specific requirement
for RAM. The following examples show the syntax; add this to your grub1 or
grub2 configuration:

Grub2 Example:
   Edit /etc/default/grub and add
      GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M"
   and then run
      grub2-mkconfig -o /boot/grub2/grub.cfg

Grub1 Example:
   Edit /boot/grub/menu.lst and edit the line containing xen.gz
      kernel /boot/xen.gz dom0_mem=1024M,max:1024M

After modifying your grub configuration, you will need to edit /etc/xen/xl.conf
and set autoballoon="off". This will prevent xl from automatically adjusting
the amount of memory assigned to dom0. Reboot the host for these changes to
take effect.


Adjusting LIBXL_HOTPLUG_TIMEOUT at runtime
------------------------------------------
A domU with a large number of disks may run into the hardcoded
LIBXL_HOTPLUG_TIMEOUT limit, which is 40 seconds. This happens if the
preparation of each disk takes an unexpectedly large amount of time, so that
the total preparation time across all configured disks exceeds 40 seconds.
The hotplug script which does the preparation takes a lock before doing the
actual work. Since the hotplug scripts for all disks are spawned at nearly
the same time, each one has to wait for the lock. Due to this contention, the
total execution time of a script can easily exceed the timeout. In this case
libxl will terminate the script, because it has to assume an error condition.

Example: with 10 configured disks, each taking 3 seconds inside the critical
section, the total execution time will be 30 seconds, which is still within
the limit. With 5 additional configured disks, the total execution time
becomes 45 seconds, which would trigger the timeout.

To handle such a setup without recompiling libxl, a special key/value pair
has to be created in xenstore prior to domain creation. This can be done
either manually or at system startup. A dedicated systemd service file
exists to set the required value. To enable it, run these commands:

   /etc/systemd/system # systemctl enable xen-LIBXL_HOTPLUG_TIMEOUT.service
   /etc/systemd/system # systemctl start xen-LIBXL_HOTPLUG_TIMEOUT.service
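
To write the key manually instead (this is the same write the service
performs; xenstore contents do not survive a host reboot, which is why the
service exists):
   xenstore-write /libxl/suse/per-device-LIBXL_HOTPLUG_TIMEOUT 10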

If the value in this service file needs to be changed, a copy with
the exact same name must be created in the /etc/systemd/system directory:

   /etc/systemd/system # cat xen-LIBXL_HOTPLUG_TIMEOUT.service
   [Unit]
   Description=set global LIBXL_HOTPLUG_TIMEOUT
   ConditionPathExists=/proc/xen/capabilities

   Requires=xenstored.service
   After=xenstored.service
   Requires=xen-init-dom0.service
   After=xen-init-dom0.service
   Before=xencommons.service

   [Service]
   Type=oneshot
   RemainAfterExit=true
   ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
   ExecStart=/usr/bin/xenstore-write /libxl/suse/per-device-LIBXL_HOTPLUG_TIMEOUT 10

   [Install]
   WantedBy=multi-user.target

In this example the per-device value is set to 10 seconds.

The libxl change which handles this xenstore value also enables additional
logging if the key is found. That extra logging shows the execution time of
each script.


Troubleshooting
---------------
First try to get Linux running on bare metal before trying it with Xen.

Be sure your Xen hypervisor (xen) and VM kernels (kernel-xen) are compatible.
The hypervisor and domain 0 kernel are a matched set, and usually must be
upgraded together. Consult the online documentation for a matrix of supported
32- and 64-bit combinations.

If you have trouble early in the boot, try passing pnpacpi=off to the Linux
kernel. If you have trouble with interrupts or timers, passing lapic to Xen
may help. Xen and Linux understand similar ACPI boot parameters. Try the
options acpi=off,force,ht,noirq or acpi_skip_timer_override.

Other useful debugging options for Xen are nosmp, noreboot, mem=4096M,
sync_console, and noirqbalance (Dell). For a complete list of Xen boot options,
consult the "Xen Hypervisor Command Line Options" documentation.

If domain 0 Linux crashes on X11 startup, please try booting into runlevel 3.

1) As a first step in debugging Xen, add the following hypervisor
options to the xen.gz line in your grub configuration file. After rebooting,
the 'xl dmesg' command will produce more output for analyzing problems.

Grub2 Example:
   Edit /etc/default/grub and add
      GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all"
   and then run
      grub2-mkconfig -o /boot/grub2/grub.cfg

Grub1 Example:
   Edit /boot/grub/menu.lst and edit the line containing xen.gz
      kernel /boot/xen.gz loglvl=all guest_loglvl=all

2) With the log levels specified above and the host rebooted, more useful
information about domain 0 and running VMs can be obtained using the
'xl dmesg' and 'xl debug-keys' commands. For example, from the command line
run:
   xl debug-keys h
and then run:
   xl dmesg
Note that the end of the output from 'xl dmesg' includes help on a series of
keys that may be passed to 'xl debug-keys'. For example, passing the letter
'q' to 'xl debug-keys' will "dump domain (and guest debug) info":
   xl debug-keys q
Now you can again run 'xl dmesg' to see the domain and guest debug info.

3) Sometimes it is useful to attach a serial terminal and direct Xen to send
its output not only to the screen, but also to that terminal. First attach
a serial cable from the serial port on the server to a second machine's
serial port. The second machine can run minicom (or some other program that
can be set up to read from the serial port). Do the following to prepare Xen
to send its output over this serial line.

Grub2 Example:
   Edit /etc/default/grub and add
      GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all console=com1 com1=115200,8n1"
   Also append additional serial flags to the option below so that it reads
      GRUB_CMDLINE_LINUX_DEFAULT="<pre-existing flags> console=ttyS0,115200"
   where <pre-existing flags> are those options already present, and then run
      grub2-mkconfig -o /boot/grub2/grub.cfg

Grub1 Example:
   Edit the /boot/grub/menu.lst file and add the following to the Xen entry:
      kernel /boot/xen.gz loglvl=all guest_loglvl=all console=com1 com1=115200,8n1
      module /boot/vmlinuz-xen <pre-existing flags> console=ttyS0,115200

Once the hardware and software are configured correctly, the server is rebooted
and its output should appear on the other terminal as the server boots up.

4) To further debug Xen or domain 0 Linux crashes or hangs, it may be useful to
use the debug-enabled hypervisor, and/or to prevent automatic rebooting.

Grub2 Example:
   Edit /etc/default/grub and add
      GRUB_CMDLINE_XEN_DEFAULT="noreboot loglvl=all guest_loglvl=all"
   Edit /boot/grub2/grub.cfg and look for lines like
      multiboot /boot/xen-<version>.gz ...
   and replace them with
      multiboot /boot/xen-dbg-<version>.gz ...
   Replace <version> with the appropriate version string contained in the
   filename. Note that running grub2-mkconfig -o /boot/grub2/grub.cfg will
   overwrite all manual changes made to grub.cfg.

Grub1 Example:
   Edit your menu.lst configuration from something like this:
      kernel (hd0,5)/xen.gz
   to something like this:
      kernel (hd0,5)/xen-dbg.gz noreboot loglvl=all guest_loglvl=all

All hypervisor options require a reboot to take effect. After rebooting, the
Xen hypervisor will write any error messages to the log file (viewable with
the "xl dmesg" command).

If problems persist, check whether a newer version is available. Well-tested
versions are shipped with SUSE and via YaST Online Update.


Resources
---------
https://www.suse.com/documentation/sles11/singlehtml/book_xen/book_xen.html
http://doc.opensuse.org/products/draft/SLES/SLES-xen_sd_draft/cha.xen.basics.html


Feedback
--------
If you have remarks about, problems with, ideas for, or praise for Xen,
please report them to the xen-devel list:
   xen-devel@lists.xen.org
If you find issues with the packaging or setup done by SUSE, please report
them through Bugzilla:
   https://bugzilla.suse.com


ENJOY!
Your SUSE Team.