forked from pool/xen
xen/snapshot-xend.patch
Charles Arnold f196fa2c00

- bnc#573376 - OS reboots while creating a DomU with Windows CD
- bnc#573881 - /usr/lib64/xen/bin/qemu-dm is a broken link

- Update to changeset 20840 RC1+ for sle11-sp1 beta3. 

- bnc#569581 - SuSEfirewall2 should handle rules.  Disable
  handle_iptable in vif-bridge script
  vif-bridge-no-iptables.patch

- bnc#569577 - Rename /etc/modprobe.d/xen_pvdrivers, installed by
  xen-kmp-default, to ../xen_pvdrivers.conf
- bnc#536176 - Xen panic when using iommu after updating hypervisor 
  19380-vtd-feature-check.patch

- bnc#530959 - virsh autostart doesn't work
  Fixing this libvirt bug also required fixing xend's op_pincpu
  method with upstream c/s 19580
  19580-xend-pincpu.patch

- bnc#534146 - Xen: Fix SRAT check for discontig memory
  20120-x86-srat-check-discontig.patch

- bnc#491081 - Xen time goes backwards on x3950M2
- disable module build for ec2 correctly to fix build (at the point
  where the suse_kernel_module_package macro runs)
- Upstream bugfixes from Jan.
  19896-32on64-arg-xlat.patch
  19960-show-page-walk.patch
  19945-pae-xen-l2-entries.patch
  19953-x86-fsgs-base.patch
  19931-gnttblop-preempt.patch
  19885-kexec-gdt-switch.patch
  19894-shadow-resync-fastpath-race.patch
- hyperv shim patches no longer require being applied conditionally

- bnc#520234 - npiv does not work with XEN in SLE11
  Update block-npiv
- bnc#496033 - Support for creating NPIV ports without starting vm
  block-npiv-common.sh
  block-npiv-vport
  Update block-npiv
- bnc#500043 - Fix access to NPIV disk from HVM vm
  Update xen-qemu-iscsi-fix.patch

- Don't build the KMPs for the ec2 kernel. 

- Upstream fixes from Jan Beulich
  19606-hvm-x2apic-cpuid.patch
  19734-vtd-gcmd-submit.patch
  19752-vtd-srtp-sirtp-flush.patch
  19753-vtd-reg-write-lock.patch
  19764-hvm-domain-lock-leak.patch
  19765-hvm-post-restore-vcpu-state.patch
  19767-hvm-port80-inhibit.patch
  19768-x86-dom0-stack-dump.patch
  19770-x86-amd-s3-resume.patch
  19801-x86-p2m-2mb-hap-only.patch
  19815-vtd-kill-correct-timer.patch
- Patch from Jan Beulich to aid in debugging bnc#509911
  gnttblop-preempt.patch

- bnc#515220 - qemu-img-xen snapshot Segmentation fault
  qemu-img-snapshot.patch update
- Upstream fixes from Jan Beulich.
  19474-32on64-S3.patch
  19490-log-dirty.patch
  19492-sched-timer-non-idle.patch
  19493-hvm-io-intercept-count.patch
  19505-x86_64-clear-cr1.patch
  19519-domctl-deadlock.patch
  19523-32on64-restore-p2m.patch
  19555-ept-live-migration.patch
  19557-amd-iommu-ioapic-remap.patch
  19560-x86-flush-tlb-empty-mask.patch
  19571-x86-numa-shift.patch
  19578-hvm-load-ldt-first.patch
  19592-vmx-exit-reason-perfc-size.patch
  19595-hvm-set-callback-irq-level.patch
  19597-x86-ioport-quirks-BL2xx.patch
  19602-vtd-multi-ioapic-remap.patch
  19631-x86-frametable-map.patch
  19653-hvm-vcpuid-range-checks.patch

- bnc#382112 - Caps lock not being passed to vm correctly.
  capslock_enable.patch

- bnc#506833 - Use pidof in xend and xendomains init scripts

- bnc#484778 - XEN: PXE boot of FV domU using non-Realtek NIC fails
  enable_more_nic_pxe.patch

- bnc#390961 - cross-migration of a VM causes it to become
  unresponsive (remains paused after migration)
  cross-migrate.patch

- Patches taken to fix the xenctx tool. The fixed version of this
  tool is needed to debug bnc#502735. 
  18962-xc_translate_foreign_address.patch
  18963-xenctx.patch
  19168-hvm-domctl.patch
  19169-remove-declare-bitmap.patch
  19170-libxc.patch
  19171-xenctx.patch
  19450-xc_translate_foreign_address.patch

 

- bnc#503782 - Using converted vmdk image does not work
  ioemu-tapdisk-compat-QEMU_IMG.patch


- bnc#474738 - adding CD drive to VM guest makes it unbootable.
  parse_boot_disk.patch
- bnc#495300 - L3: Xen unable to PXE boot Windows based DomU's
  18545-hvm-gpxe-rom.patch, 18548-hvm-gpxe-rom.patch 

- bnc#459836 - Fix rtc_timeoffset when localtime=0
  xend-timeoffset.patch

- bnc#497440 - xmclone.sh script incorrectly handles networking for
  SLE11.

- bnc#477890 - VM becomes unresponsive after applying snapshot

- bnc#494892 - Update xend-domain-lock.patch to flock the lock
               file.
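
  A minimal sketch of flock-based locking along the lines this entry
  describes (the lock-file path argument and helper names are illustrative
  assumptions, not the actual xend-domain-lock.patch):

    import fcntl
    import os

    def acquire_domain_lock(lock_file):
        # Open (or create) the per-domain lock file and take an
        # exclusive, non-blocking flock on it; IOError is raised
        # if another xend instance already holds the lock.
        fd = os.open(lock_file, os.O_RDWR | os.O_CREAT)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            os.close(fd)
            raise
        return fd

    def release_domain_lock(fd):
        # Drop the lock and close the descriptor.
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)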

- bnc#439639 - SVVP Test 273 "System - Sleep Stress With IO" fails
  Turned off s3/s4 sleep states for HVM guests.

- bnc#468169 - fix domUloader to umount the mounted device mapper target
  in dom0 when installing a sles10 guest with disk = /dev/disk/by_path

- bnc#488490 - domUloader can't handle block device names with ':'
- bnc#486244 - vms fail to start after reboot when using qcow2

- bnc#490835 - VTd errata on Cantiga chipset
  19230-vtd-mobile-series4-chipset.patch

- bnc#482515 - Missing dependency in xen.spec 

- Additional upstream bug fix patches from Jan Beulich.
  19132-page-list-mfn-links.patch
  19134-fold-shadow-page-info.patch
  19135-next-shadow-mfn.patch
  19136-page-info-rearrange.patch
  19156-page-list-simplify.patch
  19161-pv-ldt-handling.patch
  19162-page-info-no-cpumask.patch
  19216-msix-fixmap.patch
  19268-page-get-owner.patch
  19293-vcpu-migration-delay.patch
  19391-vpmu-double-free.patch
  19415-vtd-dom0-s3.patch

- Imported numerous upstream bug fix patches.
  19083-memory-is-conventional-fix.patch
  19097-M2P-table-1G-page-mappings.patch
  19137-lock-domain-page-list.patch
  19140-init-heap-pages-max-order.patch
  19167-recover-pat-value-s3-resume.patch
  19172-irq-to-vector.patch
  19173-pci-passthrough-fix.patch
  19176-free-irq-shutdown-fix.patch
  19190-pciif-typo-fix.patch
  19204-allow-old-images-restore.patch
  19232-xend-exception-fix.patch
  19239-ioapic-s3-suspend-fix.patch
  19240-ioapic-s3-suspend-fix.patch
  19242-xenstored-use-after-free-fix.patch
  19259-ignore-shutdown-deferrals.patch
  19266-19365-event-channel-access-fix.patch
  19275-19296-schedular-deadlock-fixes.patch
  19276-cpu-selection-allocation-fix.patch
  19302-passthrough-pt-irq-time-out.patch
  19313-hvmemul-read-msr-fix.patch
  19317-vram-tracking-fix.patch
  19335-apic-s3-resume-error-fix.patch
  19353-amd-migration-fix.patch
  19354-amd-migration-fix.patch
  19371-in-sync-L1s-writable.patch
  19372-2-on-3-shadow-mode-fix.patch
  19377-xend-vnclisten.patch
  19400-ensure-ltr-execute.patch
  19410-virt-to-maddr-fix.patch

- bnc#483565 - Fix block-iscsi script.
  Updated block-iscsi and xen-domUloader.diff

- bnc#465814 - Mouse stops responding when wheel is used in Windows
  VM.
  mouse-wheel-roll.patch (James Song)
- bnc#470704 - save/restore of windows VM throws off the mouse 
  tracking. 
  usb-save-restore.patch (James Song)

- bnc#436629 - Use global vnc-listen setting specified in xend
  configuration file.
  xend-vnclisten.patch
- bnc#482623 - Fix pygrub to append user-supplied 'extra' args
  to kernel args.
  19234_pygrub.patch

- bnc#481161 upgrade - sles10sp2 to sles11 upgrade keeps
  xen-tools-ioemu

OBS-URL: https://build.opensuse.org/package/show/Virtualization/xen?expand=0&rev=28
2010-01-29 20:39:04 +00:00


Index: xen-4.0.0-testing/tools/python/xen/xend/image.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xend/image.py
+++ xen-4.0.0-testing/tools/python/xen/xend/image.py
@@ -490,7 +490,7 @@ class ImageHandler:
domains.domains_lock.acquire()
- def signalDeviceModel(self, cmd, ret, par = None):
+ def signalDeviceModel(self, cmd, ret, par = None, timeout = True):
if self.device_model is None:
return
# Signal the device model to for action
@@ -527,10 +527,17 @@ class ImageHandler:
while state != ret:
state = xstransact.Read("/local/domain/0/device-model/%i/state"
% self.vm.getDomid())
+ if state == 'error':
+ msg = ("The device model returned an error: %s"
+ % xstransact.Read("/local/domain/0/device-model/%i/error"
+ % self.vm.getDomid()))
+ raise VmError(msg)
+
time.sleep(0.1)
- count += 1
- if count > 100:
- raise VmError('Timed out waiting for device model action')
+ if timeout:
+ count += 1
+ if count > 100:
+ raise VmError('Timed out waiting for device model action')
#resotre orig state
xstransact.Store("/local/domain/0/device-model/%i"
@@ -555,6 +562,10 @@ class ImageHandler:
except:
pass
+ def snapshotDeviceModel(self, name):
+ # Signal the device model to perform snapshot operation
+ self.signalDeviceModel('snapshot', 'paused', name, False)
+
def recreate(self):
if self.device_model is None:
return
Index: xen-4.0.0-testing/tools/python/xen/xend/server/blkif.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xend/server/blkif.py
+++ xen-4.0.0-testing/tools/python/xen/xend/server/blkif.py
@@ -88,6 +88,9 @@ class BlkifController(DevController):
if bootable != None:
back['bootable'] = str(bootable)
+ if 'snapshotname' in self.vm.info:
+ back['snapshot'] = self.vm.info['snapshotname']
+
if security.on() == xsconstants.XS_POLICY_USE:
self.do_access_control(config, uname)
Index: xen-4.0.0-testing/tools/python/xen/xend/server/SrvDomain.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xend/server/SrvDomain.py
+++ xen-4.0.0-testing/tools/python/xen/xend/server/SrvDomain.py
@@ -95,6 +95,31 @@ class SrvDomain(SrvDir):
def do_save(self, _, req):
return self.xd.domain_save(self.dom.domid, req.args['file'][0])
+ def op_snapshot_create(self, op, req):
+ self.acceptCommand(req)
+ return req.threadRequest(self.do_snapshot_create, op, req)
+
+ def do_snapshot_create(self, _, req):
+ return self.xd.domain_snapshot_create(self.dom.domid, req.args['name'][0])
+
+ def op_snapshot_list(self, op, req):
+ self.acceptCommand(req)
+ return self.xd.domain_snapshot_list(self.dom.getName())
+
+ def op_snapshot_apply(self, op, req):
+ self.acceptCommand(req)
+ return req.threadRequest(self.do_snapshot_apply, op, req)
+
+ def do_snapshot_apply(self, _, req):
+ return self.xd.domain_snapshot_apply(self.dom.getName(), req.args['name'][0])
+
+ def op_snapshot_delete(self, op, req):
+ self.acceptCommand(req)
+ return req.threadRequest(self.do_snapshot_delete, op, req)
+
+ def do_snapshot_delete(self, _, req):
+ return self.xd.domain_snapshot_delete(self.dom.getName(), req.args['name'][0])
+
def op_dump(self, op, req):
self.acceptCommand(req)
return req.threadRequest(self.do_dump, op, req)
@@ -245,7 +270,7 @@ class SrvDomain(SrvDir):
def render_GET(self, req):
op = req.args.get('op')
- if op and op[0] in ['vcpuinfo']:
+ if op and op[0] in ['vcpuinfo', 'snapshot_list']:
return self.perform(req)
#
Index: xen-4.0.0-testing/tools/python/xen/xend/XendCheckpoint.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xend/XendCheckpoint.py
+++ xen-4.0.0-testing/tools/python/xen/xend/XendCheckpoint.py
@@ -65,7 +65,7 @@ def insert_after(list, pred, value):
return
-def save(fd, dominfo, network, live, dst, checkpoint=False, node=-1):
+def save(fd, dominfo, network, live, dst, checkpoint=False, node=-1, name=None, diskonly=False):
from xen.xend import XendDomain
try:
@@ -112,52 +112,61 @@ def save(fd, dominfo, network, live, dst
image_cfg = dominfo.info.get('image', {})
hvm = dominfo.info.is_hvm()
- # xc_save takes three customization parameters: maxit, max_f, and
- # flags the last controls whether or not save is 'live', while the
- # first two further customize behaviour when 'live' save is
- # enabled. Passing "0" simply uses the defaults compiled into
- # libxenguest; see the comments and/or code in xc_linux_save() for
- # more information.
- cmd = [xen.util.auxbin.pathTo(XC_SAVE), str(fd),
- str(dominfo.getDomid()), "0", "0",
- str(int(live) | (int(hvm) << 2)) ]
- log.debug("[xc_save]: %s", string.join(cmd))
-
- def saveInputHandler(line, tochild):
- log.debug("In saveInputHandler %s", line)
- if line == "suspend":
- log.debug("Suspending %d ...", dominfo.getDomid())
- dominfo.shutdown('suspend')
- dominfo.waitForSuspend()
- if line in ('suspend', 'suspended'):
- dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP2,
- domain_name)
- log.info("Domain %d suspended.", dominfo.getDomid())
- dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP3,
- domain_name)
- if hvm:
- dominfo.image.saveDeviceModel()
-
- if line == "suspend":
- tochild.write("done\n")
- tochild.flush()
- log.debug('Written done')
-
- forkHelper(cmd, fd, saveInputHandler, False)
-
- # put qemu device model state
- if os.path.exists("/var/lib/xen/qemu-save.%d" % dominfo.getDomid()):
- write_exact(fd, QEMU_SIGNATURE, "could not write qemu signature")
- qemu_fd = os.open("/var/lib/xen/qemu-save.%d" % dominfo.getDomid(),
- os.O_RDONLY)
- while True:
- buf = os.read(qemu_fd, dm_batch)
- if len(buf):
- write_exact(fd, buf, "could not write device model state")
- else:
- break
- os.close(qemu_fd)
- os.remove("/var/lib/xen/qemu-save.%d" % dominfo.getDomid())
+ if not diskonly:
+ # xc_save takes three customization parameters: maxit, max_f, and
+ # flags the last controls whether or not save is 'live', while the
+ # first two further customize behaviour when 'live' save is
+ # enabled. Passing "0" simply uses the defaults compiled into
+ # libxenguest; see the comments and/or code in xc_linux_save() for
+ # more information.
+ cmd = [xen.util.auxbin.pathTo(XC_SAVE), str(fd),
+ str(dominfo.getDomid()), "0", "0",
+ str(int(live) | (int(hvm) << 2)) ]
+ log.debug("[xc_save]: %s", string.join(cmd))
+
+ def saveInputHandler(line, tochild):
+ log.debug("In saveInputHandler %s", line)
+ if line == "suspend":
+ log.debug("Suspending %d ...", dominfo.getDomid())
+ dominfo.shutdown('suspend')
+ dominfo.waitForSuspend()
+ if line in ('suspend', 'suspended'):
+ dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP2,
+ domain_name)
+ log.info("Domain %d suspended.", dominfo.getDomid())
+ dominfo.migrateDevices(network, dst, DEV_MIGRATE_STEP3,
+ domain_name)
+ if hvm:
+ dominfo.image.saveDeviceModel()
+ if name:
+ dominfo.image.resumeDeviceModel()
+
+ if line == "suspend":
+ tochild.write("done\n")
+ tochild.flush()
+ log.debug('Written done')
+
+ forkHelper(cmd, fd, saveInputHandler, False)
+
+ # put qemu device model state
+ if os.path.exists("/var/lib/xen/qemu-save.%d" % dominfo.getDomid()):
+ write_exact(fd, QEMU_SIGNATURE, "could not write qemu signature")
+ qemu_fd = os.open("/var/lib/xen/qemu-save.%d" % dominfo.getDomid(),
+ os.O_RDONLY)
+ while True:
+ buf = os.read(qemu_fd, dm_batch)
+ if len(buf):
+ write_exact(fd, buf, "could not write device model state")
+ else:
+ break
+ os.close(qemu_fd)
+ os.remove("/var/lib/xen/qemu-save.%d" % dominfo.getDomid())
+ else:
+ dominfo.shutdown('suspend')
+ dominfo.waitForShutdown()
+
+ if name:
+ dominfo.image.snapshotDeviceModel(name)
if checkpoint:
dominfo.resumeDomain()
@@ -221,6 +230,71 @@ def restore(xd, fd, dominfo = None, paus
if othervm is not None and othervm.domid is not None:
raise VmError("Domain '%s' already exists with ID '%d'" % (domconfig["name_label"], othervm.domid))
+ def contains_state(fd):
+ try:
+ cur = os.lseek(fd, 0, 1)
+ end = os.lseek(fd, 0, 2)
+
+ ret = False
+ if cur < end:
+ ret = True
+
+ os.lseek(fd, cur, 0)
+ return ret
+ except OSError, (errno, strerr):
+ # lseek failed <==> socket <==> state
+ return True
+
+ #
+ # We shouldn't hold the domains_lock over a waitForDevices
+ # As this function sometime gets called holding this lock,
+ # we must release it and re-acquire it appropriately
+ #
+ def wait_devs(dominfo):
+ from xen.xend import XendDomain
+
+ lock = True;
+ try:
+ XendDomain.instance().domains_lock.release()
+ except:
+ lock = False;
+
+ try:
+ dominfo.waitForDevices() # Wait for backends to set up
+ except Exception, exn:
+ log.exception(exn)
+ if lock:
+ XendDomain.instance().domains_lock.acquire()
+ raise
+
+ if lock:
+ XendDomain.instance().domains_lock.acquire()
+
+
+ if not contains_state(fd):
+ # Disk-only snapshot. Just start the vm from config (which should
+ # contain snapshotname.
+ if dominfo:
+ log.debug("### starting domain directly through XendDomainInfo")
+ dominfo.start()
+ else:
+ # Warning! Do we need to call into XendDomain to get domain
+ # lock? Similar to the xd.restore_() call below?
+ # We'll try XendDomain.domain_create()
+ log.debug("### starting domain through XendDomain.create()")
+ dominfo = xd.domain_create(vmconfig)
+
+ try:
+ wait_devs(dominfo)
+ except:
+ dominfo.destroy()
+ raise
+
+ dominfo.unpause()
+
+ # Done if disk only snapshot
+ return dominfo
+
if dominfo:
dominfo.resume()
else:
@@ -329,24 +403,7 @@ def restore(xd, fd, dominfo = None, paus
dominfo.completeRestore(handler.store_mfn, handler.console_mfn)
- #
- # We shouldn't hold the domains_lock over a waitForDevices
- # As this function sometime gets called holding this lock,
- # we must release it and re-acquire it appropriately
- #
- from xen.xend import XendDomain
-
- lock = True;
- try:
- XendDomain.instance().domains_lock.release()
- except:
- lock = False;
-
- try:
- dominfo.waitForDevices() # Wait for backends to set up
- finally:
- if lock:
- XendDomain.instance().domains_lock.acquire()
+ wait_devs(dominfo)
if not paused:
dominfo.unpause()
Index: xen-4.0.0-testing/tools/python/xen/xend/XendConfig.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xend/XendConfig.py
+++ xen-4.0.0-testing/tools/python/xen/xend/XendConfig.py
@@ -233,6 +233,7 @@ XENAPI_CFG_TYPES = {
's3_integrity' : int,
'superpages' : int,
'memory_sharing': int,
+ 'snapshotname': str,
}
# List of legacy configuration keys that have no equivalent in the
Index: xen-4.0.0-testing/tools/python/xen/xend/XendDomain.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xend/XendDomain.py
+++ xen-4.0.0-testing/tools/python/xen/xend/XendDomain.py
@@ -53,6 +53,7 @@ from xen.xend.xenstore.xstransact import
from xen.xend.xenstore.xswatch import xswatch
from xen.util import mkdir, rwlock
from xen.xend import uuid
+from xen.xend import sxp
xc = xen.lowlevel.xc.xc()
xoptions = XendOptions.instance()
@@ -1564,6 +1565,187 @@ class XendDomain:
else:
log.debug("error: Domain is not running!")
+ def domain_snapshot_create(self, domid, name, diskonly=False):
+ """Snapshot a running domain.
+
+ @param domid: Domain ID or Name
+ @type domid: int or string.
+ @param name: Snapshot name
+ @type dst: string
+ @param diskonly: Snapshot disk only - exclude machine state
+ @type dst: bool
+ @rtype: None
+ @raise XendError: Failed to snapshot domain
+ @raise XendInvalidDomain: Domain is not valid
+ """
+ try:
+ dominfo = self.domain_lookup_nr(domid)
+ if not dominfo:
+ raise XendInvalidDomain(str(domid))
+
+ snap_file = os.path.join(xoptions.get_xend_domains_path(),
+ dominfo.get_uuid(), "snapshots", name)
+
+ if os.access(snap_file, os.F_OK):
+ raise XendError("Snapshot:%s exist for domain %s\n" % (name, str(domid)))
+
+ if dominfo.getDomid() == DOM0_ID:
+ raise XendError("Cannot snapshot privileged domain %s" % str(domid))
+ if dominfo._stateGet() != DOM_STATE_RUNNING:
+ raise VMBadState("Domain is not running",
+ POWER_STATE_NAMES[DOM_STATE_RUNNING],
+ POWER_STATE_NAMES[dominfo._stateGet()])
+
+ if not os.path.exists(self._managed_config_path(dominfo.get_uuid())):
+ raise XendError("Domain is not managed by Xend lifecycle " +
+ "support.")
+
+ # Check if all images support snapshots
+ for dev_type, dev_info in dominfo.info.all_devices_sxpr():
+ mode = sxp.child_value(dev_info, 'mode')
+ if mode == 'r':
+ continue;
+ if dev_type == 'vbd':
+ raise XendError("All writable images need to use the " +
+ "tap:qcow2 protocol for snapshot support")
+ if dev_type == 'tap':
+ # Fetch the protocol name from tap:xyz:filename
+ type = sxp.child_value(dev_info, 'uname')
+ type = type.split(':')[1]
+ if type != 'qcow2':
+ raise XendError("All writable images need to use the " +
+ "tap:qcow2 protocol for snapshot support")
+
+ snap_path = os.path.join(xoptions.get_xend_domains_path(),
+ dominfo.get_uuid(), "snapshots")
+ mkdir.parents(snap_path, stat.S_IRWXU)
+ snap_file = os.path.join(snap_path, name)
+
+
+ oflags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
+ if hasattr(os, "O_LARGEFILE"):
+ oflags |= os.O_LARGEFILE
+ fd = os.open(snap_file, oflags)
+ try:
+ XendCheckpoint.save(fd, dominfo, False, False, snap_file,
+ True, name=name, diskonly=diskonly)
+ except Exception, e:
+ os.close(fd)
+ os.unlink(snap_file)
+ raise e
+ os.close(fd)
+ except OSError, ex:
+ raise XendError("can't write guest state file %s: %s" %
+ (snap_file, ex[1]))
+
+ def domain_snapshot_list(self, domid):
+ """List available snapshots for a domain.
+
+ @param domid: Domain ID or Name
+ @type domid: int or string.
+ @rtype: list of snapshot names
+ @raise XendInvalidDomain: Domain is not valid
+ """
+ try:
+ dominfo = self.domain_lookup_nr(domid)
+ if not dominfo:
+ raise XendInvalidDomain(str(domid))
+
+ snap_path = os.path.join(xoptions.get_xend_domains_path(),
+ dominfo.get_uuid(), "snapshots")
+
+ if not os.access(snap_path, os.R_OK):
+ return []
+
+ return os.listdir(snap_path)
+
+ except:
+ return []
+
+ def domain_snapshot_apply(self, domid, name):
+ """Start a domain from snapshot
+
+ @param domid: Domain ID or Name
+ @type domid: int or string.
+ @param name: Snapshot name
+ @type dst: string
+ @rtype: None
+ @raise XendError: Failed to apply snapshot
+ @raise XendInvalidDomain: Domain is not valid
+ """
+ try:
+ dominfo = self.domain_lookup_nr(domid)
+ if not dominfo:
+ log.debug("## no dominfo")
+ raise XendInvalidDomain(str(domid))
+
+ if dominfo.getDomid() == DOM0_ID:
+ raise XendError("Cannot apply snapshots to privileged domain %s" % str(domid))
+ if dominfo._stateGet() != DOM_STATE_HALTED:
+ raise VMBadState("Domain is not halted",
+ POWER_STATE_NAMES[DOM_STATE_HALTED],
+ POWER_STATE_NAMES[dominfo._stateGet()])
+
+ snap_file = os.path.join(xoptions.get_xend_domains_path(),
+ dominfo.get_uuid(), "snapshots", name)
+ if not os.access(snap_file, os.R_OK):
+ raise XendError("Unable to access snapshot %s for domain %s" %
+ (name, str(domid)))
+
+ oflags = os.O_RDONLY
+ if hasattr(os, "O_LARGEFILE"):
+ oflags |= os.O_LARGEFILE
+ fd = os.open(snap_file, oflags)
+ try:
+ self.domain_restore_fd(fd)
+ finally:
+ os.close(fd)
+ except OSError, ex:
+ raise XendError("Unable to read snapshot file file %s: %s" %
+ (snap_file, ex[1]))
+
+ def domain_snapshot_delete(self, domid, name):
+ """Delete domain snapshot
+
+ @param domid: Domain ID or Name
+ @type domid: int or string.
+ @param name: Snapshot name
+ @type domid: string
+ @rtype: None
+ @raise XendInvalidDomain: Domain is not valid
+ """
+ dominfo = self.domain_lookup_nr(domid)
+ if not dominfo:
+ raise XendInvalidDomain(str(domid))
+
+ snap_file = os.path.join(xoptions.get_xend_domains_path(),
+ dominfo.get_uuid(), "snapshots", name)
+
+ if not os.access(snap_file, os.F_OK):
+ raise XendError("Snapshot %s does not exist for domain %s" %
+ (name, str(domid)))
+
+ # Need to "remove" snapshot from qcow2 image file.
+ # For running domains, this is left to ioemu. For stopped domains
+ # we must invoke qemu-img for all devices ourselves
+ if dominfo._stateGet() != DOM_STATE_HALTED:
+ dominfo.image.signalDeviceModel("snapshot-delete",
+ "snapshot-deleted", name)
+ else:
+ for dev_type, dev_info in dominfo.info.all_devices_sxpr():
+ if dev_type != 'tap':
+ continue
+
+ # Fetch the filename and strip off tap:xyz:
+ image_file = sxp.child_value(dev_info, 'uname')
+ image_file = image_file.split(':')[2]
+
+ os.system("qemu-img-xen snapshot -d %s %s" %
+ (name, image_file));
+
+
+ os.unlink(snap_file)
+
def domain_pincpu(self, domid, vcpu, cpumap):
"""Set which cpus vcpu can use
Index: xen-4.0.0-testing/tools/python/xen/xm/main.py
===================================================================
--- xen-4.0.0-testing.orig/tools/python/xen/xm/main.py
+++ xen-4.0.0-testing/tools/python/xen/xm/main.py
@@ -122,6 +122,14 @@ SUBCOMMAND_HELP = {
'Restore a domain from a saved state.'),
'save' : ('[-c|-f] <Domain> <CheckpointFile>',
'Save a domain state to restore later.'),
+ 'snapshot-create' : ('[-d] <Domain> <SnapshotName>',
+ 'Snapshot a running domain.'),
+ 'snapshot-list' : ('<Domain>',
+ 'List available snapshots for a domain.'),
+ 'snapshot-apply' : ('<Domain> <SnapshotName>',
+ 'Apply previous snapshot to domain.'),
+ 'snapshot-delete' : ('<Domain> <SnapshotName>',
+ 'Delete snapshot of domain.'),
'shutdown' : ('<Domain> [-waRH]', 'Shutdown a domain.'),
'top' : ('', 'Monitor a host and the domains in real time.'),
'unpause' : ('<Domain>', 'Unpause a paused domain.'),
@@ -316,6 +324,9 @@ SUBCOMMAND_OPTIONS = {
('-c', '--checkpoint', 'Leave domain running after creating snapshot'),
('-f', '--force', 'Force to overwrite exist file'),
),
+ 'snapshot-create': (
+ ('-d', '--diskonly', 'Perform disk only snapshot of domain'),
+ ),
'restore': (
('-p', '--paused', 'Do not unpause domain after restoring it'),
),
@@ -362,6 +373,10 @@ common_commands = [
"restore",
"resume",
"save",
+ "snapshot-create",
+ "snapshot-list",
+ "snapshot-apply",
+ "snapshot-delete",
"shell",
"shutdown",
"start",
@@ -395,6 +410,10 @@ domain_commands = [
"restore",
"resume",
"save",
+ "snapshot-create",
+ "snapshot-list",
+ "snapshot-apply",
+ "snapshot-delete",
"shutdown",
"start",
"suspend",
@@ -815,6 +834,62 @@ def xm_event_monitor(args):
#
#########################################################################
+def xm_snapshot_create(args):
+
+ arg_check(args, "snapshot-create", 2, 3)
+
+ try:
+ (options, params) = getopt.gnu_getopt(args, 'd', ['diskonly'])
+ except getopt.GetoptError, opterr:
+ err(opterr)
+ sys.exit(1)
+
+ diskonly = False
+ for (k, v) in options:
+ if k in ['-d', '--diskonly']:
+ diskonly = True
+
+ if len(params) != 2:
+ err("Wrong number of parameters")
+ usage('snapshot-create')
+
+ if serverType == SERVER_XEN_API:
+ server.xenapi.VM.snapshot_create(get_single_vm(params[0]), params[1], diskonly)
+ else:
+ server.xend.domain.snapshot_create(params[0], params[1], diskonly)
+
+def xm_snapshot_list(args):
+ arg_check(args, "snapshot-list", 1, 2)
+
+ snapshots = None
+ if serverType == SERVER_XEN_API:
+ snapshots = server.xenapi.VM.snapshot_list(get_single_vm(args[0]))
+ else:
+ snapshots = server.xend.domain.snapshot_list(args[0])
+
+ if snapshots:
+ print "Available snapshots for domain %s" % args[0]
+ for snapshot in snapshots:
+ print " %s" % snapshot
+ else:
+ print "No snapshot available for domain %s" % args[0]
+
+def xm_snapshot_apply(args):
+ arg_check(args, "snapshot-apply", 2, 3)
+
+ if serverType == SERVER_XEN_API:
+ server.xenapi.VM.snapshot_apply(get_single_vm(args[0]), args[1])
+ else:
+ server.xend.domain.snapshot_apply(args[0], args[1])
+
+def xm_snapshot_delete(args):
+ arg_check(args, "snapshot-delete", 2, 3)
+
+ if serverType == SERVER_XEN_API:
+ server.xenapi.VM.snapshot_delete(get_single_vm(args[0]), args[1])
+ else:
+ server.xend.domain.snapshot_delete(args[0], args[1])
+
def xm_save(args):
arg_check(args, "save", 2, 4)
@@ -3467,6 +3542,10 @@ commands = {
"restore": xm_restore,
"resume": xm_resume,
"save": xm_save,
+ "snapshot-create": xm_snapshot_create,
+ "snapshot-list": xm_snapshot_list,
+ "snapshot-apply": xm_snapshot_apply,
+ "snapshot-delete": xm_snapshot_delete,
"shutdown": xm_shutdown,
"start": xm_start,
"sysrq": xm_sysrq,