virt-manager/virtinst-refresh_before_fetch_pool.patch
Charles Arnold 92d0e93e9f - bnc#888251 - SLES 12 Xen PV guest fails to install using
  network NFS install method
  virtinst-nfs-install-sanitize.patch

- bnc#887868 - libvirt: shouldn't detect pool's status while
  connecting to hypervisor 
  virtinst-refresh_before_fetch_pool.patch (Chun Yan Liu)

- bnc#888173 - KVM: Unable to install: no console output from
  virt-install
  virtman-add-s390x-arch-support.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization/virt-manager?expand=0&rev=190
2014-07-24 19:44:55 +00:00


Refresh pool status before fetch_pools.
Currently, when connecting to a hypervisor, a pool may still be
listed as active even though its target path has already been
deleted (or the pool is broken for some other reason), because
libvirtd has not refreshed its status yet. In that case fetch_pools
fails, the "connecting to hypervisor" step reports an error and
exits, and the whole connection attempt fails.
With this patch, pool status is always refreshed before fetching
pools, so the status libvirtd reports matches reality and stale
status can no longer break the hypervisor connection.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Index: virt-manager-1.0.1/virtinst/pollhelpers.py
===================================================================
--- virt-manager-1.0.1.orig/virtinst/pollhelpers.py
+++ virt-manager-1.0.1/virtinst/pollhelpers.py
@@ -138,6 +138,19 @@ def fetch_pools(backend, origmap, build_
 
     if backend.check_support(
         backend.SUPPORT_CONN_LISTALLSTORAGEPOOLS):
+
+        # Refresh pools before building the poll helper. A pool can
+        # still be marked 'active' although its target path no longer
+        # exists (or it is broken for some other reason) if libvirtd
+        # has not refreshed its status yet; refreshing here updates
+        # the status and marks such a pool 'inactive'.
+        objs = backend.listAllStoragePools()
+        for obj in objs:
+            try:
+                obj.refresh(0)
+            except Exception:
+                pass
+
         return _new_poll_helper(origmap, name,
                                 backend.listAllStoragePools,
                                 "UUIDString", build_func)
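
The refresh-before-fetch pattern above can be sketched outside virt-manager. This is a minimal, self-contained illustration: the `StubPool` class and its stale `active` flag are invented stand-ins for a libvirt storage pool object, not part of the libvirt API, but the control flow (refresh every pool, swallow refresh errors, then fetch) mirrors the patch:

```python
class StubPool:
    """Hypothetical stand-in for a libvirt storage pool object."""

    def __init__(self, name, path_exists):
        self.name = name
        self._path_exists = path_exists
        self.active = True  # stale status, as cached by the daemon

    def refresh(self, flags=0):
        # Re-check the backing storage; a missing target path
        # deactivates the pool instead of failing later during fetch.
        if not self._path_exists:
            self.active = False


def fetch_pools(pools):
    # Refresh every pool first so the cached status reflects reality,
    # then report only the pools that are really active.
    for pool in pools:
        try:
            pool.refresh(0)
        except Exception:
            pass  # a failed refresh must not abort the connection
    return [p.name for p in pools if p.active]


pools = [StubPool("default", True), StubPool("deleted-dir", False)]
print(fetch_pools(pools))  # ['default']
```

Note the bare `except ... pass` around `refresh()`: the point of the patch is that a single broken pool must not take down the whole hypervisor connection, so refresh failures are deliberately ignored and the pool simply shows up as inactive.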