Compare commits

119 Commits

Eduardo Habkost
9975e9916e pc: Remove PCLMULQDQ from Westmere on pc-*-1.4 and older
Commit 41cb383f42 made a guest-visible
change by adding the PCLMULQDQ bit to Westmere without adding
compatibility code to keep the ABI for older machine-types.
Fix it by adding the missing compat code.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 56383703c0)

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-08-13 19:18:02 +02:00
Michael S. Tsirkin
58ef8c530c vhost: clear signalled_used_valid on vhost stop
When a vhost device stops, its implementation synchronizes kernel state
back to virtio.c so we can continue emulating the device
in userspace.

This patch ensures that virtio.c's signalled_used_valid flag is reset so
that userspace does not suppress guest notifications due to stale
signalled_used values.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3561ba1418)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 10:04:40 -05:00
Stefan Hajnoczi
8d676daf6d virtio: clear signalled_used_valid when switching from dataplane
When the dataplane thread stops, its vring.c implementation synchronizes
vring state back to virtio.c so we can continue emulating the virtio
device.

This patch ensures that virtio.c's signalled_used_valid flag is reset so
that we do not suppress guest notifications due to stale signalled_used
values.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6793dfd1b6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 10:04:25 -05:00
Stefan Hajnoczi
6bf6fcd181 dataplane: sync virtio.c and vring.c virtqueue state
Load the virtio.c state into vring.c when we start dataplane mode and
vice versa when stopping dataplane mode.  This patch makes it possible
to start and stop dataplane any time while the guest is running.

This will eventually allow us to go back to QEMU main loop for
bdrv_drain_all() and live migration.  In the meantime, this patch makes
the dataplane lifecycle more robust but should make no visible
difference.  It may be useful in the virtio-net dataplane effort.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 9154b02c53)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 10:03:19 -05:00
Gerd Hoffmann
ccf279824c i82801b11: Fix i82801b11 PCI host bridge config space
pci_bridge_write_config() was not being used.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 4965b7f056)

Conflicts:

	hw/pci-bridge/i82801b11.c

* modified to avoid dependency on 125ee0ed

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Martijn van den Broek
30c2463271 Bugfix for loading multiboot kernels
This patch fixes a bug in rom_copy introduced by
commit d60fa42e8b.

rom_copy failed to load roms with a "datasize" of 0.
As a result, multiboot kernels were not loaded correctly
when they contained a segment with a "file size" of 0.

https://bugs.launchpad.net/qemu/+bug/1208944

Signed-off-by: Martijn van den Broek <martijn.vdbrk@gmail.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: CAG1x_oET1u3TMPu3r_zzd3ZXsTWQLiaM0zAc+RkHFCwvJjGOvg@mail.gmail.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 0dd5ce38fb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Izumi Tsutsui
d55fc47517 semaphore: fix a hangup problem under load on NetBSD hosts.
Fix the following bugs in the "fallback implementation of counting semaphores
with mutex+condvar" added in c166cb72f1:
 - waiting threads are not restarted properly if more than one thread is
   waiting on unblock signals in qemu_sem_timedwait()
 - possible missing pthread_cond_signal(3) calls when waiting threads
   return with ETIMEDOUT
 - an uninitialized variable
The problem was analyzed, and the fix provided, by Noriyuki Soda.

Also apply additional cleanup suggested by Laszlo Ersek:
 - make QemuSemaphore.count unsigned (it can never be negative)
 - check the return value of pthread_cond_wait() in qemu_sem_wait()
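
For reference, a minimal sketch of a counting semaphore built on a pthread
mutex and condition variable (illustrative only, not QEMU's actual util code);
the points touched by the fixes above are keeping the count unsigned,
re-checking the count in a loop around pthread_cond_wait(), and checking its
return value:

    #include <assert.h>
    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        unsigned int count;           /* can never go negative, hence unsigned */
    } QSem;

    static void qsem_post(QSem *s)
    {
        pthread_mutex_lock(&s->lock);
        s->count++;
        pthread_cond_signal(&s->cond);    /* wake one waiter per post */
        pthread_mutex_unlock(&s->lock);
    }

    static void qsem_wait(QSem *s)
    {
        pthread_mutex_lock(&s->lock);
        while (s->count == 0) {
            /* Re-check the count after every wakeup and do not ignore
             * errors from pthread_cond_wait(). */
            int rc = pthread_cond_wait(&s->cond, &s->lock);
            assert(rc == 0);
        }
        s->count--;
        pthread_mutex_unlock(&s->lock);
    }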

Signed-off-by: Izumi Tsutsui <tsutsui@ceres.dti.ne.jp>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1372841894-10634-1-git-send-email-tsutsui@ceres.dti.ne.jp
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 79761c6681)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
MORITA Kazutaka
82487399a4 ignore SIGPIPE in qemu-img and qemu-io
This prevents the tools from being terminated when they write data to a
connection that has been closed on the other side.
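
The fix amounts to the usual one-liner at tool startup; a minimal sketch,
assuming a POSIX host:

    #include <signal.h>

    int main(int argc, char **argv)
    {
        /* With SIGPIPE ignored, writing to a connection closed by the peer
         * returns -1 with errno == EPIPE instead of killing the process. */
        signal(SIGPIPE, SIG_IGN);

        /* ... normal tool logic ... */
        return 0;
    }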

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 526eda14a6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Andreas Färber
4055390051 target-i386: Fix X86CPU error handling
Error **errp argument is not for emitting warnings, it means an error
has occurred and the caller should not make any assumptions about the
state of other return values (unless otherwise documented).

Therefore cpu_x86_create() must unref the new X86CPU itself, and
pc_new_cpu() must check for an Error rather than NULL return value.

While at it, clean up a superfluous NULL check.

Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Cc: qemu-stable@nongnu.org
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit cd7b87ffe9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
MORITA Kazutaka
ca73e42f6d iov: handle EOF in iov_send_recv
Without this patch, iov_send_recv() never returns when do_send_recv()
returns zero.
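
The shape of the fix, shown on a simplified single-buffer loop rather than the
real iovec-based iov_send_recv() (illustrative only):

    #include <sys/types.h>
    #include <unistd.h>

    /* Read until 'len' bytes arrive, an error occurs, or the peer closes
     * the connection; a 0 return from read() means EOF and must end the
     * loop rather than being retried forever. */
    static ssize_t read_full(int fd, char *buf, size_t len)
    {
        size_t done = 0;

        while (done < len) {
            ssize_t ret = read(fd, buf + done, len - done);
            if (ret < 0) {
                return -1;
            }
            if (ret == 0) {
                break;          /* EOF: return the partial byte count */
            }
            done += ret;
        }
        return done;
    }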

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8400429017)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Paul Moore
7f91e37c5a seccomp: add additional asynchronous I/O syscalls
A previous commit, "seccomp: add the asynchronous I/O syscalls to the
whitelist", added several asynchronous I/O syscalls but left out the
io_submit() and io_cancel() syscalls.  This patch corrects this by
adding the two missing asynchronous I/O syscalls.

Signed-off-by: Paul Moore <pmoore@redhat.com>
Reviewed-by: Eduardo Otubo <otubo@linux.vnet.ibm.com>
Message-id: 20130715193201.943.4913.stgit@localhost
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 94113bd8a1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Paul Moore
0b85017dfc seccomp: add arch_prctl() to the syscall whitelist
It appears that even a very simple /etc/qemu-ifup configuration can
require the arch_prctl() syscall, see the example below:

	#!/bin/sh
	/sbin/ifconfig $1 0.0.0.0 up
	/usr/sbin/brctl addif <switch> $1
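
For context, adding a syscall to a libseccomp whitelist looks roughly like the
following (a sketch only; QEMU's real filter whitelists far more syscalls than
shown here):

    #include <seccomp.h>

    static int install_minimal_filter(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);  /* default action */
        if (!ctx) {
            return -1;
        }
        /* Allow arch_prctl(); a usable filter needs many more rules. */
        if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(arch_prctl), 0) < 0 ||
            seccomp_load(ctx) < 0) {
            seccomp_release(ctx);
            return -1;
        }
        seccomp_release(ctx);
        return 0;
    }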

Signed-off-by: Paul Moore <pmoore@redhat.com>
Reviewed-by: Eduardo Otubo <otubo@linux.vnet.ibm.com>
Message-id: 20130718135703.8247.19213.stgit@localhost
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit d2509b667c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Michael Roth
6499aa6dcc chardev: fix CHR_EVENT_OPENED events for mux chardevs
As of bd5c51ee6c, chardevs no longer use
bottom-halves to issue CHR_EVENT_OPENED events. To maintain past
semantics, we instead defer the CHR_EVENT_OPENED events toward the end
of chardev initialization.

For muxes, this isn't good enough, since a range of FEs must be able
to attach to the mux prior to any CHR_EVENT_OPENED being issued, else
each FE will immediately print its initial output (prompts, banners,
etc.) just prior to us switching to the next FE as part of
initialization.

This is new and confusing behavior for users, as they'll see output for
things like the HMP monitor, even though the current mux focus
may be a guest serial port with potentially no output.

We fix this by further deferring CHR_EVENT_OPENED events for FEs
associated with muxes until after machine init by flagging mux chardevs
with 'explicit_be_open', which suppresses emission of CHR_EVENT_OPENED
events until we explicitly set the mux as opened later.

Currently, we must defer till after machine init since we potentially
associate FEs with muxes as part of realize (for instance,
serial_isa_realizefn).

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-id: 1375207462-8141-1-git-send-email-mdroth@linux.vnet.ibm.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7b7ab18d0b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Gerd Hoffmann
283d8f93e5 xhci: fix segfault
A guest trying to reset an endpoint of a disconnected device resulted in
xhci trying to dereference uport while it was NULL, thereby crashing
qemu.  Fix that by adding a check.  Drop the unused dev variable while
touching that bit of code.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 75cc1c1fcb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:50 -05:00
Don Koch
a3ea885abd pci-bridge: update mappings for migration/restore
Fix for LP#1187529: Devices on PCI bridge stop working when
live-migrated. Update bridge mappings for all PCI bridge
devices in get_pci_config_device().

Signed-off-by: Don Koch <dkoch@verizon.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit e78e9ae4a9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Andreas Färber
bb4d73c44b virtio-console: Use exitfn for virtserialport, too
virtconsole and virtserialport are identical in every other aspect
except for the distinguishing VirtIOSerialPortClass::is_console field.

Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Message-id: 1375313326-14966-1-git-send-email-afaerber@suse.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 203439ce0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Markus Armbruster
8707cd1ec0 qapi: Rename ChardevBackend member "memory" to "ringbuf"
Commit 1da48c6 called the new member "memory" after commit 3949e59
standardized "ringbuf".  Rename for consistency.

However, the member name "memory" has been visible in QMP since 1.5.  It's
undocumented, just like the driver name.  Keep it working anyway.

Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1374849874-25531-4-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 3a1da42eb3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Markus Armbruster
2b7e5f19dc qemu-char: Register ring buffer driver with correct name "ringbuf"
The driver is new in 1.4, with the documented name "ringbuf".
However, its actual name is the completely undocumented "memory".
Screwed up in commit 3949e59.  Fix the code to match the documentation.

Keep the undocumented name working as an alias for compatibility.

Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1374849874-25531-3-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit c11ed9666d)

Conflicts:

	qemu-char.c

* removed dependency on command-line specifiable mux (bb6fb7c0)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Gerd Hoffmann
27c59dad11 xhci: handle USB_RET_IOERROR
https://bugzilla.redhat.com/show_bug.cgi?id=980377

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit ed60ff024f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Stefan Hajnoczi
390880f3d3 dataplane: refuse to start if device is already in use
Dataplane must check whether a block device is in use before launching
the dataplane thread.  This is necessary since the thread does not
synchronize with the main loop and I/O requests could cause corruption.

One example is when a drive is added and a block job is started before
hotplugging the virtio-blk-pci adapter.  In this case we must not use
dataplane mode.

Cc: qemu-stable@nongnu.org
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit b0f2027cde)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Stefan Weil
f3249bf62c gtk: Fix compiler warning (GTK 3 deprecated function)
With GTK 3, the function gdk_cursor_unref is deprecated:

qemu/ui/gtk.c: In function ‘gd_cursor_define’:
qemu/ui/gtk.c:380:5: error:
 ‘gdk_cursor_unref’ is deprecated (declared at /usr/include/gtk-3.0/gdk/gdkcursor.h:233): Use 'g_object_unref' instead [-Werror=deprecated-declarations]

Fix the gcc compiler warning by using conditional compilation.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1371391987-10795-1-git-send-email-sw@weilnetz.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 030b4b7deb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Anthony Liguori
a95bc779f2 gtk: don't use g_object_unref on GdkCursor
It's not a GObject.

Cc: Gerd Hoffman <kraxel@redhat.com>
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
v1 -> v2
 - Fix summary to agree with code (Peter)
(cherry picked from commit 171392406d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Andreas Färber
a561fcfed6 megasas: Legacy command line handling fix
Only apply legacy command line handling when the device has not been
hot-plugged. Propagate failure of legacy command line handling.

Cc: qemu-stable@nongnu.org
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 22d6aa03fd)

Conflicts:

	hw/scsi/megasas.c

* modified to avoid dependency on fancy new upcast macros

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Kevin Wolf
685803fcf7 cpus: Let vm_stop[_force_state]() always flush block devices
Even if the VM is already stopped, we cannot assume that all data has
already been successfully flushed to disk. The flush during the previous
vm_stop() could have failed.

Run bdrv_flush_all() unconditionally so that we get an error each time
if the block device isn't really flushed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 594a45ce64)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Kevin Wolf
32d1d7ff51 cpus: Add return value for vm_stop()
If flushing the block devices fails, return an error. The VM is stopped
anyway.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 5698346391)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Kevin Wolf
f823845309 block: Add return value for bdrv_flush_all()
bdrv_flush() can fail, and bdrv_flush_all() should return an error as
well if this happens for a block device. It now returns the first error
encountered, but still at least tries to flush the remaining devices even
in error cases.
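
The "remember the first error but keep flushing" pattern described above, in a
generic form (Device and device_flush() are illustrative stand-ins, not QEMU
APIs):

    typedef struct Device Device;

    int device_flush(Device *dev);   /* hypothetical: returns <0 on error */

    static int flush_all(Device **devs, int ndevs)
    {
        int result = 0;

        for (int i = 0; i < ndevs; i++) {
            int ret = device_flush(devs[i]);
            if (ret < 0 && result == 0) {
                result = ret;        /* keep the first error... */
            }
            /* ...but still try to flush the remaining devices */
        }
        return result;
    }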

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit f0f0fdfeec)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-13 09:30:49 -05:00
Peter Lieven
7fbd2301a4 iscsi: assert that sectors are aligned to LUN blocksize
If the block size of an iSCSI LUN is bigger than BDRV_SECTOR_SIZE,
it is possible that sector_num or nb_sectors are not correctly
aligned.

To avoid corruption, we fail requests which are misaligned.

Signed-off-by: Peter Lieven <pl@kamp.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 91bea4e2bb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:15:26 -05:00
Peter Lieven
404fbe4743 iscsi: remove support for misaligned nb_sectors in aio_readv
This hack is not working (anymore). Support for misaligned offsets should
be handled at the block layer.

Signed-off-by: Peter Lieven <pl@kamp.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7e4d5a9f94)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:14:58 -05:00
Peter Lieven
ff57f145c1 iscsi: fix -ENOSPC in iscsi_create()
The -ENOSPC case did not work due to a missing goto.

Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d3bda7bc16)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:14:35 -05:00
Kevin Wolf
e6f5128dcf ahci: Fix FLUSH command
AHCI couldn't cope with asynchronous commands that aren't doing DMA; it
simply wouldn't complete them. Due to the bug fixed in commit f68ec837,
FLUSH commands would seem to have completed immediately even if they
were still running on the host. After the commit, they would simply hang
and never unset the BSY bit, rendering AHCI unusable on any OS sending
flushes.

This patch adds another callback for the completion of asynchronous
commands. This is what AHCI really wants to use for its command
completion logic rather than a DMA completion callback.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit a62eaa26c1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:09:52 -05:00
Luiz Capitulino
aa83f2e427 qapi: qapi-commands: fix possible leaks on visitor dealloc
In qmp-marshal.c the dealloc visitor calls use the same errp
pointer of the input visitor calls. This means that if any of
the input visitor calls fails, then the dealloc visitor will
return early, before freeing the object's memory.

Here's an example, consider this code:

int qmp_marshal_input_block_passwd(Monitor *mon, const QDict *qdict, QObject **ret)
{
	[...]

    char * device = NULL;
    char * password = NULL;

    mi = qmp_input_visitor_new_strict(QOBJECT(args));
    v = qmp_input_get_visitor(mi);
    visit_type_str(v, &device, "device", errp);
    visit_type_str(v, &password, "password", errp);
    qmp_input_visitor_cleanup(mi);

    if (error_is_set(errp)) {
        goto out;
    }
    qmp_block_passwd(device, password, errp);

out:
    md = qapi_dealloc_visitor_new();
    v = qapi_dealloc_get_visitor(md);
    visit_type_str(v, &device, "device", errp);
    visit_type_str(v, &password, "password", errp);
    qapi_dealloc_visitor_cleanup(md);

	[...]

    return 0;
}

Consider errp != NULL when the out label is reached, we're going
to leak device and password.

This patch fixes this by always passing errp=NULL for dealloc
visitors, meaning that we always try to free them regardless of
any previous failure. The above example would then be:

out:
    md = qapi_dealloc_visitor_new();
    v = qapi_dealloc_get_visitor(md);
    visit_type_str(v, &device, "device", NULL);
    visit_type_str(v, &password, "password", NULL);
    qapi_dealloc_visitor_cleanup(md);

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit 8f91ad8a1b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:07:43 -05:00
Paul Moore
a5d14facb1 seccomp: add the asynchronous I/O syscalls to the whitelist
In order to enable the asynchronous I/O functionality when using the
seccomp sandbox we need to add the associated syscalls to the
whitelist.

Signed-off-by: Paul Moore <pmoore@redhat.com>
Reviewed-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
Message-id: 20130529203001.20939.83322.stgit@localhost
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit fd21faadb1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:06:49 -05:00
Peter Crosthwaite
9f8daa796b qom: Fix class cast of NULL classes
It's clear from the implementation that class casting is supposed to work
with a NULL class argument. Guard all dereferences of the class argument
against NULL accordingly.
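
A simplified sketch of the guard (the ObjectClass here is a stand-in; the real
QOM cast machinery is more involved):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct ObjectClass {
        const char *typename;
    } ObjectClass;

    static ObjectClass *class_cast_check(ObjectClass *klass, const char *typename)
    {
        /* Tolerate NULL: only dereference klass when it is non-NULL. */
        if (klass && strcmp(klass->typename, typename) != 0) {
            fprintf(stderr, "invalid class cast to %s\n", typename);
            abort();
        }
        return klass;    /* a NULL class passes through unchanged */
    }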

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 94cd5ba46b74eea289a7e582635820c1c54e66fa.1371546907.git.peter.crosthwaite@xilinx.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 9d6a3d58e4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:05:26 -05:00
Dongxue Zhang
ed448b82bd target-openrisc: Fix typename in openrisc_cpu_class_by_name()
Commit 478032a93d (target-openrisc:
Rename CPU subtypes) suffixed CPU sub-types with "-or32-cpu" but forgot
to update openrisc_cpu_class_by_name(), so that it was still looking for
the types without suffix.

Make target-openrisc running OK by adding the suffix to the model name.

This means it is no longer possible to use -cpu or1200-or32-cpu or
-cpu any-or32-cpu though.

Cc: qemu-stable@nongnu.org
Signed-off-by: Dongxue Zhang <elta.era@gmail.com>
Tested-by: Jia Liu <proljc@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 071b3364e7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 19:01:47 -05:00
Stefan Hajnoczi
f8cd6dfdd8 block: fix bdrv_flush() ordering in bdrv_close()
Since 80ccf93b we flush the block device during close.  The
bdrv_drain_all() call should come before bdrv_flush() to ensure guest
write requests have completed.  Otherwise we may miss pending writes
when flushing.

Call bdrv_drain_all() again for safety as the final step after
bdrv_flush().  This should not be necessary but we can be paranoid here
in case bdrv_flush() left I/O pending.

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 58fda173e1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:32:23 -05:00
Andreas Färber
b5bfb026e4 target-xtensa: gen_intermediate_code_internal() should be inlined
Cc: qemu-stable@nongnu.org
Reported-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit ae06d4988d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:31:09 -05:00
Andreas Färber
cbf70c2f27 target-moxie: gen_intermediate_code_internal() should be inlined
Cc: qemu-stable@nongnu.org
Reported-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 13cccc6928)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:30:50 -05:00
Andreas Färber
0320902109 target-microblaze: gen_intermediate_code_internal() should be inlined
Cc: qemu-stable@nongnu.org
Reported-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit fd327f48f7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:30:41 -05:00
Andreas Färber
7d4d902a59 target-lm32: gen_intermediate_code_internal() should be inlined
Cc: qemu-stable@nongnu.org
Reported-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Michael Walle <michael@walle.cc>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 28014bcab2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:30:30 -05:00
Andreas Färber
91d66fb4b9 target-cris: gen_intermediate_code_internal() should be inlined
Cc: qemu-stable@nongnu.org
Reported-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 6f47ec50db)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:30:18 -05:00
Markus Armbruster
39b04be6ad qemu-char: Fix ID reuse after chardev-remove for qapi-based init
Commit 2c5f488 introduced qapi-based character device initialization
as a new code path in qemu_chr_new_from_opts().  Unfortunately, it
failed to store parameter opts in the new chardev.  Therefore,
qemu_chr_delete() doesn't delete it.  Even though the device is gone,
its options linger, and any attempt to create another one with the
same ID fails.

Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 1372339512-28149-1-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 2ea3e2c1e8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:28:07 -05:00
Marcelo Tosatti
1eeacd413a kvmclock: clock should count only if vm is running
kvmclock should not count while vm is paused, because:

1) if the vm is paused for long periods, timekeeping
math can overflow while converting the (large) clocksource
delta to nanoseconds.

2) Users rely on CLOCK_MONOTONIC to count run time, that is,
time during which the OS has been in a runnable state (see CLOCK_BOOTTIME).

Change kvmclock driver so as to save clock value when vm transitions
from runnable to stopped state, and to restore clock value from stopped
to runnable transition.

Cc: qemu-stable@nongnu.org
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 00f4d64ee7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:21:01 -05:00
Kevin Wolf
f7fe3d2f77 raw-posix: Fix /dev/cdrom magic on OS X
The raw-posix driver has code to provide a /dev/cdrom on OS X even
though it doesn't really exist. However, since commit c66a6157 the real
filename is dismissed after finding it, so opening /dev/cdrom fails.
Put the filename back into the options QDict to make this work again.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit a5c5ea3f60)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:17:02 -05:00
Peter Lieven
9f60383b41 migration: do not overwrite zero pages
On incoming migration, do not memset pages to zero if they already read as
zero. Doing so allocates a new zero page and consumes memory unnecessarily.
Even if we madvise MADV_DONTNEED later, this only deallocates the memory
asynchronously.
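
The idea in a hedged sketch (names are illustrative, not the actual
arch_init.c code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static bool page_is_zero(const unsigned char *p, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (p[i]) {
                return false;
            }
        }
        return true;
    }

    static void handle_incoming_zero_page(unsigned char *host, size_t len)
    {
        if (page_is_zero(host, len)) {
            /* Already reads as zero: leave it untouched so it stays backed
             * by the shared zero page instead of a newly allocated one. */
            return;
        }
        memset(host, 0, len);
    }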

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 211ea74022)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:16:09 -05:00
Peter Lieven
64a72fa71f Revert "migration: do not sent zero pages in bulk stage"
Not sending zero pages breaks migration if a page is zero
at the source but not at the destination. This can e.g. happen
if different BIOS versions are used at source and destination.
It has also been reported that migration on pseries is completely
broken with this patch.

This effectively reverts commit f1c72795af.

Conflicts:

	arch_init.c

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 9ef051e553)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:15:51 -05:00
Fam Zheng
d306fd5f4a vmdk: remove wrong calculation of relative path
When creating an image with a backing file, the driver tries to calculate
the relative path from the created image file to the backing file, but the
path computation is incorrect, e.g.:

    $ qemu-img create -f vmdk -b vmdk-data-disk.vmdk vmdk-data-snapshot1
    Formatting 'vmdk-data-snapshot1', fmt=vmdk size=10737418240
    backing_file='vmdk-data-disk.vmdk' compat6=off zeroed_grain=off

    $ qemu-img info vmdk-data-snapshot1
    image: vmdk-data-snapshot1
    file format: vmdk
    virtual size: 10G (10737418240 bytes)
    disk size: 12K
->  backing file: disk.vmdk

The common part of the file names, "vmdk-data-", is incorrectly dropped by
relative_path(). As the VMDK specification does not require parentNameHint
to be a relative path, we simply remove this calculation and use the
backing_file option as given.

Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8ed610a1c9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 17:11:32 -05:00
Kevin Wolf
eedc9f46cf gluster: Return bdrv_has_zero_init = 0
GlusterFS volumes can be backed by block devices, in which case
bdrv_create() doesn't make sure that the image is zeroed out. It is
currently not possible to detect whether a given image is backed by a
file or a block device, and incorrectly assuming that it is zeroed
corrupts images during qemu-img convert, so let's err on the side of
caution and always return 0.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8ab6feec2c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:26:48 -05:00
Richard W.M. Jones
90ce84993a block/ssh: Set bdrv_has_zero_init according to the file type.
If the remote is a regular file, set it to true (ie. reads of
uninitialized areas in a newly created file will return zeroes).
If we can't prove that, return false (a safe default).

Tested by adding a debugging print statement [not part of this commit]
and creating a remote file and a remote block device:

  $ ./qemu-img create ssh://localhost/tmp/new 100M
  Formatting 'ssh://localhost/tmp/new', fmt=raw size=104857600
  filename ssh://localhost/tmp/new: has_zero_init = 1
  $ sudo lvcreate -L 1G -n tmp /dev/fedora
    Logical volume "tmp" created
  $ ./qemu-img create ssh://localhost/dev/fedora/tmp 1G
  Formatting 'ssh://localhost/dev/fedora/tmp', fmt=raw size=1073741824
  filename ssh://localhost/dev/fedora/tmp: has_zero_init = 0

Cc: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 0b3f21e6a9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:25:26 -05:00
Ronnie Sahlberg
89ca606d49 Fix iSCSI crash on SG_IO with an iovector
Don't assume that SG_IO is always invoked with a simple buffer,
check the iovec_count and if it is >= 1 then we need to pass an array
of iovectors to libiscsi instead of just a plain buffer.

Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0a53f01074)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:21:16 -05:00
Christian Borntraeger
5e2053dd15 s390/ipl: Fix boot order
The latest ipl code adaptations collided with some of the virtio
refactoring rework. This resulted in always booting the first
disk. Let's fix booting from a given ID.
The new code also checks for command lines without bootindex to
avoid random behaviour when accessing dev_st (==0).

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 5c8ded6ef5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:18:30 -05:00
Gerd Hoffmann
045ccf7056 usb-host-libusb: set USB_DEV_FLAG_IS_HOST
... like host-{linux,bsd}.c do.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 628e54857a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:15:49 -05:00
Markus Armbruster
820508eea6 acl: acl_add can't insert before last list element, fix
Watch this:

    $ upstream-qemu -nodefaults -S -vnc :0,acl,sasl -monitor stdio
    QEMU 1.5.50 monitor - type 'help' for more information
    (qemu) acl_add vnc.username drei allow
    acl: added rule at position 1
    (qemu) acl_show vnc.username
    policy: deny
    1: allow drei
    (qemu) acl_add vnc.username zwei allow 1
    acl: added rule at position 2
    (qemu) acl_show vnc.username
    policy: deny
    1: allow drei
    2: allow zwei
    (qemu) acl_add vnc.username eins allow 1
    acl: added rule at position 1
    (qemu) acl_show vnc.username
    policy: deny
    1: allow eins
    2: allow drei
    3: allow zwei

The second acl_add inserts at position 2 instead of 1.

Root cause is an off-by-one in qemu_acl_insert(): when index ==
acl->nentries, it appends instead of inserting before the last list
element.
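
For illustration only (using 1-based positions as the monitor does; this is
not the actual qemu_acl_insert() code), a correct insert keeps position n
meaning "before the current last element" and appends only past the end:

    #include <string.h>

    /* Insert 'value' at 1-based position 'pos' in an array holding *n ints
     * (the caller guarantees spare capacity); pos == *n inserts before the
     * last element, pos > *n appends. */
    static void list_insert(int *entries, int *n, int pos, int value)
    {
        int idx = pos - 1;              /* 0-based slot the new entry should occupy */

        if (idx > *n) {
            idx = *n;                   /* clamp out-of-range positions to append */
        }
        memmove(&entries[idx + 1], &entries[idx],
                (*n - idx) * sizeof(entries[0]));
        entries[idx] = value;
        (*n)++;
    }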

Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 4999f3a8a6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:14:38 -05:00
KONRAD Frederic
1e3043a702 virtio-scsi: forward scsibus for virtio-scsi-pci.
This fixes a bug with scsi hotplug on virtio-scsi-pci:

As virtio-scsi-pci doesn't have any scsi bus, we need to forward scsi-hot-add
to the virtio-scsi-device plugged on the virtio-bus.

Cc: qemu-stable@nongnu.org
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 15:11:49 -05:00
Anthony PERARD
a678b16bb3 qxl: Fix QXLRam initialisation.
The qxl driver expects NULL for QXLRam.memory_configs, but this field is
never initialized.

If memory is set to 0xc2c2.., it leads to a spice-critical error when
trying to start qxl.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 329f97fc4f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 14:58:42 -05:00
Hervé Poussineau
208ddea6b5 ppc: do not register IABR SPR twice for 603e
IABR SPR is already registered in gen_spr_603(), called from init_proc_603E().

Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 9fea2ae250)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 14:48:39 -05:00
Peter Maydell
4bf0901ff8 arm/boot: Free dtb blob memory after use
The dtb blob returned by load_device_tree() is in memory allocated
with g_malloc(). Free it accordingly once we have copied its
contents into the guest memory. To make this easy, we need also to
clean up the error handling in load_dtb() so that we consistently
handle errors in the same way (by printing a message and then
returning -1, rather than either plowing on or exiting immediately).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Message-id: 1371209256-11408-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit c23045ded7)

Conflicts:

	hw/arm/boot.c

* updated to include #ifdef for CONFIG_FDT

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 14:44:19 -05:00
Christian Borntraeger
717086d0e8 s390/virtio-ccw: Fix virtio reset
On virtio reset we must reset the indicator to avoid stale interrupts,
e.g. after a reset.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 6504a93011)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-08-12 14:36:08 -05:00
Michael Roth
ff4be47d1b Update VERSION for 1.5.2 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-07-25 14:52:08 -05:00
Laszlo Ersek
be161ae35d qga: escape cmdline args when registering win32 service (CVE-2013-2231)
Reported-by: Lev Veyde <lveyde@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-07-23 12:02:47 -05:00
Laszlo Ersek
bb31546dfa ga_install_service(): nest error paths more idiomatically
Acked-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-07-23 12:02:47 -05:00
Laszlo Ersek
af0bbf8eb8 qga/service-win32.c: diagnostic output should go to stderr
Acked-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-07-23 12:02:47 -05:00
Laszlo Ersek
31c6ed2077 qga: save state directory in ga_install_service()
If the user selects a non-default state directory at service installation
time, we should remember it in the registered service.

Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit a839ee77c7)

* modified to save state_dir unconditionally and avoid reliance on
  uncommitted CSIDL_COMMON_APPDATA dependencies

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-07-23 12:01:16 -05:00
Laszlo Ersek
c432c7d85a qga: remove undefined behavior in ga_install_service()
We shouldn't snprintf() from a buffer to the same buffer.
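
A tiny illustration of the hazard (paths and buffer sizes are made up); per
the C standard, snprintf() with overlapping source and destination buffers is
undefined behavior:

    #include <stdio.h>

    void quote_path_demo(void)
    {
        char path[256] = "C:\\Program Files\\qemu-ga\\qemu-ga.exe";
        char quoted[300];

        /* Undefined behavior: the destination overlaps the %s source.
         * snprintf(path, sizeof(path), "\"%s\"", path);
         */

        /* Safe: format into a separate buffer. */
        snprintf(quoted, sizeof(quoted), "\"%s\"", path);
    }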

Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit a880845f3d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-07-23 11:40:26 -05:00
Anthony Liguori
295d81c624 Update VERSION for 1.5.1 release
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2013-06-26 16:46:50 -05:00
Michael Roth
cc0bd7ec83 wdt_i6300esb: fix vmstate versioning
When this VMSD was introduced, its version fields were set to
sizeof(I6300State), making them essentially random from build to build,
version to version.

To fix this, we lock in a high version id and low minimum version id to
support old->new migration from all prior versions of this device's
state. This should work since the device state has not changed since
its introduction.

This potentially breaks migration from 1.5+ to 1.5, but since the
versioning was essentially random prior to this patch, new->old
migration was not consistently functional to begin with.

Reported-by: Nicholas Thomas <nick@bytemark.co.uk>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit c1990468d5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 14:38:30 -05:00
Cole Robinson
12e5b2b5da virtio-rng: Fix crash with non-default backend
'default_backend' isn't always set, but 'rng' is, so use that.

$ ./x86_64-softmmu/qemu-system-x86_64 -object rng-random,id=rng0,filename=/dev/random -device virtio-rng-pci,rng=rng0
Segmentation fault (core dumped)

Regressed with virtio refactoring in 59ccd20a9a

CC: qemu-stable@nongnu.org
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Amit Shah <amit.shah@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-id: bf4505014a0a941dbd3c62068f3cf2c496b69e6a.1370023944.git.crobinso@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 5b456438f5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 13:03:12 -05:00
Paolo Bonzini
cb55efe6db iscsi: reorganize iscsi_readcapacity_sync
Avoid the goto, and use the same retry logic for the 10- and 16-
byte versions.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 1288844e7c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 13:00:50 -05:00
Paolo Bonzini
1b94fc4b9a iscsi: simplify freeing of tasks
Always free them in the iscsi_aio_*_acb functions and remove the
checks in their callers.  Remove ifs when the task struct was
previously dereferenced (spotted by Coverity).

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit f0d2a4d4d6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 13:00:25 -05:00
Stefan Hajnoczi
5e690bb974 vhost-scsi: fix k->set_guest_notifiers() NULL dereference
Coverity picked up a copy-paste bug.  In vhost_scsi_start() we check for
!k->set_guest_notifiers and error out.  The check probably got copied
but instead of erroring we actually use the function pointer!

Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Asias He <asias@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0e22a2d189)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:59:44 -05:00
Pavel Hrdina
129db3679c scsi-disk: scsi-block device for scsi pass-through should not be removable
This patch adds a new SCSI_DISK_F_NO_REMOVABLE_DEVOPS feature. With this
feature we can specify that the scsi-block (scsi pass-through) device remains
removable from the guest side, but cannot be removed from the monitor.

Cc: qemu-stable@nongnu.org
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 18e673b8f3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:58:11 -05:00
Pavel Hrdina
637d640fbb scsi-generic: check the return value of bdrv_aio_ioctl in execute_command
This fixes the bug introduced by commit ad54ae80c7.
bdrv_aio_ioctl() can still return NULL, and we should return an error
in that case.

Cc: qemu-stable@nongnu.org
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d836f8d35d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:57:30 -05:00
Paolo Bonzini
9c4f5dd03a scsi-generic: fix sign extension of READ CAPACITY(10) data
Issuing the READ CAPACITY(10) command in the guest will cause QEMU
to update its knowledge of the maximum accessible LBA in the disk.
The recorded maximum LBA will be wrong if the disk is bigger than
1TB, because ldl_be_p returns a signed int.

When this is fixed, a latent bug will be unmasked.  If the READ
CAPACITY(10) command reported an overflow (0xFFFFFFFF), we must
not overwrite the previously-known maximum accessible LBA, or the guest
will fail to access the disk above the first 2TB.
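
The sign-extension pitfall in isolation (a self-contained sketch; ldl_be_p()
itself is QEMU's big-endian 32-bit load helper and is not reproduced here):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* An LBA above 2^31 sectors, as reported for a disk larger than 1TB. */
        uint32_t lba = 0x90000000u;
        int32_t as_signed = (int32_t)lba;       /* what a signed 32-bit load yields */

        uint64_t wrong = as_signed;             /* sign-extends: 0xffffffff90000000 */
        uint64_t right = (uint32_t)as_signed;   /* zero-extends: 0x0000000090000000 */

        printf("wrong=%#llx right=%#llx\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }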

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 53254e569f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:56:37 -05:00
Pavel Hrdina
3abd71cd98 scsi: reset cdrom tray statuses on scsi_disk_reset
Tray statuses should also be reset. Some guests may lock the tray, and
right after resetting the guest it should be unlocked and closed. This
is done on power-on, reset, and resume from suspend/hibernate on bare metal.

This fix is already committed for IDE CD.
Check the commit a7f3d65b65.

Test results on bare-metal:
  - on reset/power-on the CD-ROM tray is closed even before the monitor
    is turned on
  - on resume from suspend/hibernate the tray is also closed before
    the monitor is turned on

From test results it seems that this behavior is OS and probably BIOS
independent.

Cc: qemu-stable@nongnu.org
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7721c7f7c2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:56:05 -05:00
Ján Tomko
5fcb9bf746 nbd: strip braces from literal IPv6 address in URI
Otherwise they would get passed to getaddrinfo and fail with:
address resolution failed for [::1]:1234: Name or service not known

(Broken by commit v1.4.0-736-gf17c90b)

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2330790879)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:53:09 -05:00
Ján Tomko
6c8cf5fd02 qemu-socket: allow hostnames starting with a digit
According to RFC 1123 [1], hostnames can start with a digit too.

[1] http://tools.ietf.org/html/rfc1123#page-13
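
The strspn()/strcspn() distinction in a minimal sketch (a heuristic check
only, not the actual qemu-socket parsing code): strspn() measures the leading
run of characters *in* the set, which is what an "is this token purely
numeric?" test needs:

    #include <stdbool.h>
    #include <string.h>

    static bool all_digits_and_dots(const char *s)
    {
        /* True for "192.168.0.1"; false for "3com.example.org", which is a
         * valid hostname even though it starts with a digit. */
        return s[0] != '\0' && s[strspn(s, "0123456789.")] == '\0';
    }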

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Cc: qemu-stable@nongnu.org
[Use strspn, not strcspn. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 391b7b9701)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-18 12:51:46 -05:00
Stefan Hajnoczi
ce4e8f0d4c vmdk: byteswap VMDK4Header.desc_offset field
Remember to byteswap VMDK4Header.desc_offset on big-endian machines.

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 5a394b9e96)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Igor Mammedov
c683f1b934 target-i386: cpu: Fix potential buffer overrun in get_register_name_32()
Spotted by Coverity:
x86_reg_info_32[] is CPU_NB_REGS32 elements long, so accessing
x86_reg_info_32[CPU_NB_REGS32] is one element past the end of the array.
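
The general rule being applied, in an illustrative form (the array contents
and size here are examples, not QEMU's actual table):

    #include <stddef.h>

    #define NB_REGS32 8

    static const char *const reg_names[NB_REGS32] = {
        "eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"
    };

    static const char *get_register_name(unsigned int reg)
    {
        /* Valid indexes are 0 .. NB_REGS32 - 1, so index == NB_REGS32 is
         * already one element past the end and must be rejected. */
        if (reg >= NB_REGS32) {
            return NULL;
        }
        return reg_names[reg];
    }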

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Reviewed by: Jesse Larrew <jlarrew@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 31ccdde298)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Igor Mammedov
75e4aa9405 pc: Fix crash when attempting to hotplug CPU with negative ID
QMP command "{ 'execute': 'cpu-add', 'arguments': { 'id': -1 }}" may cause
QEMU SIGSEGV at:
 piix4_cpu_hotplug_req ()
    ...
    g->sts[cpu_id / 8] |= (1 << (cpu_id % 8));
    ...

Since for PC, in the current implementation, the id should be in the range
[0...maxcpus) and maxcpus is already checked, add a check for the lower bound
and error out on incorrect values.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 8de433cb08)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Markus Armbruster
055a7fce65 smbios: Check R in -smbios type=0, release=R parses okay
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-7-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 6e5c4540d1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Markus Armbruster
93bc624384 smbios: Fix -smbios type=0, release=... for big endian hosts
Classic endianness bug due to careless dirty coding: assuming reading
a byte from an int variable gets the least significant byte.
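
Why this breaks on big-endian hosts, in a minimal sketch: the first byte at
the address of an int is the most significant byte there, so extracting the
low byte must be done by masking, not by pointer punning:

    #include <stdint.h>

    static uint8_t low_byte_portable(int value)
    {
        return (uint8_t)(value & 0xff);     /* correct on any host endianness */
    }

    static uint8_t low_byte_broken(int value)
    {
        /* Happens to work on little-endian hosts; on big-endian hosts this
         * returns the most significant byte instead. */
        return *(uint8_t *)&value;
    }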

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-6-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 527cd96f15)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Markus Armbruster
61fbaee1a7 smbios: Clean up smbios_add_field() parameters
Having size precede the associated pointer is odd.  Swap them, and fix
up the types.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-5-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit ebc85e3f72)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Markus Armbruster
685ee2d940 smbios: Convert to error_report()
Improves diagnostics from ad hoc messages like

    Invalid SMBIOS UUID string

to

    qemu-system-x86_64: -smbios type=1,uuid=gaga: Invalid UUID

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-4-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 5bb95e4186)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Markus Armbruster
fa0f47d390 log.h: Supply missing includes
<stdio.h> has always been missing.  Rest missed in commit eeacee4.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-3-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit f3eededb2f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Markus Armbruster
75525693cc error-report.h: Supply missing include
Missed in commit e5924d8.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laszlo "ever the optimist" Ersek <lersek@redhat.com>
Message-id: 1370610036-10577-2-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit b293796fd7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Anton Blanchard
02d26729ea tcg-ppc64: rotr_i32 rotates wrong amount
rotr_i32 calculates the amount to left shift and puts it into a
temporary, but then doesn't use it when doing the shift.

Cc: qemu-stable@nongnu.org
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(cherry picked from commit d1bdd3af49)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Anton Blanchard
2917f6bcd0 tcg-ppc64: Fix add2_i64
add2_i64 was adding the lower double word to the upper double word
of each input. Fix this so we add the lower double words, then the
upper double words with carry propagation.

Cc: qemu-stable@nongnu.org
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(cherry picked from commit 8424735710)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:42 -05:00
Anton Blanchard
9534f66ac1 tcg-ppc64: bswap64 rotates output 32 bits
If our input and output is in the same register, bswap64 tries to
undo a rotate of the input. This just ends up rotating the output.

Cc: qemu-stable@nongnu.org
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(cherry picked from commit 82e0f9170a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:41 -05:00
Anton Blanchard
d208f05fd9 tcg-ppc64: Fix RLDCL opcode
The rldcl instruction doesn't have an sh field, so the minor opcode
is shifted 1 bit. We were using the XO30 macro which shifted the
minor opcode 2 bits.

Remove XO30 and add MD30 and MDS30 macros which match the
Power ISA categories.

Cc: qemu-stable@nongnu.org
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(cherry picked from commit 8a94cfb05e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 18:01:41 -05:00
Stefan Hajnoczi
6b6f105349 ivshmem: add missing error exit(2)
If the user fails to specify 'chardev' or 'shm' then we cannot continue.
Exit right away so that we don't invoke shm_open(3) with a NULL pointer.

It would be nice to replace exit(1) with error returns in the PCI device
.init() function, but leave that for another patch since exit(1) is
currently used elsewhere.

Spotted by Coverity.

Cc: Cam Macdonell <cam@cs.ualberta.ca>
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit baefb8bf8e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 16:40:54 -05:00
Andreas Färber
3202c02817 Makefile: Install qemu-img and qemu-nbd man pages only if built
When splitting openSUSE's qemu and qemu-linux-user packages we noticed
that for linux-user-only builds unrelated man pages got installed.
It's surely possible to delete them before packaging, but not installing
them in the first place seems more logical.

Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 8a3e8f7fd8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 16:40:17 -05:00
Jason Wang
5a893b0d37 tap: fix NULL dereference when passing invalid parameters to tap
This patch forbids the following invalid parameter combinations for tap:

1) fd and vhostfds were specified but vhostfd was not
2) vhostfds were specified but fds were not
3) fds and vhostfd were specified

For 1 and 2, net_init_tap_one() will still pass NULL as vhostfdname to
monitor_handle_fd_param(), which may crash qemu.

Also remove the unnecessary has_fd check.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <shajnocz@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit c87826a878)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-17 16:35:39 -05:00
Michael Tokarev
0817fa9767 create qemu_openpty_raw() helper function and move it to a separate file
In two places qemu uses openpty() which is very system-dependent,
and in both places the pty is switched to raw mode as well.
Make a wrapper function which does both steps, and move all the
system-dependent complexity into a separate file, together
with static/local implementations of openpty() and cfmakeraw()
from qemu-char.c.

It is in a separate file, not part of oslib-posix.c, because
openpty() often resides in -lutil which is not linked to
every program qemu builds.

This change removes #including of <pty.h>, <termios.h>
and other rather specific system headers out of qemu-common.h,
which isn't a place for such specific headers really.

This version has been verified to build correctly on Linux,
OpenBSD, FreeBSD and OpenIndiana.  On the latter it lets qemu
be built with the gtk gui, which was not possible there before due to
the missing openpty() and cfmakeraw().
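
A rough sketch of such a wrapper on a system that does provide openpty() and
cfmakeraw() (error handling trimmed; not the exact qemu_openpty_raw(), and the
header location for openpty() varies by platform):

    #include <pty.h>        /* openpty() on glibc; BSDs use <util.h>/<libutil.h> */
    #include <termios.h>

    /* Open a pty pair and switch the slave end into raw mode. */
    static int openpty_raw(int *amaster, int *aslave)
    {
        struct termios tty;

        if (openpty(amaster, aslave, NULL, NULL, NULL) < 0) {
            return -1;
        }
        if (tcgetattr(*aslave, &tty) == 0) {
            cfmakeraw(&tty);
            tcsetattr(*aslave, TCSAFLUSH, &tty);
        }
        return 0;
    }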

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Andreas Färber <andreas.faerber@web.de>
(cherry picked from commit 4efeabbbe8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-14 18:58:27 -05:00
Stefan Hajnoczi
5810174865 blockdev: reset werror/rerror on drive_del
Paolo Bonzini <pbonzini@redhat.com> suggested the following test case:

1. Launch a guest and wait at the GRUB boot menu:

  qemu-system-x86_64 -enable-kvm -m 1024 \
   -drive if=none,cache=none,file=test.img,id=foo,werror=stop,rerror=stop
   -device virtio-blk-pci,drive=foo,id=virtio0,addr=4

2. Hot unplug the device:

  (qemu) drive_del foo

3. Select the first boot menu entry

Without this patch the guest pauses due to ENOMEDIUM.  The guest is
stuck in a continuous pause loop since the I/O request is retried and
fails immediately again when the guest is resumed.

With this patch the error is reported to the guest.

Note that this scenario actually happens sometimes during libvirt disk
hot unplug, where device_del is followed by drive_del.  I/O may still be
submitted to the drive after drive_del if the guest does not process the
PCI hot unplug notification.

Reported-by: Dafna Ron <dron@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 293c51a6ee)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-13 10:32:41 -05:00
Michael S. Tsirkin
eeaa8d33e4 q35: set fw_name
PCI host bridges need to set fw_name to be discoverable
by the BIOS for boot device selection.

In particular, seabios expects the root device to be called
"/pci/@i0cf8", so let's set it up like that for Q35.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Amos Kong <akong@redhat.com>
(cherry picked from commit 68c0e134a0)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-12 15:39:40 -05:00
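
A minimal sketch of where such a name is set, assuming QEMU's qdev headers
and the usual class_init pattern (the function name below is made up; the
fw_name assignment is the point):

  /* Sketch only: advertise a firmware name for the host bridge so the boot
   * path seabios sees starts with the expected "/pci/@i0cf8" root device. */
  static void q35_host_class_init_sketch(ObjectClass *klass, void *data)
  {
      DeviceClass *dc = DEVICE_CLASS(klass);

      dc->fw_name = "pci";
  }
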
Richard Henderson
c1270701c0 target-i386: Fix aflag logic for CODE64 and the 0x67 prefix
The code reorganization in commit 4a6fd938 broke handling of PREFIX_ADR.
While fixing this, tidy and comment the code so that it's more obvious
what's going on in setting both aflag and dflag.

The TARGET_X86_64 ifdef can be eliminated because CODE64 expands to the
constant zero when TARGET_X86_64 is undefined.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1369855851-21400-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit dec3fc9657)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-12 15:37:51 -05:00
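
The intended semantics are easier to see in isolation. The standalone
snippet below is not the translate.c code, just the address-size rule it
implements, and it shows why the ifdef is redundant once CODE64 evaluates
to a constant zero:

  #include <stdbool.h>

  /* Effective address size in bits for one decoded instruction.
   * code64/code32 describe the code segment; prefix_adr is the 0x67 prefix. */
  static int effective_addr_size(bool code64, bool code32, bool prefix_adr)
  {
      if (code64) {                     /* constant false on 32-bit-only builds */
          return prefix_adr ? 32 : 64;  /* 0x67 selects 32-bit addressing */
      }
      return (code32 ^ prefix_adr) ? 32 : 16;  /* 0x67 toggles the default */
  }
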
Michael Roth
252a7c6e11 qemu-char: don't issue CHR_EVENT_OPEN in a BH
When CHR_EVENT_OPENED was initially added, it was CHR_EVENT_RESET,
and it was issued as a bottom-half:

86e94dea5b

Which we basically used to print out a greeting/prompt for the
monitor.

AFAICT the only reason this was ever done in a BH was because in
some cases we'd modify the chr_write handler for a new chardev
backend *after* the site where we issued the reset (see:
86e94d:qemu_chr_open_stdio())

At some point this event was renamed to CHR_EVENT_OPENED, and we've
maintained the use of this BH ever since.

However, due to 9f939df955, we schedule
the BH via g_idle_add(), which is causing events to sometimes be
delivered after we've already begun processing data from backends,
leading to:

 known bugs:

  QMP:
    session negotiation resets with OPENED event, in some cases this
    is causing new sessions to get sporadically reset

 potential bugs:

  hw/usb/redirect.c:
    can_read handler checks for dev->parser != NULL, which may be
    true if CLOSED BH has not been executed yet. In the past, OPENED
    quiesced outstanding CLOSED events prior to us reading client
    data. If it's delayed, our check may allow reads to occur even
    though we haven't processed the OPENED event yet, and when we
    do finally get the OPENED event, our state may get reset.

  qtest.c:
    can begin session before OPENED event is processed, leading to
    a spurious reset of the system and irq_levels

  gdbstub.c:
    may start a gdb session prior to the machine being paused

To fix these, let's just drop the BH.

Since the initial reasoning for using it still applies to an extent,
work around that by deferring the delivery of CHR_EVENT_OPENED until
after the chardevs have been fully initialized, toward the end of
qmp_chardev_add() (or in some cases, qemu_chr_new_from_opts()). This
defers delivery long enough that we can be assured a CharDriverState
is fully initialized before CHR_EVENT_OPENED is sent.

Also, rather than requiring each chardev to do an explicit open, do it
automatically, and allow the small few who don't desire such behavior to
suppress the OPENED-on-init behavior by setting an 'explicit_be_open'
flag.

We additionally add missing OPENED events for stdio backends on w32,
which were previously not being issued, causing us to not receive the
banner and initial prompts for qmp/hmp.

Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-id: 1370636393-21044-1-git-send-email-mdroth@linux.vnet.ibm.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit bd5c51ee6c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-12 15:18:12 -05:00
Wendy Liang
6f3718c73b xilinx_axidma: Do not set DMA .notify to NULL after notify
If a stream notify function is not ready, it may re-populate the notify
callback to indicate it should be re-polled later. This breaks with that
usage, as immediately following the notify() call, .notify is set to NULL.
Reverse the ordering of the notify call and NULL assignment accordingly.

[PC: Reworked commit message]

Signed-off-by: Wendy Liang <jliang@xilinx.com>
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
(cherry picked from commit 4f293bd6e5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 18:22:30 -05:00
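
An illustrative-only sketch of the reordering (generic names, not the
axidma structures): save the callback, clear it, and only then invoke it,
so a handler that re-registers itself is not wiped out afterwards:

  struct stream {
      void (*notify)(void *opaque);
      void *opaque;
  };

  static void stream_do_notify(struct stream *s)
  {
      void (*notify)(void *opaque) = s->notify;

      s->notify = NULL;          /* clear first ...                    */
      if (notify) {
          notify(s->opaque);     /* ... so the handler may re-register */
      }
  }
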
Cornelia Huck
1fb147f443 virtio-ccw: Fix unsetting of indicators.
Interpretation of the ccws to register (configuration) indicators contained
a thinko: We want to disallow reading from 0, but setting the indicator
pointer to 0 is fine.

Let's fix the handling for CCW_CMD_SET{,_CONF}_IND.

Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit d1db1fa8df)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 18:17:25 -05:00
Cornelia Huck
72762f2811 s390x/css: Fix concurrent sense.
Fix an off-by-one error when indicating availability of concurrent
sense data.

Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 8312976e73)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 18:11:30 -05:00
Brad Smith
31ba7016d4 ui/gtk.c: Fix *BSD build of Gtk+ UI
Fix the build of the Gtk+ UI on *BSD systems.

Signed-off-by: Brad Smith <brad@comstyle.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 20130521161324.GA29977@rox.home.comstyle.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 17bf9735dd)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 18:10:52 -05:00
Stefan Hajnoczi
9ca80c7f29 vmxnet3: fix NICState cleanup
Use qemu_del_nic() instead of qemu_del_net_client() to correctly free
the entire NICState.

Cc: qemu-stable@nongnu.org
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 3ffee3cd5f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 18:03:56 -05:00
Michael Marineau
a548bacfba Fix usage of USB_DEV_FLAG_IS_HOST flag.
USB_DEV_FLAG_IS_HOST is the bit number, not the value. Booting with a
"Fitbit Base Station" USB dongle was triggering this assert.

Signed-off-by: Michael Marineau <mike@marineau.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 756335292f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:35:19 -05:00
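
A standalone illustration of the mistake (the flag's numeric position here
is made up; the real enum lives in QEMU's USB headers): testing a flags
word against a bit number instead of a mask silently checks the wrong bit:

  #include <stdio.h>

  enum { USB_DEV_FLAG_IS_HOST = 2 };   /* hypothetical bit position */

  int main(void)
  {
      unsigned flags = 1u << USB_DEV_FLAG_IS_HOST;   /* flag is set */

      /* Buggy: compares against the bit number (2), i.e. tests bit 1. */
      printf("buggy check: %d\n", !!(flags & USB_DEV_FLAG_IS_HOST));
      /* Fixed: shift the bit number into a mask before testing. */
      printf("fixed check: %d\n", !!(flags & (1u << USB_DEV_FLAG_IS_HOST)));
      return 0;
  }
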
Ed Maste
9b5751ec09 host-libusb: Correct test for USB packet state
USB_RET_ASYNC is -6, so inflight was always false.

Signed-off-by: Ed Maste <emaste@freebsd.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 45ec267160)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:34:11 -05:00
Amos Kong
032ce1baac qdev: fix get_fw_dev_path to support to add nothing to fw_dev_path
Recent virtio refactoring in QEMU made virtio-bus become the parent bus
of scsi-bus, and virtio-bus doesn't have a get_fw_dev_path implementation,
so the typename is added to fw_dev_path by default. The new fw_dev_path
could not be identified by seabios, which causes the bootindex parameter
of scsi devices to stop working.

This patch implements a default get_fw_dev_path() in BusClass; it is called
when a bus doesn't implement the method and adds the typename to
fw_dev_path. If the implemented method returns NULL, nothing will be
added to fw_dev_path.

It also implements virtio_bus_get_fw_dev_path() to return NULL, so
QEMU will still pass the original style of fw_dev_path to seabios.

Signed-off-by: Amos Kong <akong@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1369814202-10346-1-git-send-email-akong@redhat.com
--
v2: only add nothing to fw_dev_path when get_fw_dev_path() is
    implemented and returns NULL, so it will not affect other devices
    that don't have a get_fw_dev_path() implementation.
v3: implement default get_fw_dev_path() in BusClass
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit bbfa18fca4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:30:29 -05:00
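
A minimal sketch of the dispatch described above, assuming qdev's
BusClass/DeviceState types (the function name is made up and error handling
is omitted): use the bus hook when present, treat a NULL result as "append
nothing", and fall back to the type name otherwise:

  /* Sketch only, not the actual qdev implementation. */
  static char *fw_dev_path_component_sketch(DeviceState *dev)
  {
      BusClass *bc = BUS_GET_CLASS(dev->parent_bus);

      if (bc->get_fw_dev_path) {
          /* A NULL return means: contribute nothing to the path. */
          return bc->get_fw_dev_path(dev);
      }
      /* Default behaviour: fall back to the QOM type name. */
      return g_strdup(object_get_typename(OBJECT(dev)));
  }
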
Paolo Bonzini
baa8a8b444 do not check pointers after dereferencing them
Two instances, both spotted by Coverity.  In one, two blocks were
swapped.  In the other, the check is not needed anymore.

Cc: qemu-stable@nongnu.org
Cc: qemu-trivial@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit a4cc73d629)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:28:24 -05:00
Stefano Stabellini
327e75b537 xen: start PCI hole at 0xe0000000 (same as pc_init1 and qemu-xen-traditional)
We are currently setting the PCI hole to start at HVM_BELOW_4G_RAM_END,
that is 0xf0000000.
Start the PCI hole at 0xe0000000 instead, that is the same value used by
pc_init1 and qemu-xen-traditional.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: qemu-stable@nongnu.org
(cherry picked from commit 9f24a8030a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:25:03 -05:00
Brad Smith
9e7fdafc65 Remove OSS support for OpenBSD
Remove the OSS support for OpenBSD. The OSS API has not been usable
for quite some time.

Signed-off-by: Brad Smith <brad@comstyle.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 4f6ab397b6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:21:47 -05:00
Luiz Capitulino
d503afb28d target-i386: fix abort on bad PML4E/PDPTE/PDE/PTE addresses
The code used to walk IA-32e page-tables, and possibly PAE page-tables,
uses the bit mask ~0xfff to get the next PML4E/PDPTE/PDE/PTE address.

However, as we use a uint64_t to store the resulting address, that mask
gets expanded to 0xfffffffffffff000 which not only ends up selecting
reserved bits but also selects the XD bit (execute-disable) which
happens to be enabled by Windows 8, causing qemu_get_ram_ptr() to abort.

This commit fixes that problem by replacing ~0xfff by a correct mask
that only selects the address bit range (ie. bits 51:12).

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
(cherry picked from commit fbc2ed9518)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:19:47 -05:00
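
The sign-extension trap is easy to reproduce in a standalone program (the
PTE value below is made up; bit 63 stands in for XD):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t pte = 0x8000000012345067ULL;     /* hypothetical PTE, XD set */

      /* ~0xfff is a 32-bit int that widens to 0xfffffffffffff000, so the XD
       * bit and the reserved bits 62:52 leak into the "address". */
      uint64_t bad  = pte & ~0xfff;

      /* Mask only the physical address bits 51:12 instead. */
      uint64_t good = pte & 0x000ffffffffff000ULL;

      printf("buggy mask: 0x%016" PRIx64 "\n", bad);
      printf("fixed mask: 0x%016" PRIx64 "\n", good);
      return 0;
  }
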
Gerd Hoffmann
5b3ca29b95 update seabios to release 1.7.2.2
git shortlog from 1.7.2.1

Asias He (2):
      virtio-scsi: Pack struct virtio_scsi_{req_cmd,resp_cmd}
      virtio-scsi: Set _DRIVER_OK flag before scsi target scanning

Kevin O'Connor (1):
      Cache boot-fail-wait to avoid romfile access after POST.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 6683d7bc27)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:12:44 -05:00
Gerd Hoffmann
7b9cdc5bba Revert "roms: switch oldnoconfig to olddefconfig"
This reverts commit a5519b42cf.

Breaks "make bios" in roms/ as the kconfig version in seabios doesn't
support olddefconfig.  Must have been totally untested.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 19cd090e17)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:12:18 -05:00
Andreas Färber
0565700d78 ide: Set BSY bit during FLUSH
The implementation of the ATA FLUSH command invokes a flush at the block
layer, which for raw files on POSIX may entail a synchronous fdatasync().
This may in some cases take so long that the SLES 11 SP1 guest driver
reports I/O errors and filesystems get corrupted or remounted read-only.

Avoid this by setting BUSY_STAT, so that the guest is made aware we are
in the middle of an operation and does not attempt to issue further ATA
commands concurrently.

Addresses BNC#637297.

Suggested-by: Gonglei (Arei) <arei.gonglei@huawei.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit f68ec8379e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 17:01:20 -05:00
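
A rough sketch of the shape of the change; BUSY_STAT/READY_STAT and
IDEState are QEMU's IDE names, while start_async_flush() and the callback
wiring are hypothetical stand-ins for the real block-layer flush call:

  /* Sketch only: raise BSY before the possibly slow flush and clear it
   * again from the completion callback. */
  static void ide_flush_done_sketch(IDEState *s)
  {
      s->status &= ~BUSY_STAT;
      s->status |= READY_STAT;                       /* flush finished */
  }

  static void ide_flush_cache_sketch(IDEState *s)
  {
      s->status |= BUSY_STAT;                        /* command in flight */
      start_async_flush(s, ide_flush_done_sketch);   /* hypothetical helper */
  }
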
Gerd Hoffmann
ddaa83eebe chardev: fix "info chardev" output
Fill unset CharDriverState->filename with the backend name, so
'info chardev' will return at least the chardev type.  Don't
touch it in case the chardev init function filled it already,
like the socket+pty chardevs do for example.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 60d95386ab)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:59:23 -05:00
Stefano Stabellini
38ec6c1071 xen_machine_pv: do not create a dummy CPU in machine->init
This fixes a regression introduced by:

commit 62fc403f11
Author: Igor Mammedov <imammedo@redhat.com>
Date:   Mon Apr 29 18:54:13 2013 +0200

    target-i386: Attach ICC bus to CPU on its creation

    X86CPU should have parent bus so it could provide bus for child APIC.

The commit makes it mandatory to pass a valid ICC bus to cpu_x86_create,
but cpu_x86_init just passes NULL to it.
xen_machine_pv uses cpu_x86_init, therefore it has been broken.

This patch fixes the problem by removing the dummy CPU creation
altogether from xen_init_pv, relying on the fact that QEMU can now cope
with a machine without an emulated CPU.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
CC: imammedo@redhat.com
CC: qemu-stable@nongnu.org
(cherry picked from commit 58ee9b0ae0)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:57:59 -05:00
Stefano Stabellini
951411fa36 main_loop: do not set nonblocking if xen_enabled()
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: qemu-stable@nongnu.org
(cherry picked from commit a7d4207d37)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:57:46 -05:00
Stefano Stabellini
5c26608027 xen: simplify xen_enabled
No need for preprocessor conditionals in xen_enabled: xen_allowed is
always defined.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: qemu-stable@nongnu.org
(cherry picked from commit 49fa9881b2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:57:38 -05:00
Peter Crosthwaite
3541912190 qom/object: Don't poll cast cache for NULL objects
object_dynamic_cast_assert used to be tolerant of NULL objects and not
assert. It's clear from the implementation that this is the expected
behavior.

However, the preceding check of the cast cache dereferences obj, causing
a segfault. Fix by conditionalizing the cast cache logic on obj being
non-null.

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Message-id: 8e2bef6a55753869c50bfa32226f7fcf0439ca62.1369183592.git.peter.crosthwaite@xilinx.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 95916abcf4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:47:52 -05:00
Stefan Hajnoczi
749806d1a7 rtl8139: flush queued packets when RxBufPtr is written
Net queues support efficient "receive disable".  For example, tap's file
descriptor will not be polled while its peer has receive disabled.  This
saves CPU cycles otherwise spent needlessly copying and then dropping packets which
the peer cannot receive.

rtl8139 is missing the qemu_flush_queued_packets() call that wakes the
queue up when receive becomes possible again.

As a result, the Windows 7 guest driver reaches a state where the
rtl8139 cannot receive packets.  The driver has actually refilled the
receive buffer but we never resume reception.

The bug can be reproduced by running a large FTP 'get' inside a Windows
7 guest:

  $ qemu -netdev tap,id=tap0,...
         -device rtl8139,netdev=tap0

The Linux guest driver does not trigger the bug, probably due to a
different buffer management strategy.

Reported-by: Oliver Francke <oliver.francke@filoo.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 00b7ade807)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:46:51 -05:00
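
A minimal sketch of the fix's shape; qemu_flush_queued_packets() and
qemu_get_queue() are QEMU's net-layer API, while the register-write
function around them is paraphrased:

  /* Sketch only: once the guest advances RxBufPtr (i.e. frees receive
   * buffer space), wake the net queue so packets parked while receive was
   * disabled get delivered again. */
  static void rtl8139_rxbufptr_write_sketch(RTL8139State *s, uint32_t val)
  {
      s->RxBufPtr = val;
      qemu_flush_queued_packets(qemu_get_queue(s->nic));
  }
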
Aneesh Kumar K.V
a6fc2cd986 hw/9pfs: use O_NOFOLLOW for mapped readlink operation
With mapped security models like mapped-xattr and mapped-file, we save the
symlink target as file contents. Now if we ever expose a normal directory
with a mapped security model and find real symlinks in the export path, never
follow them and return a proper error.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
(cherry picked from commit aed858ce10)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:45:59 -05:00
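
A standalone illustration of the rule being enforced (the helper name is
made up): under the mapped models the "symlink" is an ordinary file holding
the target, so it is opened with O_NOFOLLOW, and a real symlink in the
export path fails with ELOOP instead of being followed:

  #include <fcntl.h>
  #include <unistd.h>

  /* Illustrative helper: read a mapped symlink's target from file contents. */
  static ssize_t mapped_readlink_sketch(const char *path, char *buf, size_t len)
  {
      int fd = open(path, O_RDONLY | O_NOFOLLOW);   /* real symlink => ELOOP */
      ssize_t n;

      if (fd < 0) {
          return -1;
      }
      n = read(fd, buf, len);
      close(fd);
      return n;
  }
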
Aneesh Kumar K.V
eabdf85d86 hw/9pfs: Fix segfault with 9p2000.u
When the guest tries to chmod a block or char device file over 9pfs,
the qemu process segfaults. With the 9p2000.u protocol we use wstat to
change mode bits, and the client doesn't send extension information for
chmod. We need to check the size field to determine whether extension
info is present or not.

Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Acked-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
(cherry picked from commit c7e587b73e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-06-11 16:39:23 -05:00
2201 changed files with 149545 additions and 281715 deletions

153
.gitignore vendored

@@ -1,66 +1,65 @@
/config-devices.*
/config-all-devices.*
/config-all-disas.*
/config-host.*
/config-target.*
/config.status
/config-temp
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-ust-provider.h
/trace/generated-ust.c
/libcacard/trace/generated-tracers.c
config-devices.*
config-all-devices.*
config-all-disas.*
config-host.*
config-target.*
trace/generated-tracers.h
trace/generated-tracers.c
trace/generated-tracers-dtrace.h
trace/generated-tracers.dtrace
trace/generated-events.h
trace/generated-events.c
libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
/libdis*
/libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qmp-commands.h
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
/qemu-img-cmds.h
/qemu-io
/qemu-ga
/qemu-bridge-helper
/qemu-monitor.texi
/qmp-commands.txt
/vscclient
/fsdev/virtfs-proxy-helper
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
*-softmmu
*-darwin-user
*-linux-user
*-bsd-user
libdis*
libuser
linux-headers/asm
qapi-generated
qapi-types.[ch]
qapi-visit.[ch]
qmp-commands.h
qmp-marshal.c
qemu-doc.html
qemu-tech.html
qemu-doc.info
qemu-tech.info
qemu.1
qemu.pod
qemu-img.1
qemu-img.pod
qemu-img
qemu-nbd
qemu-nbd.8
qemu-nbd.pod
qemu-options.def
qemu-options.texi
qemu-img-cmds.texi
qemu-img-cmds.h
qemu-io
qemu-ga
qemu-bridge-helper
qemu-monitor.texi
vscclient
QMP/qmp-commands.txt
test-coroutine
test-qmp-input-visitor
test-qmp-output-visitor
test-string-input-visitor
test-string-output-visitor
test-visitor-serialization
fsdev/virtfs-proxy-helper
fsdev/virtfs-proxy-helper.1
fsdev/virtfs-proxy-helper.pod
.gdbinit
*.a
*.aux
*.cp
*.dvi
*.exe
*.dll
*.so
*.mo
*.fn
*.ky
*.log
@@ -74,31 +73,35 @@
*.tp
*.vr
*.d
!/scripts/qemu-guest-agent/fsfreeze-hook.d
!scripts/qemu-guest-agent/fsfreeze-hook.d
*.o
*.lo
*.la
*.pc
.libs
.sdk
*.swp
*.orig
.pc
*.patch
*.gcda
*.gcno
/pc-bios/bios-pq/status
/pc-bios/vgabios-pq/status
/pc-bios/optionrom/linuxboot.asm
/pc-bios/optionrom/linuxboot.bin
/pc-bios/optionrom/linuxboot.raw
/pc-bios/optionrom/linuxboot.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
/pc-bios/optionrom/multiboot.img
/pc-bios/optionrom/kvmvapic.asm
/pc-bios/optionrom/kvmvapic.bin
/pc-bios/optionrom/kvmvapic.raw
/pc-bios/optionrom/kvmvapic.img
/pc-bios/s390-ccw/s390-ccw.elf
/pc-bios/s390-ccw/s390-ccw.img
patches
pc-bios/bios-pq/status
pc-bios/vgabios-pq/status
pc-bios/optionrom/linuxboot.asm
pc-bios/optionrom/linuxboot.bin
pc-bios/optionrom/linuxboot.raw
pc-bios/optionrom/linuxboot.img
pc-bios/optionrom/multiboot.asm
pc-bios/optionrom/multiboot.bin
pc-bios/optionrom/multiboot.raw
pc-bios/optionrom/multiboot.img
pc-bios/optionrom/kvmvapic.asm
pc-bios/optionrom/kvmvapic.bin
pc-bios/optionrom/kvmvapic.raw
pc-bios/optionrom/kvmvapic.img
pc-bios/s390-ccw/s390-ccw.elf
pc-bios/s390-ccw/s390-ccw.img
.stgit-*
cscope.*
tags

19
.gitmodules vendored

@@ -1,30 +1,27 @@
[submodule "roms/vgabios"]
path = roms/vgabios
url = git://git.qemu-project.org/vgabios.git/
url = git://git.qemu.org/vgabios.git/
[submodule "roms/seabios"]
path = roms/seabios
url = git://git.qemu-project.org/seabios.git/
url = git://git.qemu.org/seabios.git/
[submodule "roms/SLOF"]
path = roms/SLOF
url = git://git.qemu-project.org/SLOF.git
url = git://git.qemu.org/SLOF.git
[submodule "roms/ipxe"]
path = roms/ipxe
url = git://git.qemu-project.org/ipxe.git
url = git://git.qemu.org/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
url = git://git.qemu-project.org/openbios.git
[submodule "roms/openhackware"]
path = roms/openhackware
url = git://git.qemu-project.org/openhackware.git
url = git://git.qemu.org/openbios.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = git://github.com/rth7680/qemu-palcode.git
url = git://repo.or.cz/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = git://git.qemu-project.org/sgabios.git
url = git://git.qemu.org/sgabios.git
[submodule "pixman"]
path = pixman
url = git://anongit.freedesktop.org/pixman
[submodule "dtc"]
path = dtc
url = git://git.qemu-project.org/dtc.git
url = git://git.qemu.org/dtc.git

View File

@@ -2,8 +2,7 @@
# into proper addresses so that they are counted properly in git shortlog output.
#
Andrzej Zaborowski <balrogg@gmail.com> balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
Anthony Liguori <aliguori@us.ibm.com> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162>
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>

View File

@@ -1,81 +0,0 @@
language: c
python:
- "2.4"
compiler:
- gcc
- clang
notifications:
irc:
channels:
- "irc.oftc.net#qemu"
on_success: change
on_failure: always
env:
global:
- TEST_CMD="make check"
- EXTRA_CONFIG=""
# Development packages, EXTRA_PKGS saved for additional builds
- CORE_PKGS="libusb-1.0-0-dev libiscsi-dev librados-dev libncurses5-dev"
- NET_PKGS="libseccomp-dev libgnutls-dev libssh2-1-dev libspice-server-dev libspice-protocol-dev libnss3-dev"
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user
- TARGETS=aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu
- TARGETS=i386-softmmu,x86_64-softmmu
- TARGETS=lm32-softmmu
- TARGETS=m68k-softmmu
- TARGETS=microblaze-softmmu,microblazeel-softmmu
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=moxie-softmmu
- TARGETS=or32-softmmu,
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu
- TARGETS=s390x-softmmu
- TARGETS=sh4-softmmu,sh4eb-softmmu
- TARGETS=sparc-softmmu,sparc64-softmmu
- TARGETS=unicore32-softmmu
- TARGETS=xtensa-softmmu,xtensaeb-softmmu
before_install:
- git submodule update --init --recursive
- sudo apt-get update -qq
- sudo apt-get install -qq ${CORE_PKGS} ${NET_PKGS} ${GUI_PKGS} ${EXTRA_PKGS}
script: "./configure --target-list=${TARGETS} ${EXTRA_CONFIG} && make && ${TEST_CMD}"
matrix:
# We manually include a number of additional build for non-standard bits
include:
# Debug related options
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="sparse"
EXTRA_CONFIG="--enable-sparse"
compiler: gcc
# All the trace backends (apart from dtrace)
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=stderr"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=simple"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=ftrace"
TEST_CMD=""
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backend=ust"
compiler: gcc

View File

@@ -84,10 +84,3 @@ and clarity it comes on a line by itself:
Rationale: a consistent (except for functions...) bracing style reduces
ambiguity and avoids needless churn when lines are added or removed.
Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.

View File

@@ -1,6 +1,6 @@
This file documents changes for QEMU releases 0.12 and earlier.
For changelog information for later releases, see
http://wiki.qemu-project.org/ChangeLog or look at the git history for
http://wiki.qemu.org/ChangeLog or look at the git history for
more detailed information.

19
HACKING

@@ -40,23 +40,8 @@ speaking, the size of guest memory can always fit into ram_addr_t but
it would not be correct to store an actual guest physical address in a
ram_addr_t.
For CPU virtual addresses there are several possible types.
vaddr is the best type to use to hold a CPU virtual address in
target-independent code. It is guaranteed to be large enough to hold a
virtual address for any target, and it does not change size from target
to target. It is always unsigned.
target_ulong is a type the size of a virtual address on the CPU; this means
it may be 32 or 64 bits depending on which target is being built. It should
therefore be used only in target-specific code, and in some
performance-critical built-per-target core code such as the TLB code.
There is also a signed version, target_long.
abi_ulong is for the *-user targets, and represents a type the size of
'void *' in that target's ABI. (This may not be the same as the size of a
full CPU virtual address in the case of target ABIs which use 32 bit pointers
on 64 bit CPUs, like sparc32plus.) Definitions of structures that must match
the target's ABI must use this type for anything that on the target is defined
to be an 'unsigned long' or a pointer type.
There is also a signed version, abi_long.
Use target_ulong (or abi_ulong) for CPU virtual addresses, however
devices should not need to use target_ulong.
Of course, take all of the above with a grain of salt. If you're about
to use some system interface that requires a type like size_t, pid_t or

15
LICENSE

@@ -1,21 +1,16 @@
The following points clarify the QEMU license:
1) QEMU as a whole is released under the GNU General Public License,
version 2.
1) QEMU as a whole is released under the GNU General Public License
2) Parts of QEMU have specific licenses which are compatible with the
GNU General Public License, version 2. Hence each source file contains
its own licensing information. Source files with no licensing information
are released under the GNU General Public License, version 2 or (at your
option) any later version.
GNU General Public License. Hence each source file contains its own
licensing information.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/misc/vfio.c, hw/xen/xen_pt*.
Many hardware device emulation sources are released under the BSD license.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).
4) QEMU is a trademark of Fabrice Bellard.
Fabrice Bellard and the QEMU team
Fabrice Bellard.

View File

@@ -50,14 +50,8 @@ Descriptions of section entries:
General Project Administration
------------------------------
M: Anthony Liguori <aliguori@amazon.com>
Responsible Disclosure, Reporting Security Issues
------------------------------
W: http://wiki.qemu.org/SecurityProcess
M: Michael S. Tsirkin <mst@redhat.com>
M: Anthony Liguori <aliguori@amazon.com>
L: secalert@redhat.com
M: Anthony Liguori <aliguori@us.ibm.com>
M: Paul Brook <paul@codesourcery.com>
Guest CPU cores (TCG):
----------------------
@@ -68,6 +62,7 @@ F: target-alpha/
F: hw/alpha/
ARM
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: target-arm/
@@ -85,10 +80,10 @@ M: Michael Walle <michael@walle.cc>
S: Maintained
F: target-lm32/
F: hw/lm32/
F: hw/char/lm32_*
M68K
S: Orphan
M: Paul Brook <paul@codesourcery.com>
S: Odd Fixes
F: target-m68k/
F: hw/m68k/
@@ -109,12 +104,6 @@ M: Anthony Green <green@moxielogic.com>
S: Maintained
F: target-moxie/
OpenRISC
M: Jia Liu <proljc@gmail.com>
S: Maintained
F: target-openrisc/
F: hw/openrisc/
PowerPC
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
@@ -165,7 +154,8 @@ Guest CPU Cores (KVM):
----------------------
Overall
M: Paolo Bonzini <pbonzini@redhat.com>
M: Gleb Natapov <gleb@redhat.com>
M: Marcelo Tosatti <mtosatti@redhat.com>
L: kvm@vger.kernel.org
S: Supported
F: kvm-*
@@ -182,14 +172,12 @@ S: Maintained
F: target-ppc/kvm.c
S390
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: target-s390x/kvm.c
F: hw/intc/s390_flic.[hc]
X86
M: Gleb Natapov <gleb@redhat.com>
M: Marcelo Tosatti <mtosatti@redhat.com>
L: kvm@vger.kernel.org
S: Supported
@@ -227,33 +215,20 @@ F: *win32*
ARM Machines
------------
Allwinner-a10
M: Li Guang <lig.fnst@cn.fujitsu.com>
S: Maintained
F: hw/*/allwinner-a10*
F: include/hw/*/allwinner-a10*
F: hw/arm/cubieboard.c
Exynos
M: Evgeny Voevodin <e.voevodin@samsung.com>
M: Maksim Kozlov <m.kozlov@samsung.com>
M: Igor Mitsyanko <i.mitsyanko@gmail.com>
M: Igor Mitsyanko <i.mitsyanko@samsung.com>
M: Dmitry Solodkiy <d.solodkiy@samsung.com>
S: Maintained
F: hw/*/exynos*
Calxeda Highbank
M: Rob Herring <robh@kernel.org>
S: Maintained
M: Mark Langsdorf <mark.langsdorf@calxeda.com>
S: Supported
F: hw/arm/highbank.c
F: hw/net/xgmac.c
Canon DIGIC
M: Antony Pavlov <antonynpavlov@gmail.com>
S: Maintained
F: include/hw/arm/digic.h
F: hw/*/digic*
Gumstix
M: qemu-devel@nongnu.org
S: Orphan
@@ -266,6 +241,7 @@ F: hw/*/imx*
F: hw/arm/kzm.c
Integrator CP
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/arm/integratorcp.c
@@ -291,6 +267,7 @@ S: Maintained
F: hw/arm/palm.c
Real View
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/arm/realview*
@@ -301,17 +278,19 @@ S: Maintained
F: hw/arm/spitz.c
Stellaris
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/*/stellaris*
Versatile PB
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/*/versatile*
Xilinx Zynq
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/arm/xilinx_zynq.c
F: hw/misc/zynq_slcr.c
@@ -324,7 +303,11 @@ Axis Dev88
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/cris/axis_dev88.c
F: hw/*/etraxfs_*.c
etraxfs
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/cris/etraxfs.c
LM32 Machines
-------------
@@ -341,15 +324,18 @@ F: hw/lm32/milkymist.c
M68K Machines
-------------
an5206
S: Orphan
M: Paul Brook <paul@codesourcery.com>
S: Maintained
F: hw/m68k/an5206.c
dummy_m68k
S: Orphan
M: Paul Brook <paul@codesourcery.com>
S: Maintained
F: hw/m68k/dummy_m68k.c
mcf5208
S: Orphan
M: Paul Brook <paul@codesourcery.com>
S: Maintained
F: hw/m68k/mcf5208.c
MicroBlaze Machines
@@ -357,10 +343,10 @@ MicroBlaze Machines
petalogix_s3adsp1800
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/microblaze/petalogix_s3adsp1800_mmu.c
F: hw/microblaze/petalogix_s3adsp1800.c
petalogix_ml605
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/microblaze/petalogix_ml605_mmu.c
@@ -386,13 +372,6 @@ M: Aurelien Jarno <aurelien@aurel32.net>
S: Maintained
F: hw/mips/mips_r4k.c
OpenRISC Machines
-----------------
or1k-sim
M: Jia Liu <proljc@gmail.com>
S: Maintained
F: hw/openrisc/openrisc_sim.c
PowerPC Machines
----------------
405
@@ -428,8 +407,8 @@ M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc/mac_newworld.c
F: hw/pci-host/uninorth.c
F: hw/pci-bridge/dec.[hc]
F: hw/pci/devices/host-uninorth.c
F: hw/pci/devices/host-dec.[hc]
F: hw/misc/macio/
Old World
@@ -437,7 +416,7 @@ M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc/mac_oldworld.c
F: hw/pci-host/grackle.c
F: hw/pci/devices/host-grackle.c
F: hw/misc/macio/
PReP
@@ -445,36 +424,33 @@ M: Andreas Färber <andreas.faerber@web.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/prep.c
F: hw/pci-host/prep.[hc]
F: hw/pci/devices/host-prep.[hc]
F: hw/isa/pc87312.[hc]
sPAPR
M: David Gibson <david@gibson.dropbear.id.au>
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Supported
F: hw/*/spapr*
F: include/hw/*/spapr*
F: hw/*/xics*
F: include/hw/*/xics*
F: pc-bios/spapr-rtas/*
virtex_ml507
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/virtex_ml507.c
F: hw/pci/virtex_ml507.c
SH4 Machines
------------
R2D
M: Magnus Damm <magnus.damm@gmail.com>
S: Maintained
F: hw/sh4/r2d.c
F: hw/sh/r2d.c
Shix
M: Magnus Damm <magnus.damm@gmail.com>
S: Orphan
F: hw/sh4/shix.c
F: hw/sh/shix.c
SPARC Machines
--------------
@@ -499,17 +475,14 @@ S390 Machines
S390 Virtio
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: hw/s390x/s390-*.c
F: hw/s390/s390-*.c
S390 Virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Alexander Graf <agraf@suse.de>
S: Supported
F: hw/s390x/s390-virtio-ccw.c
F: hw/s390x/css.[hc]
F: hw/s390x/sclp*.[hc]
F: hw/s390x/ipl*.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
UniCore32 Machines
@@ -523,24 +496,10 @@ F: hw/unicore32/
X86 Machines
------------
PC
M: Anthony Liguori <aliguori@amazon.com>
M: Michael S. Tsirkin <mst@redhat.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: include/hw/i386/
F: hw/i386/
F: hw/pci-host/piix.c
F: hw/pci-host/q35.c
F: hw/pci-host/pam.c
F: include/hw/pci-host/q35.h
F: include/hw/pci-host/pam.h
F: hw/isa/piix4.c
F: hw/isa/lpc_ich9.c
F: hw/i2c/smbus_ich9.c
F: hw/acpi/piix4.c
F: hw/acpi/ich9.c
F: include/hw/acpi/ich9.h
F: include/hw/acpi/piix.h
F: hw/i386/pc.[ch]
F: hw/i386/pc_piix.c
Xtensa Machines
---------------
@@ -585,7 +544,7 @@ M: Alexander Graf <agraf@suse.de>
M: Scott Wood <scottwood@freescale.com>
L: qemu-ppc@nongnu.org
S: Supported
F: hw/ppc/e500*
F: hw/ppc/e500_*
SCSI
M: Paolo Bonzini <pbonzini@redhat.com>
@@ -595,11 +554,12 @@ F: hw/scsi/*
T: git git://github.com/bonzini/qemu.git scsi-next
LSI53C895A
S: Orphan
M: Paul Brook <paul@codesourcery.com>
S: Odd Fixes
F: hw/scsi/lsi53c895a.c
SSI
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/ssi/*
F: hw/block/m25p80.c
@@ -608,12 +568,11 @@ USB
M: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: hw/usb/*
F: tests/usb-hcd-ehci-test.c
VFIO
M: Alex Williamson <alex.williamson@redhat.com>
S: Supported
F: hw/misc/vfio.c
F: hw/pci/vfio.c
vhost
M: Michael S. Tsirkin <mst@redhat.com>
@@ -621,8 +580,7 @@ S: Supported
F: hw/*/*vhost*
virtio
M: Anthony Liguori <aliguori@amazon.com>
M: Michael S. Tsirkin <mst@redhat.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: hw/*/virtio*
@@ -631,7 +589,6 @@ M: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
S: Supported
F: hw/9pfs/
F: fsdev/
F: tests/virtio-9p-test.c
T: git git://github.com/kvaneesh/QEMU.git
virtio-blk
@@ -642,7 +599,6 @@ F: hw/block/virtio-blk.c
virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
S: Supported
F: hw/s390x/virtio-ccw.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
@@ -653,20 +609,8 @@ S: Supported
F: hw/char/virtio-serial-bus.c
F: hw/char/virtio-console.c
nvme
M: Keith Busch <keith.busch@intel.com>
S: Supported
F: hw/block/nvme*
F: tests/nvme-test.c
megasas
M: Hannes Reinecke <hare@suse.de>
S: Supported
F: hw/scsi/megasas.c
F: hw/scsi/mfi.h
Xilinx EDK
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/*/xilinx_*
@@ -676,13 +620,9 @@ Subsystems
----------
Audio
M: Vassili Karpov (malc) <av1474@comtv.ru>
M: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: audio/
F: hw/audio/
F: tests/ac97-test.c
F: tests/es1370-test.c
F: tests/intel-hda-test.c
Block
M: Kevin Wolf <kwolf@redhat.com>
@@ -691,13 +631,9 @@ S: Supported
F: block*
F: block/
F: hw/block/
F: qemu-img*
F: qemu-io*
T: git git://repo.or.cz/qemu/kevin.git block
T: git git://github.com/stefanha/qemu.git block
Character Devices
M: Anthony Liguori <aliguori@amazon.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: qemu-char.c
@@ -705,7 +641,7 @@ CPU
M: Andreas Färber <afaerber@suse.de>
S: Supported
F: qom/cpu.c
F: include/qom/cpu.h
F: include/qemu/cpu.h
F: target-i386/cpu.c
ICC Bus
@@ -715,10 +651,10 @@ F: include/hw/cpu/icc_bus.h
F: hw/cpu/icc_bus.c
Device Tree
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: device_tree.[ch]
F: device-tree.[ch]
GDB stub
M: qemu-devel@nongnu.org
@@ -729,51 +665,39 @@ F: gdb-xml/
SPICE
M: Gerd Hoffmann <kraxel@redhat.com>
S: Supported
F: include/ui/qemu-spice.h
F: ui/qemu-spice.h
F: ui/spice-*.c
F: audio/spiceaudio.c
F: hw/display/qxl*
Graphics
M: Anthony Liguori <aliguori@amazon.com>
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: ui/
Cocoa graphics
M: Andreas Färber <andreas.faerber@web.de>
M: Peter Maydell <peter.maydell@linaro.org>
S: Odd Fixes
F: ui/cocoa.m
Main loop
M: Anthony Liguori <aliguori@amazon.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: vl.c
Human Monitor (HMP)
Monitor (QMP/HMP)
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: monitor.c
F: hmp.c
F: hmp-commands.hx
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
Network device layer
M: Anthony Liguori <aliguori@amazon.com>
M: Anthony Liguori <aliguori@us.ibm.com>
M: Stefan Hajnoczi <stefanha@redhat.com>
S: Maintained
F: net/
T: git git://github.com/stefanha/qemu.git net
Netmap network backend
M: Luigi Rizzo <rizzo@iet.unipi.it>
M: Giuseppe Lettieri <g.lettieri@iet.unipi.it>
M: Vincenzo Maffione <v.maffione@gmail.com>
W: http://info.iet.unipi.it/~luigi/netmap/
S: Maintained
F: net/netmap.c
Network Block Device (NBD)
M: Paolo Bonzini <pbonzini@redhat.com>
S: Odd Fixes
@@ -782,41 +706,6 @@ F: nbd.*
F: qemu-nbd.c
T: git git://github.com/bonzini/qemu.git nbd-next
QAPI
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Michael Roth <mdroth@linux.vnet.ibm.com>
S: Maintained
F: qapi/
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QAPI Schema
M: Eric Blake <eblake@redhat.com>
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: qapi-schema.json
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QOM
M: Anthony Liguori <aliguori@amazon.com>
M: Andreas Färber <afaerber@suse.de>
S: Supported
T: git git://github.com/afaerber/qemu-cpu.git qom-next
F: include/qom/
X: include/qom/cpu.h
F: qom/
X: qom/cpu.c
F: tests/qom-test.c
QMP
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
F: qmp.c
F: monitor.c
F: qmp-commands.hx
F: QMP/
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
SLIRP
M: Jan Kiszka <jan.kiszka@siemens.com>
S: Maintained
@@ -837,12 +726,6 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Odd Fixes
F: scripts/checkpatch.pl
Seccomp
M: Eduardo Otubo <otubo@linux.vnet.ibm.com>
S: Supported
F: qemu-seccomp.c
F: include/sysemu/seccomp.h
Usermode Emulation
------------------
BSD user
@@ -859,21 +742,19 @@ Tiny Code Generator (TCG)
-------------------------
Common code
M: qemu-devel@nongnu.org
M: Richard Henderson <rth@twiddle.net>
S: Maintained
F: tcg/
AArch64 target
M: Claudio Fontana <claudio.fontana@huawei.com>
M: Claudio Fontana <claudio.fontana@gmail.com>
S: Maintained
F: tcg/aarch64/
ARM target
M: Andrzej Zaborowski <balrogg@gmail.com>
S: Maintained
F: tcg/arm/
HPPA target
M: Richard Henderson <rth@twiddle.net>
S: Maintained
F: tcg/hppa/
i386 target
M: qemu-devel@nongnu.org
S: Maintained
@@ -914,73 +795,25 @@ TCI target
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: tcg/tci/
F: tci.c
Stable branches
---------------
Stable 1.0
L: qemu-stable@nongnu.org
T: git git://git.qemu-project.org/qemu-stable-1.0.git
T: git git://git.qemu.org/qemu-stable-1.0.git
S: Orphan
Stable 0.15
L: qemu-stable@nongnu.org
M: Andreas Färber <afaerber@suse.de>
T: git git://git.qemu-project.org/qemu-stable-0.15.git
S: Supported
T: git git://git.qemu.org/qemu-stable-0.15.git
S: Orphan
Stable 0.14
L: qemu-stable@nongnu.org
T: git git://git.qemu-project.org/qemu-stable-0.14.git
T: git git://git.qemu.org/qemu-stable-0.14.git
S: Orphan
Stable 0.10
L: qemu-stable@nongnu.org
T: git git://git.qemu-project.org/qemu-stable-0.10.git
T: git git://git.qemu.org/qemu-stable-0.10.git
S: Orphan
Block drivers
-------------
VMDK
M: Fam Zheng <famz@redhat.com>
S: Supported
F: block/vmdk.c
RBD
M: Josh Durgin <josh.durgin@inktank.com>
S: Supported
F: block/rbd.c
Sheepdog
M: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
M: Liu Yuan <namei.unix@gmail.com>
L: sheepdog@lists.wpkg.org
S: Supported
F: block/sheepdog.c
VHDX
M: Jeff Cody <jcody@redhat.com>
S: Supported
F: block/vhdx*
VDI
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: block/vdi.c
iSCSI
M: Ronnie Sahlberg <ronniesahlberg@gmail.com>
M: Paolo Bonzini <pbonzini@redhat.com>
M: Peter Lieven <pl@kamp.de>
S: Supported
F: block/iscsi.c
NFS
M: Peter Lieven <pl@kamp.de>
S: Maintained
F: block/nfs.c
SSH
M: Richard W.M. Jones <rjones@redhat.com>
S: Supported
F: block/ssh.c

187
Makefile

@@ -28,14 +28,7 @@ CONFIG_ALL=y
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@echo $@ is out-of-date, running configure
@# TODO: The next lines include code which supports a smooth
@# transition from old configurations without config.status.
@# This code can be removed after QEMU 1.7.
@if test -x config.status; then \
./config.status; \
else \
sed -n "/.*Configured with/s/[^:]*: //p" $@ | sh; \
fi
@sed -n "/.*Configured with/s/[^:]*: //p" $@ | sh
else
config-host.mak:
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@@ -57,11 +50,6 @@ GENERATED_HEADERS += trace/generated-tracers-dtrace.h
endif
GENERATED_SOURCES += trace/generated-tracers.c
ifeq ($(TRACE_BACKEND),ust)
GENERATED_HEADERS += trace/generated-ust-provider.h
GENERATED_SOURCES += trace/generated-ust.c
endif
# Don't try to regenerate Makefile or configure
# We don't generate any of them
Makefile: ;
@@ -77,7 +65,7 @@ LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qmp-commands.txt
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 QMP/qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
@@ -127,26 +115,13 @@ defconfig:
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
ifeq ($(CONFIG_SMARTCARD_NSS),y)
include $(SRC_PATH)/libcacard/Makefile
endif
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all modules
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
@@ -156,7 +131,6 @@ qemu-options.def: $(SRC_PATH)/qemu-options.hx
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): $(block-obj-y)
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
@@ -172,11 +146,10 @@ $(SRC_PATH)/pixman/configure:
(cd $(SRC_PATH)/pixman; autoreconf -v --install)
DTC_MAKE_ARGS=-I$(SRC_PATH)/dtc VPATH=$(SRC_PATH)/dtc -C dtc V="$(V)" LIBFDT_srcdir=$(SRC_PATH)/dtc/libfdt
DTC_CFLAGS=$(CFLAGS) $(QEMU_CFLAGS)
DTC_CPPFLAGS=-I$(BUILD_DIR)/dtc -I$(SRC_PATH)/dtc -I$(SRC_PATH)/dtc/libfdt
DTC_CFLAGS=$(CFLAGS) $(QEMU_CFLAGS) -I$(BUILD_DIR)/dtc -I$(SRC_PATH)/dtc -I$(SRC_PATH)/dtc/libfdt
subdir-dtc:dtc/libfdt dtc/tests
$(call quiet-command,$(MAKE) $(DTC_MAKE_ARGS) CPPFLAGS="$(DTC_CPPFLAGS)" CFLAGS="$(DTC_CFLAGS)" LDFLAGS="$(LDFLAGS)" ARFLAGS="$(ARFLAGS)" CC="$(CC)" AR="$(AR)" LD="$(LD)" $(SUBDIR_MAKEFLAGS) libfdt/libfdt.a,)
$(call quiet-command,$(MAKE) $(DTC_MAKE_ARGS) CFLAGS="$(DTC_CFLAGS)" LDFLAGS="$(LDFLAGS)" ARFLAGS="$(ARFLAGS)" CC="$(CC)" AR="$(AR)" LD="$(LD)" $(SUBDIR_MAKEFLAGS) libfdt/libfdt.a,)
dtc/%:
mkdir -p $@
@@ -191,10 +164,13 @@ ALL_SUBDIRS=$(TARGET_DIRS) $(patsubst %,pc-bios/%, $(ROMS))
recurse-all: $(SUBDIR_RULES) $(ROMSUBDIR_RULES)
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h | $(BUILD_DIR)/version.lo
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.o")
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.lo")
bt-host.o: QEMU_CFLAGS += $(BLUEZ_CFLAGS)
version.o: $(SRC_PATH)/version.rc config-host.h | version.lo
version.lo: $(SRC_PATH)/version.rc config-host.h
version-obj-$(CONFIG_WIN32) += version.o
version-lobj-$(CONFIG_WIN32) += version.lo
Makefile: $(version-obj-y) $(version-lobj-y)
@@ -202,10 +178,7 @@ Makefile: $(version-obj-y) $(version-lobj-y)
# Build libraries
libqemustub.a: $(stub-obj-y)
libqemuutil.a: $(util-obj-y) qapi-types.o qapi-visit.o
block-modules = $(foreach o,$(block-obj-m),"$(basename $(subst /,-,$o))",) NULL
util/module.o-cflags = -D'CONFIG_BLOCK_MODULES=$(block-modules)'
libqemuutil.a: $(util-obj-y)
######################################################################
@@ -213,7 +186,7 @@ qemu-img.o: qemu-img-cmds.h
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o cmd.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
@@ -232,35 +205,23 @@ qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qapi-types.c qapi-types.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "." < $<, " GEN $@")
qapi-visit.c qapi-visit.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "." < $<, " GEN $@")
qmp-commands.h qmp-marshal.c :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o "." -m -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -m -o "." < $<, " GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
@@ -272,10 +233,10 @@ clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
find . -name '*.[oda]' -type f -exec rm -f {} +
find . -name '*.l[oa]' -type f -exec rm -f {} +
rm -f $(TOOLS) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -Rf .libs
rm -f qemu-img-cmds.h
@# May not be present in GENERATED_HEADERS
rm -f trace/generated-tracers-dtrace.dtrace*
@@ -284,6 +245,7 @@ clean:
rm -f $(foreach f,$(GENERATED_SOURCES),$(f) $(f)-timestamp)
rm -rf qapi-generated
rm -rf qga/qapi-generated
$(MAKE) -C tests/tcg clean
for d in $(ALL_SUBDIRS); do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
@@ -299,7 +261,6 @@ qemu-%.tar.bz2:
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak config-all-disas.mak
rm -f po/*.mo
rm -f roms/seabios/config.mak roms/vgabios/config.mak
rm -f qemu-doc.info qemu-doc.aux qemu-doc.cp qemu-doc.cps qemu-doc.dvi
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
@@ -311,25 +272,24 @@ distclean: clean
for d in $(TARGET_DIRS); do \
rm -rf $$d || exit 1 ; \
done
rm -Rf .sdk
if test -f pixman/config.log; then make -C pixman distclean; fi
if test -f dtc/version_gen.h; then make $(DTC_MAKE_ARGS) clean; fi
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr \
bepo cz
bepo
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom \
efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom \
qemu-icon.bmp qemu_logo_no_text.svg \
qemu-icon.bmp \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
s390-zipl.rom \
@@ -343,7 +303,7 @@ endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) QMP/qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 "$(DESTDIR)$(mandir)/man1"
@@ -361,42 +321,20 @@ endif
install-datadir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)"
install-localstatedir:
ifdef CONFIG_POSIX
ifneq (,$(findstring qemu-ga,$(TOOLS)))
$(INSTALL_DIR) "$(DESTDIR)$(qemu_localstatedir)"/run
endif
endif
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig \
install-datadir install-localstatedir
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig install-datadir
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
ifneq ($(TOOLS),)
$(INSTALL_PROG) $(TOOLS) "$(DESTDIR)$(bindir)"
ifneq ($(STRIP),)
$(STRIP) $(TOOLS:%="$(DESTDIR)$(bindir)/%")
endif
endif
ifneq ($(CONFIG_MODULES),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_moddir)"
for s in $(modules-m:.mo=$(DSOSUF)); do \
t="$(DESTDIR)$(qemu_moddir)/$$(echo $$s | tr / -)"; \
$(INSTALL_LIB) $$s "$$t"; \
test -z "$(STRIP)" || $(STRIP) "$$t"; \
done
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(HELPERS-y),)
$(INSTALL_DIR) "$(DESTDIR)$(libexecdir)"
$(INSTALL_PROG) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
ifneq ($(STRIP),)
$(STRIP) $(HELPERS-y:%="$(DESTDIR)$(libexecdir)/%")
endif
$(INSTALL_PROG) $(STRIP_OPT) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
endif
ifneq ($(BLOBS),)
set -e; for x in $(BLOBS); do \
@@ -411,7 +349,7 @@ endif
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
done
for d in $(TARGET_DIRS); do \
$(MAKE) $(SUBDIR_MAKEFLAGS) TARGET_DIR=$$d/ -C $$d $@ || exit 1 ; \
$(MAKE) -C $$d $@ || exit 1 ; \
done
# various test targets
@@ -451,7 +389,7 @@ qemu-options.texi: $(SRC_PATH)/qemu-options.hx
qemu-monitor.texi: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
QMP/qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -q < $< > $@," GEN $@")
qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
@@ -490,61 +428,6 @@ qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi
ifdef CONFIG_WIN32
INSTALLER = qemu-setup-$(VERSION)$(EXESUF)
nsisflags = -V2 -NOCD
ifneq ($(wildcard $(SRC_PATH)/dll),)
ifeq ($(ARCH),x86_64)
# 64 bit executables
DLL_PATH = $(SRC_PATH)/dll/w64
nsisflags += -DW64
else
# 32 bit executables
DLL_PATH = $(SRC_PATH)/dll/w32
endif
endif
.PHONY: installer
installer: $(INSTALLER)
INSTDIR=/tmp/qemu-nsis
$(INSTALLER): $(SRC_PATH)/qemu.nsi
make install prefix=${INSTDIR}
ifdef SIGNCODE
(cd ${INSTDIR}; \
for i in *.exe; do \
$(SIGNCODE) $${i}; \
done \
)
endif # SIGNCODE
(cd ${INSTDIR}; \
for i in qemu-system-*.exe; do \
arch=$${i%.exe}; \
arch=$${arch#qemu-system-}; \
echo Section \"$$arch\" Section_$$arch; \
echo SetOutPath \"\$$INSTDIR\"; \
echo File \"\$${BINDIR}\\$$i\"; \
echo SectionEnd; \
done \
) >${INSTDIR}/system-emulations.nsh
makensis $(nsisflags) \
$(if $(BUILD_DOCS),-DCONFIG_DOCUMENTATION="y") \
$(if $(CONFIG_GTK),-DCONFIG_GTK="y") \
-DBINDIR="${INSTDIR}" \
$(if $(DLL_PATH),-DDLLDIR="$(DLL_PATH)") \
-DSRCDIR="$(SRC_PATH)" \
-DOUTFILE="$(INSTALLER)" \
$(SRC_PATH)/qemu.nsi
rm -r ${INSTDIR}
ifdef SIGNCODE
$(SIGNCODE) $(INSTALLER)
endif # SIGNCODE
endif # CONFIG_WIN
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))

@@ -13,14 +13,16 @@ block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y += block/
block-obj-y += qapi-types.o qapi-visit.o
block-obj-y += qemu-io-cmds.o
block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
block-obj-m = block/
ifeq ($(CONFIG_VIRTIO)$(CONFIG_VIRTFS)$(CONFIG_PCI),yyy)
# Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
# only pull in the actual virtio-9p device if we also enabled virtio.
CONFIG_REALLY_VIRTFS=y
endif
######################################################################
# smartcard
@@ -31,8 +33,6 @@ libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
libcacard-y += libcacard/vcardt.o
libcacard/vcard_emul_nss.o-cflags := $(NSS_CFLAGS)
libcacard/vcard_emul_nss.o-libs := $(NSS_LIBS)
######################################################################
# Target independent part of system emulation. The long term path is to
@@ -40,9 +40,9 @@ libcacard/vcard_emul_nss.o-libs := $(NSS_LIBS)
# single QEMU executable should support all CPUs and machines.
ifeq ($(CONFIG_SOFTMMU),y)
common-obj-y = blockdev.o blockdev-nbd.o block/
common-obj-y += iothread.o
common-obj-y = $(block-obj-y) blockdev.o blockdev-nbd.o block/
common-obj-y += net/
common-obj-y += readline.o
common-obj-y += qdev-monitor.o device-hotplug.o
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
@@ -50,9 +50,6 @@ common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration.o migration-tcp.o
common-obj-y += vmstate.o
common-obj-y += qemu-file.o
common-obj-$(CONFIG_RDMA) += migration-rdma.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += block-migration.o
common-obj-y += page_cache.o xbzrle.o
@@ -66,11 +63,9 @@ common-obj-y += hw/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
common-obj-y += dma-helpers.o
common-obj-y += vl.o
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
common-obj-y += tpm.o
common-obj-$(CONFIG_SLIRP) += slirp/
@@ -101,15 +96,23 @@ common-obj-y += hw/
common-obj-y += qom/
common-obj-y += disas/
######################################################################
# Resource file for Windows executables
version-obj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.o
version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
######################################################################
# guest agent
# FIXME: a few definitions from qapi-types.o/qapi-visit.o are needed
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/ qapi-types.o qapi-visit.o
qga-vss-dll-obj-y = qga/
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
QEMU_CFLAGS+=$(GLIB_CFLAGS)
nested-vars += \
stub-obj-y \
util-obj-y \
qga-obj-y \
block-obj-y \
common-obj-y
dummy := $(call unnest-vars)

@@ -15,30 +15,27 @@ QEMU_CFLAGS+=-I$(SRC_PATH)/include
ifdef CONFIG_USER_ONLY
# user emulator name
QEMU_PROG=qemu-$(TARGET_NAME)
QEMU_PROG_BUILD = $(QEMU_PROG)
QEMU_PROG=qemu-$(TARGET_ARCH2)
else
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
ifneq (,$(findstring -mwindows,$(libs_softmmu)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_NAME)w$(EXESUF)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
QEMU_PROG_BUILD = $(QEMU_PROGW)
else
QEMU_PROG_BUILD = $(QEMU_PROG)
endif
QEMU_PROGW=qemu-system-$(TARGET_ARCH2)w$(EXESUF)
endif # windows executable
QEMU_PROG=qemu-system-$(TARGET_ARCH2)$(EXESUF)
endif
PROGS=$(QEMU_PROG) $(QEMU_PROGW)
PROGS=$(QEMU_PROG)
ifdef QEMU_PROGW
PROGS+=$(QEMU_PROGW)
endif
STPFILES=
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp
stap: $(QEMU_PROG).stp
ifdef CONFIG_USER_ONLY
TARGET_TYPE=user
@@ -46,24 +43,14 @@ else
TARGET_TYPE=system
endif
$(QEMU_PROG).stp-installed: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backend=$(TRACE_BACKEND) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp-installed")
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backend=$(TRACE_BACKEND) \
--binary=$(realpath .)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-arch=$(TARGET_ARCH) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp")
else
stap:
endif
@@ -73,6 +60,12 @@ all: $(PROGS) stap
# Dummy command so that make thinks it has done something
@true
CONFIG_NO_PCI = $(if $(subst n,,$(CONFIG_PCI)),n,y)
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
CONFIG_NO_GET_MEMORY_MAPPING = $(if $(subst n,,$(CONFIG_HAVE_GET_MEMORY_MAPPING)),n,y)
CONFIG_NO_CORE_DUMP = $(if $(subst n,,$(CONFIG_HAVE_CORE_DUMP)),n,y)
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
@@ -82,8 +75,8 @@ obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-y += disas.o
obj-$(call notempty,$(TARGET_XML_FILES)) += gdbstub-xml.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
#########################################################
# Linux user emulator target
@@ -102,7 +95,7 @@ endif #CONFIG_LINUX_USER
ifdef CONFIG_BSD_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR)
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ARCH)
obj-y += bsd-user/
obj-y += gdbstub.o user-exec.o
@@ -118,23 +111,25 @@ obj-y += hw/
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
obj-$(CONFIG_HAVE_GET_MEMORY_MAPPING) += memory_mapping.o
obj-$(CONFIG_HAVE_CORE_DUMP) += dump.o
obj-$(CONFIG_NO_GET_MEMORY_MAPPING) += memory_mapping-stub.o
obj-$(CONFIG_NO_CORE_DUMP) += dump-stub.o
LIBS+=$(libs_softmmu)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o
obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
obj-$(CONFIG_NO_XEN) += xen-stub.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
ifeq ($(TARGET_ARCH), sparc64)
obj-y += hw/sparc64/
else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
@@ -142,27 +137,28 @@ endif # CONFIG_SOFTMMU
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
nested-vars += obj-y
block-obj-y :=
common-obj-y :=
# This resolves all nested paths, so it must come last
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
all-obj-y += $(common-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
all-obj-y = $(obj-y)
all-obj-y += $(addprefix ../, $(common-obj-y))
ifndef CONFIG_HAIKU
LIBS+=-lm
endif
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
ifdef QEMU_PROGW
# The linker builds a windows executable. Make also a console executable.
$(QEMU_PROGW): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK,$^)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
else
$(QEMU_PROG): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK,$^)
endif
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
@@ -183,14 +179,14 @@ endif
install: all
ifneq ($(PROGS),)
$(INSTALL_PROG) $(PROGS) "$(DESTDIR)$(bindir)"
$(INSTALL) -m 755 $(PROGS) "$(DESTDIR)$(bindir)"
ifneq ($(STRIP),)
$(STRIP) $(PROGS:%="$(DESTDIR)$(bindir)/%")
$(STRIP) $(patsubst %,"$(DESTDIR)$(bindir)/%",$(PROGS))
endif
endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp-installed "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG).stp"
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
endif
GENERATED_HEADERS += config-target.h

QMP/README (new file, 88 lines)

@@ -0,0 +1,88 @@
QEMU Monitor Protocol
=====================
Introduction
-------------
The QEMU Monitor Protocol (QMP) allows applications to communicate with
QEMU's Monitor.
QMP is JSON[1] based and currently has the following features:
- Lightweight, text-based, easy to parse data format
- Asynchronous messages support (ie. events)
- Capabilities Negotiation
For detailed information on QMP's usage, please, refer to the following files:
o qmp-spec.txt QEMU Monitor Protocol current specification
o qmp-commands.txt QMP supported commands (auto-generated at build-time)
o qmp-events.txt List of available asynchronous events
There is also a simple Python script called 'qmp-shell' available.
IMPORTANT: It's strongly recommended to read the 'Stability Considerations'
section in the qmp-commands.txt file before making any serious use of QMP.
[1] http://www.json.org
Usage
-----
To enable QMP, you need a QEMU monitor instance in "control mode". There are
two ways of doing this.
The simplest one is using the '-qmp' command-line option. The following
example makes QMP available on localhost port 4444:
$ qemu [...] -qmp tcp:localhost:4444,server
However, in order to have more complex combinations, like multiple monitors,
the '-mon' command-line option should be used along with the '-chardev' one.
For instance, the following example creates one user monitor on stdio and one
QMP monitor on localhost port 4444.
$ qemu [...] -chardev stdio,id=mon0 -mon chardev=mon0,mode=readline \
-chardev socket,id=mon1,host=localhost,port=4444,server \
-mon chardev=mon1,mode=control
Please, refer to QEMU's manpage for more information.
Simple Testing
--------------
To manually test QMP one can connect with telnet and issue commands by hand:
$ telnet localhost 4444
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 13, "major": 0}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "query-version" }
{"return": {"qemu": {"micro": 50, "minor": 13, "major": 0}, "package": ""}}
Development Process
-------------------
When changing QMP's interface (by adding new commands, events or modifying
existing ones) it's mandatory to update the relevant documentation, which is
one (or more) of the files listed in the 'Introduction' section*.
Also, it's strongly recommended to send the documentation patch first, before
doing any code change. This is so because:
1. Avoids the code dictating the interface
2. Review can improve your interface. Letting that happen before
you implement it can save you work.
* The qmp-commands.txt file is generated from the qmp-commands.hx one, which
is the file that should be edited.
Homepage
--------
http://wiki.qemu.org/QMP

@@ -33,7 +33,7 @@
# $ qemu-ga-client fsfreeze freeze
# 2 filesystems frozen
#
# See also: http://wiki.qemu-project.org/Features/QAPI/GuestAgent
# See also: http://wiki.qemu.org/Features/QAPI/GuestAgent
#
import base64
@@ -267,9 +267,7 @@ def main(address, cmd, args):
print('Hint: qemu is not running?')
sys.exit(1)
if cmd == 'fsfreeze' and args[0] == 'freeze':
client.sync(60)
elif cmd != 'ping':
if cmd != 'ping':
client.sync()
globals()['_cmd_' + cmd](client, args)

@@ -1,4 +1,4 @@
QEMU Machine Protocol Events
QEMU Monitor Protocol Events
============================
BALLOON_CHANGE
@@ -18,28 +18,6 @@ Example:
"data": { "actual": 944766976 },
"timestamp": { "seconds": 1267020223, "microseconds": 435656 } }
BLOCK_IMAGE_CORRUPTED
---------------------
Emitted when a disk image is being marked corrupt.
Data:
- "device": Device name (json-string)
- "msg": Informative message (e.g., reason for the corruption) (json-string)
- "offset": If the corruption resulted from an image access, this is the access
offset into the image (json-int)
- "size": If the corruption resulted from an image access, this is the access
size (json-int)
Example:
{ "event": "BLOCK_IMAGE_CORRUPTED",
"data": { "device": "ide0-hd0",
"msg": "Prevented active L1 table overwrite", "offset": 196608,
"size": 65536 },
"timestamp": { "seconds": 1378126126, "microseconds": 966463 } }
BLOCK_IO_ERROR
--------------
@@ -159,7 +137,7 @@ Note: The "ready to complete" status is always reset by a BLOCK_JOB_ERROR
event.
DEVICE_DELETED
--------------
-----------------
Emitted whenever the device removal completion is acknowledged
by the guest.
@@ -194,76 +172,6 @@ Data:
},
"timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
GUEST_PANICKED
--------------
Emitted when guest OS panic is detected.
Data:
- "action": Action that has been taken (json-string, currently always "pause").
Example:
{ "event": "GUEST_PANICKED",
"data": { "action": "pause" } }
NIC_RX_FILTER_CHANGED
---------------------
The event is emitted once until the query command is executed,
the first event will always be emitted.
Data:
- "name": net client name (json-string)
- "path": device path (json-string)
{ "event": "NIC_RX_FILTER_CHANGED",
"data": { "name": "vnet0",
"path": "/machine/peripheral/vnet0/virtio-backend" },
"timestamp": { "seconds": 1368697518, "microseconds": 326866 } }
}
QUORUM_FAILURE
--------------
Emitted by the Quorum block driver if it fails to establish a quorum.
Data:
- "reference": device name if defined else node name.
- "sector-num": Number of the first sector of the failed read operation.
- "sector-count": Failed read operation sector count.
Example:
{ "event": "QUORUM_FAILURE",
"data": { "reference": "usr1", "sector-num": 345435, "sector-count": 5 },
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
QUORUM_REPORT_BAD
-----------------
Emitted to report a corruption of a Quorum file.
Data:
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that the
block layer reported an error and clients should not try to
interpret the error string.
- "node-name": The graph node name of the block driver state.
- "sector-num": Number of the first sector of the failed read operation.
- "sector-count": Failed read operation sector count.
Example:
{ "event": "QUORUM_REPORT_BAD",
"data": { "node-name": "1.raw", "sector-num": 345435, "sector-count": 5 },
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
RESET
-----
@@ -295,8 +203,7 @@ Emitted when the guest changes the RTC time.
Data:
- "offset": Offset between base RTC clock (as specified by -rtc base), and
new RTC clock value (json-number)
- "offset": delta against the host UTC in seconds (json-number)
Example:
@@ -518,7 +425,7 @@ Data: None.
Example:
{ "event": "WAKEUP",
{ "event": "WATCHDOG",
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
WATCHDOG
@@ -539,3 +446,17 @@ Example:
Note: If action is "reset", "shutdown", or "pause" the WATCHDOG event is
followed respectively by the RESET, SHUTDOWN, or STOP events.
GUEST_PANICKED
--------------
Emitted when guest OS panic is detected.
Data:
- "action": Action that has been taken (json-string, currently always "pause").
Example:
{ "event": "GUEST_PANICKED",
"data": { "action": "pause" } }

@@ -31,7 +31,6 @@
# (QEMU)
import qmp
import json
import readline
import sys
import pprint
@@ -92,7 +91,7 @@ class QMPShell(qmp.QEMUMonitorProtocol):
"""
Build a QMP input object from a user provided command-line in the
following format:
< command-name > [ arg-name1=arg1 ] ... [ arg-nameN=argN ]
"""
cmdargs = cmdline.split()
@@ -108,33 +107,15 @@ class QMPShell(qmp.QEMUMonitorProtocol):
value = True
elif opt[1] == 'false':
value = False
elif opt[1].startswith('{'):
value = json.loads(opt[1])
else:
value = opt[1]
optpath = opt[0].split('.')
parent = qmpcmd['arguments']
curpath = []
for p in optpath[:-1]:
curpath.append(p)
d = parent.get(p, {})
if type(d) is not dict:
raise QMPShellError('Cannot use "%s" as both leaf and non-leaf key' % '.'.join(curpath))
parent[p] = d
parent = d
if optpath[-1] in parent:
if type(parent[optpath[-1]]) is dict:
raise QMPShellError('Cannot use "%s" as both leaf and non-leaf key' % '.'.join(curpath))
else:
raise QMPShellError('Cannot set "%s" multiple times' % opt[0])
parent[optpath[-1]] = value
qmpcmd['arguments'][opt[0]] = value
return qmpcmd
def _execute_cmd(self, cmdline):
try:
qmpcmd = self.__build_cmd(cmdline)
except Exception, e:
print 'Error while parsing command line: %s' % e
except:
print 'command format: <command-name> ',
print '[arg-name1=arg1] ... [arg-nameN=argN]'
return True
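Editor's note: the dotted-key handling visible in this hunk boils down to building nested dictionaries from "a.b.c=value" arguments. The sketch below is illustrative only (expand_args and its error messages are invented for this note, values stay plain strings, and the JSON/bool conversions from the hunk are omitted):

    def expand_args(pairs):
        """Turn [('a.b.c', '1'), ...] into nested dicts (simplified)."""
        args = {}
        for key, value in pairs:
            path = key.split('.')
            parent = args
            for p in path[:-1]:
                parent = parent.setdefault(p, {})
                if not isinstance(parent, dict):
                    raise ValueError('"%s" used as both leaf and non-leaf key' % key)
            if path[-1] in parent:
                raise ValueError('"%s" given more than once' % key)
            parent[path[-1]] = value
        return args

    # expand_args([('drive.file', 'disk.img'), ('drive.id', 'd0')])
    # -> {'drive': {'file': 'disk.img', 'id': 'd0'}}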

@@ -1,17 +1,21 @@
QEMU Machine Protocol Specification
QEMU Monitor Protocol Specification - Version 0.1
1. Introduction
===============
This document specifies the QEMU Machine Protocol (QMP), a JSON-based protocol
which is available for applications to operate QEMU at the machine-level.
This document specifies the QEMU Monitor Protocol (QMP), a JSON-based protocol
which is available for applications to control QEMU at the machine-level.
To enable QMP support, QEMU has to be run in "control mode". This is done by
starting QEMU with the appropriate command-line options. Please, refer to the
QEMU manual page for more information.
2. Protocol Specification
=========================
This section details the protocol format. For the purpose of this document
"Client" is any application which is using QMP to communicate with QEMU and
"Server" is QEMU itself.
"Client" is any application which is communicating with QEMU in control mode,
and "Server" is QEMU itself.
JSON data structures, when mentioned in this document, are always in the
following format:
@@ -43,14 +47,14 @@ that the connection has been successfully established and that the Server is
ready for capabilities negotiation (for more information refer to section
'4. Capabilities Negotiation').
The greeting message format is:
The format is:
{ "QMP": { "version": json-object, "capabilities": json-array } }
Where,
- The "version" member contains the Server's version information (the format
is the same of the query-version command)
is the same of the 'query-version' command)
- The "capabilities" member specify the availability of features beyond the
baseline specification
@@ -79,7 +83,10 @@ of a command execution: success or error.
2.4.1 success
-------------
The format of a success response is:
The success response is issued when the command execution has finished
without errors.
The format is:
{ "return": json-object, "id": json-value }
@@ -89,12 +96,15 @@ The format of a success response is:
in a per-command basis or an empty json-object if the command does not
return data
- The "id" member contains the transaction identification associated
with the command execution if issued by the Client
with the command execution (if issued by the Client)
2.4.2 error
-----------
The format of an error response is:
The error response is issued when the command execution could not be
completed because of an error condition.
The format is:
{ "error": { "class": json-string, "desc": json-string }, "id": json-value }
@@ -104,7 +114,7 @@ The format of an error response is:
- The "desc" member is a human-readable error message. Clients should
not attempt to parse this message.
- The "id" member contains the transaction identification associated with
the command execution if issued by the Client
the command execution (if issued by the Client)
NOTE: Some errors can occur before the Server is able to read the "id" member,
in these cases the "id" member will not be part of the error response, even
@@ -114,9 +124,9 @@ if provided by the client.
-----------------------
As a result of state changes, the Server may send messages unilaterally
to the Client at any time. They are called "asynchronous events".
to the Client at any time. They are called 'asynchronous events'.
The format of asynchronous events is:
The format is:
{ "event": json-string, "data": json-object,
"timestamp": { "seconds": json-number, "microseconds": json-number } }
@@ -137,37 +147,36 @@ qmp-events.txt file.
===============
This section provides some examples of real QMP usage, in all of them
"C" stands for "Client" and "S" stands for "Server".
'C' stands for 'Client' and 'S' stands for 'Server'.
3.1 Server greeting
-------------------
S: { "QMP": { "version": { "qemu": { "micro": 50, "minor": 6, "major": 1 },
"package": ""}, "capabilities": []}}
S: {"QMP": {"version": {"qemu": "0.12.50", "package": ""}, "capabilities": []}}
3.2 Simple 'stop' execution
---------------------------
C: { "execute": "stop" }
S: { "return": {} }
S: {"return": {}}
3.3 KVM information
-------------------
C: { "execute": "query-kvm", "id": "example" }
S: { "return": { "enabled": true, "present": true }, "id": "example"}
S: {"return": {"enabled": true, "present": true}, "id": "example"}
3.4 Parsing error
------------------
C: { "execute": }
S: { "error": { "class": "GenericError", "desc": "Invalid JSON syntax" } }
S: {"error": {"class": "GenericError", "desc": "Invalid JSON syntax" } }
3.5 Powerdown event
-------------------
S: { "timestamp": { "seconds": 1258551470, "microseconds": 802384 },
"event": "POWERDOWN" }
S: {"timestamp": {"seconds": 1258551470, "microseconds": 802384}, "event":
"POWERDOWN"}
4. Capabilities Negotiation
----------------------------
@@ -175,17 +184,17 @@ S: { "timestamp": { "seconds": 1258551470, "microseconds": 802384 },
When a Client successfully establishes a connection, the Server is in
Capabilities Negotiation mode.
In this mode only the qmp_capabilities command is allowed to run, all
other commands will return the CommandNotFound error. Asynchronous
messages are not delivered either.
In this mode only the 'qmp_capabilities' command is allowed to run, all
other commands will return the CommandNotFound error. Asynchronous messages
are not delivered either.
Clients should use the qmp_capabilities command to enable capabilities
Clients should use the 'qmp_capabilities' command to enable capabilities
advertised in the Server's greeting (section '2.2 Server Greeting') they
support.
When the qmp_capabilities command is issued, and if it does not return an
When the 'qmp_capabilities' command is issued, and if it does not return an
error, the Server enters in Command mode where capabilities changes take
effect, all commands (except qmp_capabilities) are allowed and asynchronous
effect, all commands (except 'qmp_capabilities') are allowed and asynchronous
messages are delivered.
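A minimal negotiation exchange, added here for illustration in the notation of section '3. QMP Examples' (the 'stop' command after negotiation is only an example):

C: { "execute": "qmp_capabilities" }
S: { "return": {} }
C: { "execute": "stop" }
S: { "return": {} }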
5 Compatibility Considerations
@@ -236,7 +245,7 @@ arguments, errors, asynchronous events, and so forth.
Any new names downstream wishes to add must begin with '__'. To
ensure compatibility with other downstreams, it is strongly
recommended that you prefix your downstream names with '__RFQDN_' where
recommended that you prefix your downstram names with '__RFQDN_' where
RFQDN is a valid, reverse fully qualified domain name which you
control. For example, a qemu-kvm specific monitor command would be:

@@ -1,5 +1,5 @@
# QEMU Monitor Protocol Python class
#
#
# Copyright (C) 2009, 2010 Red Hat Inc.
#
# Authors:
@@ -171,12 +171,7 @@ class QEMUMonitorProtocol:
pass
self.__sock.setblocking(1)
if not self.__events and wait:
ret = self.__json_read(only_event=True)
if ret == None:
# We are in blocking mode, if don't get anything, something
# went wrong
raise QMPConnectError("Error while reading from socket")
self.__json_read(only_event=True)
return self.__events
def clear_events(self):
@@ -193,9 +188,3 @@ class QEMUMonitorProtocol:
def settimeout(self, timeout):
self.__sock.settimeout(timeout)
def get_sock_fd(self):
return self.__sock.fileno()
def is_scm_available(self):
return self.__sock.family == socket.AF_UNIX
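Editor's note: a hypothetical usage sketch of the class these hunks touch. The socket path and the connect()/cmd() methods are assumptions not shown in the diff; only get_events() appears above:

    import qmp

    mon = qmp.QEMUMonitorProtocol('/tmp/qmp.sock')   # path is an assumption
    mon.connect()                       # greeting + capabilities negotiation
    print(mon.cmd('query-status'))
    events = mon.get_events(wait=True)  # the blocking read modified above
    mon.close()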

README (2 lines changed)

@@ -1,3 +1,3 @@
Read the documentation in qemu-doc.html or on http://wiki.qemu-project.org
Read the documentation in qemu-doc.html or on http://wiki.qemu.org
- QEMU team

@@ -1 +1 @@
2.0.50
1.5.2

@@ -23,6 +23,7 @@ struct AioHandler
GPollFD pfd;
IOHandler *io_read;
IOHandler *io_write;
AioFlushHandler *io_flush;
int deleted;
int pollfds_idx;
void *opaque;
@@ -46,6 +47,7 @@ void aio_set_fd_handler(AioContext *ctx,
int fd,
IOHandler *io_read,
IOHandler *io_write,
AioFlushHandler *io_flush,
void *opaque)
{
AioHandler *node;
@@ -82,6 +84,7 @@ void aio_set_fd_handler(AioContext *ctx,
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->io_flush = io_flush;
node->opaque = opaque;
node->pollfds_idx = -1;
@@ -94,10 +97,12 @@ void aio_set_fd_handler(AioContext *ctx,
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
EventNotifierHandler *io_read)
EventNotifierHandler *io_read,
AioFlushEventNotifierHandler *io_flush)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
(IOHandler *)io_read, NULL, notifier);
(IOHandler *)io_read, NULL,
(AioFlushHandler *)io_flush, notifier);
}
bool aio_pending(AioContext *ctx)
@@ -142,11 +147,7 @@ static bool aio_dispatch(AioContext *ctx)
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
node->io_read) {
node->io_read(node->opaque);
/* aio_notify() does not count as progress */
if (node->opaque != &ctx->notifier) {
progress = true;
}
progress = true;
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
@@ -165,10 +166,6 @@ static bool aio_dispatch(AioContext *ctx)
g_free(tmp);
}
}
/* Run our timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
@@ -176,7 +173,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
int ret;
bool progress;
bool busy, progress;
progress = false;
@@ -203,8 +200,20 @@ bool aio_poll(AioContext *ctx, bool blocking)
g_array_set_size(ctx->pollfds, 0);
/* fill pollfds */
busy = false;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pollfds_idx = -1;
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (!node->deleted && node->io_flush) {
if (node->io_flush(node->opaque) == 0) {
continue;
}
busy = true;
}
if (!node->deleted && node->pfd.events) {
GPollFD pfd = {
.fd = node->pfd.fd,
@@ -217,10 +226,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
/* No AIO operations? Get us out of here */
if (!busy) {
return progress;
}
/* wait until next event */
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? timerlistgroup_deadline_ns(&ctx->tlg) : 0);
ret = g_poll((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? -1 : 0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
@@ -231,12 +245,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
node->pfd.revents = pfd->revents;
}
}
if (aio_dispatch(ctx)) {
progress = true;
}
}
/* Run dispatch even if there were no readable fds to run timers */
if (aio_dispatch(ctx)) {
progress = true;
}
return progress;
assert(progress || busy);
return true;
}

@@ -23,6 +23,7 @@
struct AioHandler {
EventNotifier *e;
EventNotifierHandler *io_notify;
AioFlushEventNotifierHandler *io_flush;
GPollFD pfd;
int deleted;
QLIST_ENTRY(AioHandler) node;
@@ -30,7 +31,8 @@ struct AioHandler {
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
EventNotifierHandler *io_notify)
EventNotifierHandler *io_notify,
AioFlushEventNotifierHandler *io_flush)
{
AioHandler *node;
@@ -71,6 +73,7 @@ void aio_set_event_notifier(AioContext *ctx,
}
/* Update handler with latest information */
node->io_notify = io_notify;
node->io_flush = io_flush;
}
aio_notify(ctx);
@@ -93,9 +96,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress;
bool busy, progress;
int count;
int timeout;
progress = false;
@@ -109,9 +111,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
progress = true;
}
/* Run timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
/*
* Then dispatch any pending callbacks from the GSource.
*
@@ -127,11 +126,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
if (node->pfd.revents && node->io_notify) {
node->pfd.revents = 0;
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
progress = true;
}
tmp = node;
@@ -152,8 +147,19 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers++;
/* fill fd sets */
busy = false;
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (!node->deleted && node->io_flush) {
if (node->io_flush(node->e) == 0) {
continue;
}
busy = true;
}
if (!node->deleted && node->io_notify) {
events[count++] = event_notifier_get_handle(node->e);
}
@@ -161,13 +167,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
/* No AIO operations? Get us out of here */
if (!busy) {
return progress;
}
/* wait until next event */
while (count > 0) {
int ret;
timeout = blocking ?
qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg)) : 0;
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
int timeout = blocking ? INFINITE : 0;
int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
/* if we have any signaled events, dispatch event */
if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -188,11 +196,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
node->io_notify) {
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
progress = true;
}
tmp = node;
@@ -210,14 +214,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
events[ret - WAIT_OBJECT_0] = events[--count];
}
if (blocking) {
/* Run the timers a second time. We do this because otherwise aio_wait
* will not note progress - and will stop a drain early - if we have
* a timer that was not ready to run entering g_poll but is ready
* after g_poll. This will only do anything if a timer has expired.
*/
progress |= timerlistgroup_run_timers(&ctx->tlg);
}
return progress;
assert(progress || busy);
return true;
}

File diff suppressed because it is too large.

async.c (75 lines changed)

@@ -47,16 +47,11 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
bh->ctx = ctx;
bh->cb = cb;
bh->opaque = opaque;
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
qemu_mutex_unlock(&ctx->bh_lock);
return bh;
}
/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
int aio_bh_poll(AioContext *ctx)
{
QEMUBH *bh, **bhp, *next;
@@ -66,15 +61,9 @@ int aio_bh_poll(AioContext *ctx)
ret = 0;
for (bh = ctx->first_bh; bh; bh = next) {
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
/* Paired with write barrier in bh schedule to ensure reading for
* idle & callbacks coming after bh's scheduling.
*/
smp_rmb();
if (!bh->idle)
ret = 1;
bh->idle = 0;
@@ -86,7 +75,6 @@ int aio_bh_poll(AioContext *ctx)
/* remove deleted bhs */
if (!ctx->walking_bh) {
qemu_mutex_lock(&ctx->bh_lock);
bhp = &ctx->first_bh;
while (*bhp) {
bh = *bhp;
@@ -97,7 +85,6 @@ int aio_bh_poll(AioContext *ctx)
bhp = &bh->next;
}
}
qemu_mutex_unlock(&ctx->bh_lock);
}
return ret;
@@ -107,38 +94,24 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
smp_wmb();
bh->scheduled = 1;
bh->idle = 1;
}
void qemu_bh_schedule(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 0;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
smp_wmb();
bh->scheduled = 1;
bh->idle = 0;
aio_notify(bh->ctx);
}
/* This func is async.
*/
void qemu_bh_cancel(QEMUBH *bh)
{
bh->scheduled = 0;
}
/* This func is async.The bottom half will do the delete action at the finial
* end.
*/
void qemu_bh_delete(QEMUBH *bh)
{
bh->scheduled = 0;
@@ -150,10 +123,7 @@ aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
int deadline;
/* We assume there is no timeout already supplied */
*timeout = -1;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
@@ -169,14 +139,6 @@ aio_ctx_prepare(GSource *source, gint *timeout)
}
}
deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg));
if (deadline == 0) {
*timeout = 0;
return true;
} else {
*timeout = qemu_soonest_timeout(*timeout, deadline);
}
return false;
}
@@ -191,7 +153,7 @@ aio_ctx_check(GSource *source)
return true;
}
}
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
return aio_pending(ctx);
}
static gboolean
@@ -212,12 +174,9 @@ aio_ctx_finalize(GSource *source)
AioContext *ctx = (AioContext *) source;
thread_pool_free(ctx->thread_pool);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
static GSourceFuncs aio_source_funcs = {
@@ -246,30 +205,16 @@ void aio_notify(AioContext *ctx)
event_notifier_set(&ctx->notifier);
}
static void aio_timerlist_notify(void *opaque)
{
aio_notify(opaque);
}
static void aio_rfifolock_cb(void *opaque)
{
/* Kick owner thread in case they are blocked in aio_poll() */
aio_notify(opaque);
}
AioContext *aio_context_new(void)
{
AioContext *ctx;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
event_notifier_init(&ctx->notifier, false);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
event_notifier_test_and_clear, NULL);
return ctx;
}
@@ -283,13 +228,3 @@ void aio_context_unref(AioContext *ctx)
{
g_source_unref(&ctx->source);
}
void aio_context_acquire(AioContext *ctx)
{
rfifolock_lock(&ctx->lock);
}
void aio_context_release(AioContext *ctx)
{
rfifolock_unlock(&ctx->lock);
}

@@ -14,4 +14,4 @@ common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
sdlaudio.o-cflags := $(SDL_CFLAGS)
$(obj)/sdlaudio.o: QEMU_CFLAGS += $(SDL_CFLAGS)

@@ -95,7 +95,7 @@ static struct {
}
},
.period = { .hertz = 100 },
.period = { .hertz = 250 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
@@ -1124,11 +1124,10 @@ static int audio_is_timer_needed (void)
static void audio_reset_timer (AudioState *s)
{
if (audio_is_timer_needed ()) {
timer_mod (s->ts,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + conf.period.ticks);
qemu_mod_timer (s->ts, qemu_get_clock_ns (vm_clock) + 1);
}
else {
timer_del (s->ts);
qemu_del_timer (s->ts);
}
}
@@ -1812,7 +1811,8 @@ static const VMStateDescription vmstate_audio = {
.name = "audio",
.version_id = 1,
.minimum_version_id = 1,
.fields = (VMStateField[]) {
.minimum_version_id_old = 1,
.fields = (VMStateField []) {
VMSTATE_END_OF_LIST()
}
};
@@ -1834,7 +1834,7 @@ static void audio_init (void)
QLIST_INIT (&s->cap_head);
atexit (audio_atexit);
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
s->ts = qemu_new_timer_ns (vm_clock, audio_timer, s);
if (!s->ts) {
hw_error("Could not create audio timer\n");
}

@@ -243,13 +243,38 @@ static inline int audio_ring_dist (int dst, int src, int len)
return (dst >= src) ? (dst - src) : (len - src + dst);
}
#define dolog(fmt, ...) AUD_log(AUDIO_CAP, fmt, ## __VA_ARGS__)
static void GCC_ATTR dolog (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
}
#ifdef DEBUG
#define ldebug(fmt, ...) AUD_log(AUDIO_CAP, fmt, ## __VA_ARGS__)
static void GCC_ATTR ldebug (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
}
#else
#define ldebug(fmt, ...) (void)0
#if defined NDEBUG && defined __GNUC__
#define ldebug(...)
#elif defined NDEBUG && defined _MSC_VER
#define ldebug __noop
#else
static void GCC_ATTR ldebug (const char *fmt, ...)
{
(void) fmt;
}
#endif
#endif
#undef GCC_ATTR
#define AUDIO_STRINGIFY_(n) #n
#define AUDIO_STRINGIFY(n) AUDIO_STRINGIFY_(n)

@@ -1,6 +1,7 @@
/* public domain */
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "win-int"
#include <windows.h>

@@ -348,6 +348,7 @@ void mixeng_clear (struct st_sample *buf, int len)
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol)
{
#ifdef CONFIG_MIXEMU
if (vol->mute) {
mixeng_clear (buf, len);
return;
@@ -363,4 +364,9 @@ void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol)
#endif
buf += 1;
}
#else
(void) buf;
(void) len;
(void) vol;
#endif
}

@@ -35,7 +35,7 @@
#define IN_T glue (glue (ITYPE, BSIZE), _t)
#ifdef FLOAT_MIXENG
static inline mixeng_real glue (conv_, ET) (IN_T v)
static mixeng_real inline glue (conv_, ET) (IN_T v)
{
IN_T nv = ENDIAN_CONVERT (v);
@@ -54,7 +54,7 @@ static inline mixeng_real glue (conv_, ET) (IN_T v)
#endif
}
static inline IN_T glue (clip_, ET) (mixeng_real v)
static IN_T inline glue (clip_, ET) (mixeng_real v)
{
if (v >= 0.5) {
return IN_MAX;

@@ -46,7 +46,7 @@ static int no_run_out (HWVoiceOut *hw, int live)
int64_t ticks;
int64_t bytes;
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
now = qemu_get_clock_ns (vm_clock);
ticks = now - no->old_ticks;
bytes = muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
bytes = audio_MIN (bytes, INT_MAX);
@@ -102,7 +102,7 @@ static int no_run_in (HWVoiceIn *hw)
int samples = 0;
if (dead) {
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t now = qemu_get_clock_ns (vm_clock);
int64_t ticks = now - no->old_ticks;
int64_t bytes =
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());

@@ -849,10 +849,6 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
static void *oss_audio_init (void)
{
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return &conf;
}

@@ -547,11 +547,11 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
ss.rate = as->freq;
/*
* qemu audio tick runs at 100 Hz (by default), so processing
* data chunks worth 10 ms of sound should be a good fit.
* qemu audio tick runs at 250 Hz (by default), so processing
* data chunks worth 4 ms of sound should be a good fit.
*/
ba.tlength = pa_usec_to_bytes (10 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (5 * 1000, &ss);
ba.tlength = pa_usec_to_bytes (4 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (2 * 1000, &ss);
ba.maxlength = -1;
ba.prebuf = -1;

@@ -25,17 +25,8 @@
#include "audio.h"
#include "audio_int.h"
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
#define LINE_OUT_SAMPLES (480 * 4)
#else
#define LINE_OUT_SAMPLES (256 * 4)
#endif
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
#define LINE_IN_SAMPLES (480 * 4)
#else
#define LINE_IN_SAMPLES (256 * 4)
#endif
#define LINE_IN_SAMPLES 1024
#define LINE_OUT_SAMPLES 1024
typedef struct SpiceRateCtl {
int64_t start_ticks;
@@ -90,7 +81,7 @@ static void spice_audio_fini (void *opaque)
static void rate_start (SpiceRateCtl *rate)
{
memset (rate, 0, sizeof (*rate));
rate->start_ticks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
rate->start_ticks = qemu_get_clock_ns (vm_clock);
}
static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
@@ -100,12 +91,12 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
int64_t bytes;
int64_t samples;
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
now = qemu_get_clock_ns (vm_clock);
ticks = now - rate->start_ticks;
bytes = muldiv64 (ticks, info->bytes_per_second, get_ticks_per_sec ());
samples = (bytes - rate->bytes_sent) >> info->shift;
if (samples < 0 || samples > 65536) {
error_report("Resetting rate control (%" PRId64 " samples)", samples);
fprintf (stderr, "Resetting rate control (%" PRId64 " samples)\n", samples);
rate_start (rate);
samples = 0;
}
@@ -120,11 +111,7 @@ static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
settings.freq = spice_server_get_best_playback_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_PLAYBACK_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_PLAYBACK_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
@@ -135,9 +122,6 @@ static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
out->sin.base.sif = &playback_sif.base;
qemu_spice_add_interface (&out->sin.base);
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
spice_server_set_playback_rate(&out->sin, settings.freq);
#endif
return 0;
}
@@ -248,11 +232,7 @@ static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
settings.freq = spice_server_get_best_record_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_RECORD_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_RECORD_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
@@ -263,9 +243,6 @@ static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
in->sin.base.sif = &record_sif.base;
qemu_spice_add_interface (&in->sin.base);
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
spice_server_set_record_rate(&in->sin, settings.freq);
#endif
return 0;
}

@@ -52,7 +52,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
int rpos, decr, samples;
uint8_t *dst;
struct st_sample *src;
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t now = qemu_get_clock_ns (vm_clock);
int64_t ticks = now - wav->old_ticks;
int64_t bytes =
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());

@@ -63,7 +63,8 @@ static void wav_destroy (void *opaque)
}
doclose:
if (fclose (wav->f)) {
error_report("wav_destroy: fclose failed: %s", strerror(errno));
fprintf (stderr, "wav_destroy: fclose failed: %s",
strerror (errno));
}
}

@@ -3,6 +3,6 @@ common-obj-$(CONFIG_POSIX) += rng-random.o
common-obj-y += msmouse.o
common-obj-$(CONFIG_BRLAPI) += baum.o
baum.o-cflags := $(SDL_CFLAGS)
$(obj)/baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o

@@ -314,9 +314,9 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
return 0; \
if (*cur++ != ESC) { \
DPRINTF("Broken packet %#2x, tossing\n", req); \
if (timer_pending(baum->cellCount_timer)) { \
timer_del(baum->cellCount_timer); \
baum_cellCount_timer_cb(baum); \
if (qemu_timer_pending(baum->cellCount_timer)) { \
qemu_del_timer(baum->cellCount_timer); \
baum_cellCount_timer_cb(baum); \
} \
return (cur - 2 - buf); \
} \
@@ -334,7 +334,7 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
int i;
/* Allow 100ms to complete the DisplayData packet */
timer_mod(baum->cellCount_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
qemu_mod_timer(baum->cellCount_timer, qemu_get_clock_ns(vm_clock) +
get_ticks_per_sec() / 10);
for (i = 0; i < baum->x * baum->y ; i++) {
EAT(c);
@@ -348,7 +348,7 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
c = '?';
text[i] = c;
}
timer_del(baum->cellCount_timer);
qemu_del_timer(baum->cellCount_timer);
memset(zero, 0, sizeof(zero));
@@ -553,7 +553,7 @@ static void baum_close(struct CharDriverState *chr)
{
BaumDriverState *baum = chr->opaque;
timer_free(baum->cellCount_timer);
qemu_free_timer(baum->cellCount_timer);
if (baum->brlapi) {
brlapi__closeConnection(baum->brlapi);
g_free(baum->brlapi);
@@ -566,10 +566,8 @@ CharDriverState *chr_baum_init(void)
BaumDriverState *baum;
CharDriverState *chr;
brlapi_handle_t *handle;
#if defined(CONFIG_SDL)
#if SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
#ifdef CONFIG_SDL
SDL_SysWMinfo info;
#endif
#endif
int tty;
@@ -590,21 +588,19 @@ CharDriverState *chr_baum_init(void)
goto fail_handle;
}
baum->cellCount_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, baum_cellCount_timer_cb, baum);
baum->cellCount_timer = qemu_new_timer_ns(vm_clock, baum_cellCount_timer_cb, baum);
if (brlapi__getDisplaySize(handle, &baum->x, &baum->y) == -1) {
brlapi_perror("baum_init: brlapi_getDisplaySize");
goto fail;
}
#if defined(CONFIG_SDL)
#if SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
#ifdef CONFIG_SDL
memset(&info, 0, sizeof(info));
SDL_VERSION(&info.version);
if (SDL_GetWMInfo(&info))
tty = info.info.x11.wmwindow;
else
#endif
#endif
tty = BRLAPI_TTY_DEFAULT;
@@ -618,7 +614,7 @@ CharDriverState *chr_baum_init(void)
return chr;
fail:
timer_free(baum->cellCount_timer);
qemu_free_timer(baum->cellCount_timer);
brlapi__closeConnection(handle);
fail_handle:
g_free(handle);

@@ -91,14 +91,12 @@ static int rng_egd_chr_can_read(void *opaque)
static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
{
RngEgd *s = RNG_EGD(opaque);
size_t buf_offset = 0;
while (size > 0 && s->requests) {
RngRequest *req = s->requests->data;
int len = MIN(size, req->size - req->offset);
memcpy(req->data + req->offset, buf + buf_offset, len);
buf_offset += len;
memcpy(req->data + req->offset, buf, len);
req->offset += len;
size -= len;
@@ -169,6 +167,7 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
if (b->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
}
}

@@ -78,8 +78,9 @@ static void rng_random_opened(RngBackend *b, Error **errp)
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
error_setg_file_open(errp, errno, s->filename);
error_set(errp, QERR_OPEN_FILE_FAILED, s->filename);
}
}
}
@@ -123,15 +124,15 @@ static void rng_random_init(Object *obj)
NULL);
s->filename = g_strdup("/dev/random");
s->fd = -1;
}
static void rng_random_finalize(Object *obj)
{
RndRandom *s = RNG_RANDOM(obj);
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
qemu_close(s->fd);
}

@@ -12,7 +12,6 @@
#include "sysemu/rng.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
void rng_backend_request_entropy(RngBackend *s, size_t size,
EntropyReceiveFunc *receive_entropy,
@@ -41,16 +40,15 @@ static bool rng_backend_prop_get_opened(Object *obj, Error **errp)
return s->opened;
}
static void rng_backend_complete(UserCreatable *uc, Error **errp)
void rng_backend_open(RngBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(uc), true, "opened", errp);
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
RngBackend *s = RNG_BACKEND(obj);
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
@@ -62,14 +60,12 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
k->opened(s, errp);
}
s->opened = true;
if (!error_is_set(errp)) {
s->opened = value;
}
}
static void rng_backend_init(Object *obj)
@@ -80,25 +76,13 @@ static void rng_backend_init(Object *obj)
NULL);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = rng_backend_complete;
}
static const TypeInfo rng_backend_info = {
.name = TYPE_RNG_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void register_types(void)

@@ -112,7 +112,6 @@ static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
@@ -124,14 +123,12 @@ static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
k->opened(s, errp);
}
s->opened = true;
if (!error_is_set(errp)) {
s->opened = value;
}
}
static void tpm_backend_instance_init(Object *obj)

@@ -29,7 +29,6 @@
#define BLK_MIG_FLAG_DEVICE_BLOCK 0x01
#define BLK_MIG_FLAG_EOS 0x02
#define BLK_MIG_FLAG_PROGRESS 0x04
#define BLK_MIG_FLAG_ZERO_BLOCK 0x08
#define MAX_IS_ALLOCATED_SEARCH 65536
@@ -58,8 +57,6 @@ typedef struct BlkMigDevState {
/* Protected by block migration lock. */
unsigned long *aio_bitmap;
int64_t completed_sectors;
BdrvDirtyBitmap *dirty_bitmap;
Error *blocker;
} BlkMigDevState;
typedef struct BlkMigBlock {
@@ -83,7 +80,6 @@ typedef struct BlkMigState {
int shared_base;
QSIMPLEQ_HEAD(bmds_list, BlkMigDevState) bmds_list;
int64_t total_sector_sum;
bool zero_blocks;
/* Protected by lock. */
QSIMPLEQ_HEAD(blk_list, BlkMigBlock) blk_list;
@@ -118,30 +114,16 @@ static void blk_mig_unlock(void)
static void blk_send(QEMUFile *f, BlkMigBlock * blk)
{
int len;
uint64_t flags = BLK_MIG_FLAG_DEVICE_BLOCK;
if (block_mig_state.zero_blocks &&
buffer_is_zero(blk->buf, BLOCK_SIZE)) {
flags |= BLK_MIG_FLAG_ZERO_BLOCK;
}
/* sector number and flags */
qemu_put_be64(f, (blk->sector << BDRV_SECTOR_BITS)
| flags);
| BLK_MIG_FLAG_DEVICE_BLOCK);
/* device name */
len = strlen(blk->bmds->bs->device_name);
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *)blk->bmds->bs->device_name, len);
/* if a block is zero we need to flush here since the network
* bandwidth is now a lot higher than the storage device bandwidth.
* thus if we queue zero blocks we slow down the migration */
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
qemu_fflush(f);
return;
}
qemu_put_buffer(f, blk->buf, BLOCK_SIZE);
}
@@ -311,36 +293,12 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
/* Called with iothread lock taken. */
static int set_dirty_tracking(void)
{
BlkMigDevState *bmds;
int ret;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE,
NULL);
if (!bmds->dirty_bitmap) {
ret = -errno;
goto fail;
}
}
return 0;
fail:
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->dirty_bitmap) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
}
}
return ret;
}
static void unset_dirty_tracking(void)
static void set_dirty_tracking(int enable)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
bdrv_set_dirty_tracking(bmds->bs, enable ? BLOCK_SIZE : 0);
}
}
@@ -362,9 +320,8 @@ static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
bmds->completed_sectors = 0;
bmds->shared_base = block_mig_state.shared_base;
alloc_aio_bitmap(bmds);
error_setg(&bmds->blocker, "block device is in use by migration");
bdrv_op_block_all(bs, bmds->blocker);
bdrv_ref(bs);
drive_get_ref(drive_get_by_blockdev(bs));
bdrv_set_in_use(bs, 1);
block_mig_state.total_sector_sum += sectors;
@@ -387,7 +344,6 @@ static void init_blk_migration(QEMUFile *f)
block_mig_state.total_sector_sum = 0;
block_mig_state.prev_progress = -1;
block_mig_state.bulk_completed = 0;
block_mig_state.zero_blocks = migrate_zero_blocks();
bdrv_iterate(init_blk_migration_it, NULL);
}
@@ -459,7 +415,7 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
} else {
blk_mig_unlock();
}
if (bdrv_get_dirty(bmds->bs, bmds->dirty_bitmap, sector)) {
if (bdrv_get_dirty(bmds->bs, sector)) {
if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - sector;
@@ -581,7 +537,7 @@ static int64_t get_remaining_dirty(void)
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
dirty += bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap);
dirty += bdrv_get_dirty_count(bmds->bs);
}
return dirty << BDRV_SECTOR_BITS;
@@ -596,14 +552,13 @@ static void blk_mig_cleanup(void)
bdrv_drain_all();
unset_dirty_tracking();
set_dirty_tracking(0);
blk_mig_lock();
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_op_unblock_all(bmds->bs, bmds->blocker);
error_free(bmds->blocker);
bdrv_unref(bmds->bs);
bdrv_set_in_use(bmds->bs, 0);
drive_put_ref(drive_get_by_blockdev(bmds->bs));
g_free(bmds->aio_bitmap);
g_free(bmds);
}
@@ -629,17 +584,10 @@ static int block_save_setup(QEMUFile *f, void *opaque)
block_mig_state.submitted, block_mig_state.transferred);
qemu_mutex_lock_iothread();
/* start track dirty blocks */
ret = set_dirty_tracking();
if (ret) {
qemu_mutex_unlock_iothread();
return ret;
}
init_blk_migration(f);
/* start track dirty blocks */
set_dirty_tracking(1);
qemu_mutex_unlock_iothread();
ret = flush_blks(f);
@@ -814,16 +762,12 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
ret = bdrv_write_zeroes(bs, addr, nr_sectors,
BDRV_REQ_MAY_UNMAP);
} else {
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
}
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
if (ret < 0) {
return ret;
}

block.c (3315 lines changed; diff suppressed because it is too large)

@@ -1,19 +1,16 @@
block-obj-y += raw_bsd.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += vhdx.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
ifeq ($(CONFIG_POSIX),y)
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-y += nbd.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
@@ -23,17 +20,5 @@ endif
common-obj-y += stream.o
common-obj-y += commit.o
common-obj-y += mirror.o
common-obj-y += backup.o
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
curl.o-libs := $(CURL_LIBS)
rbd.o-cflags := $(RBD_CFLAGS)
rbd.o-libs := $(RBD_LIBS)
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio
$(obj)/curl.o: QEMU_CFLAGS+=$(CURL_CFLAGS)

block/backup.c

@@ -1,392 +0,0 @@
/*
* QEMU backup
*
* Copyright (C) 2013 Proxmox Server Solutions
*
* Authors:
* Dietmar Maurer (dietmar@proxmox.com)
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include "trace.h"
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qemu/ratelimit.h"
#define BACKUP_CLUSTER_BITS 16
#define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)
#define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CowRequest {
int64_t start;
int64_t end;
QLIST_ENTRY(CowRequest) list;
CoQueue wait_queue; /* coroutines blocked on this request */
} CowRequest;
typedef struct BackupBlockJob {
BlockJob common;
BlockDriverState *target;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
BlockdevOnError on_target_error;
CoRwlock flush_rwlock;
uint64_t sectors_read;
HBitmap *bitmap;
QLIST_HEAD(, CowRequest) inflight_reqs;
} BackupBlockJob;
/* See if in-flight requests overlap and wait for them to complete */
static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
int64_t start,
int64_t end)
{
CowRequest *req;
bool retry;
do {
retry = false;
QLIST_FOREACH(req, &job->inflight_reqs, list) {
if (end > req->start && start < req->end) {
qemu_co_queue_wait(&req->wait_queue);
retry = true;
break;
}
}
} while (retry);
}
/* Keep track of an in-flight request */
static void cow_request_begin(CowRequest *req, BackupBlockJob *job,
int64_t start, int64_t end)
{
req->start = start;
req->end = end;
qemu_co_queue_init(&req->wait_queue);
QLIST_INSERT_HEAD(&job->inflight_reqs, req, list);
}
/* Forget about a completed request */
static void cow_request_end(CowRequest *req)
{
QLIST_REMOVE(req, list);
qemu_co_queue_restart_all(&req->wait_queue);
}
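wait_for_overlapping_requests() relies on the usual half-open interval test: an in-flight copy and a new request collide exactly when each one starts before the other ends. A minimal standalone sketch of that predicate (plain C; the CoQueue blocking that parks the caller is QEMU-specific and omitted):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Half-open cluster ranges [a_start, a_end) and [b_start, b_end)
 * overlap iff each range starts before the other one ends. */
static bool ranges_overlap(int64_t a_start, int64_t a_end,
                           int64_t b_start, int64_t b_end)
{
    return a_end > b_start && a_start < b_end;
}

int main(void)
{
    /* an in-flight copy of clusters [8, 16) vs. new requests */
    printf("%d\n", ranges_overlap(8, 16, 15, 20));   /* 1: must wait */
    printf("%d\n", ranges_overlap(8, 16, 16, 20));   /* 0: adjacent, no overlap */
    return 0;
}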
static int coroutine_fn backup_do_cow(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
bool *error_is_read)
{
BackupBlockJob *job = (BackupBlockJob *)bs->job;
CowRequest cow_request;
struct iovec iov;
QEMUIOVector bounce_qiov;
void *bounce_buffer = NULL;
int ret = 0;
int64_t start, end;
int n;
qemu_co_rwlock_rdlock(&job->flush_rwlock);
start = sector_num / BACKUP_SECTORS_PER_CLUSTER;
end = DIV_ROUND_UP(sector_num + nb_sectors, BACKUP_SECTORS_PER_CLUSTER);
trace_backup_do_cow_enter(job, start, sector_num, nb_sectors);
wait_for_overlapping_requests(job, start, end);
cow_request_begin(&cow_request, job, start, end);
for (; start < end; start++) {
if (hbitmap_get(job->bitmap, start)) {
trace_backup_do_cow_skip(job, start);
continue; /* already copied */
}
trace_backup_do_cow_process(job, start);
n = MIN(BACKUP_SECTORS_PER_CLUSTER,
job->common.len / BDRV_SECTOR_SIZE -
start * BACKUP_SECTORS_PER_CLUSTER);
if (!bounce_buffer) {
bounce_buffer = qemu_blockalign(bs, BACKUP_CLUSTER_SIZE);
}
iov.iov_base = bounce_buffer;
iov.iov_len = n * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&bounce_qiov, &iov, 1);
ret = bdrv_co_readv(bs, start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
if (ret < 0) {
trace_backup_do_cow_read_fail(job, start, ret);
if (error_is_read) {
*error_is_read = true;
}
goto out;
}
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
} else {
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
}
if (ret < 0) {
trace_backup_do_cow_write_fail(job, start, ret);
if (error_is_read) {
*error_is_read = false;
}
goto out;
}
hbitmap_set(job->bitmap, start, 1);
/* Publish progress, guest I/O counts as progress too. Note that the
* offset field is an opaque progress value, it is not a disk offset.
*/
job->sectors_read += n;
job->common.offset += n * BDRV_SECTOR_SIZE;
}
out:
if (bounce_buffer) {
qemu_vfree(bounce_buffer);
}
cow_request_end(&cow_request);
trace_backup_do_cow_return(job, sector_num, nb_sectors, ret);
qemu_co_rwlock_unlock(&job->flush_rwlock);
return ret;
}
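backup_do_cow() widens the guest request to whole backup clusters: the first cluster index is the start sector divided down, the one-past-the-end index is the end sector divided up. A small standalone sketch of that rounding, assuming the 512-byte sectors and 64 KB clusters defined at the top of this file:

#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE          512
#define CLUSTER_SIZE         (64 * 1024)                  /* BACKUP_CLUSTER_SIZE */
#define SECTORS_PER_CLUSTER  (CLUSTER_SIZE / SECTOR_SIZE) /* 128 */

#define DIV_ROUND_UP(n, d)   (((n) + (d) - 1) / (d))

int main(void)
{
    int64_t sector_num = 1000;   /* arbitrary guest write */
    int     nb_sectors = 300;

    int64_t first = sector_num / SECTORS_PER_CLUSTER;
    int64_t last  = DIV_ROUND_UP(sector_num + nb_sectors, SECTORS_PER_CLUSTER);

    /* Clusters [7, 11) cover sectors 896..1407, a superset of 1000..1299. */
    printf("copy clusters [%lld, %lld)\n", (long long)first, (long long)last);
    return 0;
}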
static int coroutine_fn backup_before_write_notify(
NotifierWithReturn *notifier,
void *opaque)
{
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void backup_iostatus_reset(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
bdrv_iostatus_reset(s->target);
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.iostatus_reset = backup_iostatus_reset,
};
static BlockErrorAction backup_error_action(BackupBlockJob *job,
bool read, int error)
{
if (read) {
return block_job_error_action(&job->common, job->common.bs,
job->on_source_error, true, error);
} else {
return block_job_error_action(&job->common, job->target,
job->on_target_error, false, error);
}
}
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
BlockDriverState *bs = job->common.bs;
BlockDriverState *target = job->target;
BlockdevOnError on_target_error = job->on_target_error;
NotifierWithReturn before_write = {
.notify = backup_before_write_notify,
};
int64_t start, end;
int ret = 0;
QLIST_INIT(&job->inflight_reqs);
qemu_co_rwlock_init(&job->flush_rwlock);
start = 0;
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->bitmap = hbitmap_alloc(end, 0);
bdrv_set_enable_write_cache(target, true);
bdrv_set_on_error(target, on_target_error, on_target_error);
bdrv_iostatus_enable(target);
bdrv_add_before_write_notifier(bs, &before_write);
if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
while (!block_job_is_cancelled(&job->common)) {
/* Yield until the job is cancelled. We just let our before_write
* notify callback service CoW requests. */
job->common.busy = false;
qemu_coroutine_yield();
job->common.busy = true;
}
} else {
/* Both FULL and TOP SYNC_MODE's require copying.. */
for (; start < end; start++) {
bool error_is_read;
if (block_job_is_cancelled(&job->common)) {
break;
}
/* we need to yield so that qemu_aio_flush() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
int i, n;
int alloced = 0;
/* Check to see if these blocks are already in the
* backing file. */
for (i = 0; i < BACKUP_SECTORS_PER_CLUSTER;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are all in the same state.
* For that reason we must verify each sector in the
* backup cluster length. We end up copying more than
* needed but at some point that is always the case. */
alloced =
bdrv_is_allocated(bs,
start * BACKUP_SECTORS_PER_CLUSTER + i,
BACKUP_SECTORS_PER_CLUSTER - i, &n);
i += n;
if (alloced == 1) {
break;
}
}
/* If the above loop never found any sectors that are in
* the topmost image, skip this backup. */
if (alloced == 0) {
continue;
}
}
/* FULL sync mode we copy the whole drive. */
ret = backup_do_cow(bs, start * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
backup_error_action(job, error_is_read, -ret);
if (action == BDRV_ACTION_REPORT) {
break;
} else {
start--;
continue;
}
}
}
}
notifier_with_return_remove(&before_write);
/* wait until pending backup_do_cow() calls have completed */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
hbitmap_free(job->bitmap);
bdrv_iostatus_disable(target);
bdrv_unref(target);
block_job_completed(&job->common, ret);
}
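The speed throttle in the loop above accumulates sectors_read and asks the rate limiter how long to sleep within each SLICE_TIME window. A hedged standalone sketch of that slice-based idea (not QEMU's RateLimit or ratelimit_calculate_delay(), just the general shape):

#include <stdint.h>
#include <stdio.h>

#define SLICE_NS 100000000ULL   /* 100 ms, same order as SLICE_TIME above */

typedef struct {
    uint64_t slice_start_ns;    /* beginning of the current accounting slice */
    uint64_t slice_quota;       /* units (e.g. sectors) allowed per slice */
    uint64_t dispatched;        /* units already consumed in this slice */
} Limiter;

/* Account for n units of work done at time now_ns and return how long the
 * caller should sleep (in ns) to stay within slice_quota per SLICE_NS. */
static uint64_t limiter_account(Limiter *l, uint64_t now_ns, uint64_t n)
{
    if (now_ns - l->slice_start_ns >= SLICE_NS) {
        /* A new slice begins: reset the counters. */
        l->slice_start_ns = now_ns;
        l->dispatched = 0;
    }
    l->dispatched += n;
    if (l->dispatched <= l->slice_quota) {
        return 0;                                    /* still under budget */
    }
    /* Over budget: wait out the remainder of the slice. */
    return l->slice_start_ns + SLICE_NS - now_ns;
}

int main(void)
{
    Limiter l = { .slice_start_ns = 0, .slice_quota = 256, .dispatched = 0 };

    printf("%llu ns\n", (unsigned long long)limiter_account(&l, 0, 200));          /* 0 */
    printf("%llu ns\n", (unsigned long long)limiter_account(&l, 10000000, 200));   /* 90000000 */
    return 0;
}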
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb, void *opaque,
Error **errp)
{
int64_t len;
assert(bs);
assert(target);
assert(cb);
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
len = bdrv_getlength(bs);
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
return;
}
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
return;
}
job->on_source_error = on_source_error;
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->common.len = len;
job->common.co = qemu_coroutine_create(backup_run);
qemu_coroutine_enter(job->common.co, job);
}

block/blkdebug.c

@@ -168,7 +168,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_REFTABLE_LOAD] = "reftable_load",
[BLKDBG_REFTABLE_GROW] = "reftable_grow",
[BLKDBG_REFTABLE_UPDATE] = "reftable_update",
[BLKDBG_REFBLOCK_LOAD] = "refblock_load",
[BLKDBG_REFBLOCK_UPDATE] = "refblock_update",
@@ -183,17 +182,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_CLUSTER_ALLOC] = "cluster_alloc",
[BLKDBG_CLUSTER_ALLOC_BYTES] = "cluster_alloc_bytes",
[BLKDBG_CLUSTER_FREE] = "cluster_free",
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
@@ -279,33 +267,19 @@ static void remove_rule(BlkdebugRule *rule)
g_free(rule);
}
static int read_config(BDRVBlkdebugState *s, const char *filename,
QDict *options, Error **errp)
static int read_config(BDRVBlkdebugState *s, const char *filename)
{
FILE *f = NULL;
FILE *f;
int ret;
struct add_rule_data d;
Error *local_err = NULL;
if (filename) {
f = fopen(filename, "r");
if (f == NULL) {
error_setg_errno(errp, errno, "Could not read blkdebug config file");
return -errno;
}
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
error_setg(errp, "Could not parse blkdebug config file");
ret = -EINVAL;
goto fail;
}
f = fopen(filename, "r");
if (f == NULL) {
return -errno;
}
qemu_config_parse_qdict(options, config_groups, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
goto fail;
}
@@ -320,9 +294,7 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
fail:
qemu_opts_reset(&inject_error_opts);
qemu_opts_reset(&set_state_opts);
if (f) {
fclose(f);
}
fclose(f);
return ret;
}
@@ -334,9 +306,7 @@ static void blkdebug_parse_filename(const char *filename, QDict *options,
/* Parse the blkdebug: prefix */
if (!strstart(filename, "blkdebug:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
error_setg(errp, "File name string must start with 'blkdebug:'");
return;
}
@@ -372,68 +342,53 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{
.name = "align",
.type = QEMU_OPT_SIZE,
.help = "Required alignment in bytes",
},
{ /* end of list */ }
},
};
static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVBlkdebugState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *config;
uint64_t align;
const char *filename, *config;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto out;
goto fail;
}
/* Read rules from config file or command line options */
/* Read rules from config file */
config = qemu_opt_get(opts, "config");
ret = read_config(s, config, options, errp);
if (ret) {
goto out;
if (config) {
ret = read_config(s, config);
if (ret < 0) {
goto fail;
}
}
/* Set initial state */
s->state = 1;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
filename = qemu_opt_get(opts, "x-image");
if (filename == NULL) {
ret = -EINVAL;
goto fail;
}
/* Set request alignment */
align = qemu_opt_get_size(opts, "align", bs->request_alignment);
if (align > 0 && align < INT_MAX && !(align & (align - 1))) {
bs->request_alignment = align;
} else {
error_setg(errp, "Invalid alignment");
ret = -EINVAL;
goto fail_unref;
ret = bdrv_file_open(&bs->file, filename, NULL, flags);
if (ret < 0) {
goto fail;
}
ret = 0;
goto out;
fail_unref:
bdrv_unref(bs->file);
out:
fail:
qemu_opts_del(opts);
return ret;
}
@@ -632,9 +587,9 @@ static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *next;
BlkdebugSuspendedReq *r;
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, next) {
QLIST_FOREACH(r, &s->suspended_reqs, next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
return 0;
@@ -643,31 +598,6 @@ static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
return -ENOENT;
}
static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *r_next;
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
remove_rule(rule);
ret = 0;
}
}
}
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, r_next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
ret = 0;
}
}
return ret;
}
static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
{
@@ -702,8 +632,6 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_debug_event = blkdebug_debug_event,
.bdrv_debug_breakpoint = blkdebug_debug_breakpoint,
.bdrv_debug_remove_breakpoint
= blkdebug_debug_remove_breakpoint,
.bdrv_debug_resume = blkdebug_debug_resume,
.bdrv_debug_is_suspended = blkdebug_debug_is_suspended,
};

block/blkverify.c

@@ -78,9 +78,7 @@ static void blkverify_parse_filename(const char *filename, QDict *options,
/* Parse the blkverify: prefix */
if (!strstart(filename, "blkverify:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
error_setg(errp, "File name string must start with 'blkverify:'");
return;
}
@@ -118,37 +116,46 @@ static QemuOptsList runtime_opts = {
},
};
static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int blkverify_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVBlkverifyState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *filename, *raw;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto fail;
}
/* Open the raw file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
/* Parse the raw image filename */
raw = qemu_opt_get(opts, "x-raw");
if (raw == NULL) {
ret = -EINVAL;
goto fail;
}
ret = bdrv_file_open(&bs->file, raw, NULL, flags);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
}
/* Open the test file */
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", flags, false, &local_err);
filename = qemu_opt_get(opts, "x-image");
if (filename == NULL) {
ret = -EINVAL;
goto fail;
}
s->test_file = bdrv_new("");
ret = bdrv_open(s->test_file, filename, NULL, flags, NULL);
if (ret < 0) {
error_propagate(errp, local_err);
bdrv_delete(s->test_file);
s->test_file = NULL;
goto fail;
}
@@ -162,7 +169,7 @@ static void blkverify_close(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_unref(s->test_file);
bdrv_delete(s->test_file);
s->test_file = NULL;
}
@@ -173,6 +180,110 @@ static int64_t blkverify_getlength(BlockDriverState *bs)
return bdrv_getlength(s->test_file);
}
/**
* Check that I/O vector contents are identical
*
* @a: I/O vector
* @b: I/O vector
* @ret: Offset to first mismatching byte or -1 if match
*/
static ssize_t blkverify_iovec_compare(QEMUIOVector *a, QEMUIOVector *b)
{
int i;
ssize_t offset = 0;
assert(a->niov == b->niov);
for (i = 0; i < a->niov; i++) {
size_t len = 0;
uint8_t *p = (uint8_t *)a->iov[i].iov_base;
uint8_t *q = (uint8_t *)b->iov[i].iov_base;
assert(a->iov[i].iov_len == b->iov[i].iov_len);
while (len < a->iov[i].iov_len && *p++ == *q++) {
len++;
}
offset += len;
if (len != a->iov[i].iov_len) {
return offset;
}
}
return -1;
}
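The same compare-and-report-offset idea works on plain struct iovec from <sys/uio.h>; the standalone sketch below mirrors the loop above and prints the offset of the first mismatching byte (the buffers and values are made up for the demo):

#include <assert.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Walk two scatter/gather lists in lockstep and report the offset of the
 * first differing byte, or -1 if the contents are identical. */
static ssize_t iovec_compare(const struct iovec *a, const struct iovec *b,
                             int niov)
{
    ssize_t offset = 0;

    for (int i = 0; i < niov; i++) {
        const unsigned char *p = a[i].iov_base;
        const unsigned char *q = b[i].iov_base;
        size_t len = 0;

        assert(a[i].iov_len == b[i].iov_len);
        while (len < a[i].iov_len && p[len] == q[len]) {
            len++;
        }
        offset += len;
        if (len != a[i].iov_len) {
            return offset;    /* index of the first mismatching byte */
        }
    }
    return -1;                /* identical */
}

int main(void)
{
    char x0[] = "abcd", x1[] = "efg";
    char y0[] = "abcd", y1[] = "eXg";
    struct iovec a[2] = { { .iov_base = x0, .iov_len = sizeof(x0) },
                          { .iov_base = x1, .iov_len = sizeof(x1) } };
    struct iovec b[2] = { { .iov_base = y0, .iov_len = sizeof(y0) },
                          { .iov_base = y1, .iov_len = sizeof(y1) } };

    printf("%zd\n", iovec_compare(a, b, 2));   /* 6: the 'X' in the second buffer */
    return 0;
}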
typedef struct {
int src_index;
struct iovec *src_iov;
void *dest_base;
} IOVectorSortElem;
static int sortelem_cmp_src_base(const void *a, const void *b)
{
const IOVectorSortElem *elem_a = a;
const IOVectorSortElem *elem_b = b;
/* Don't overflow */
if (elem_a->src_iov->iov_base < elem_b->src_iov->iov_base) {
return -1;
} else if (elem_a->src_iov->iov_base > elem_b->src_iov->iov_base) {
return 1;
} else {
return 0;
}
}
static int sortelem_cmp_src_index(const void *a, const void *b)
{
const IOVectorSortElem *elem_a = a;
const IOVectorSortElem *elem_b = b;
return elem_a->src_index - elem_b->src_index;
}
/**
* Copy contents of I/O vector
*
* The relative relationships of overlapping iovecs are preserved. This is
* necessary to ensure identical semantics in the cloned I/O vector.
*/
static void blkverify_iovec_clone(QEMUIOVector *dest, const QEMUIOVector *src,
void *buf)
{
IOVectorSortElem sortelems[src->niov];
void *last_end;
int i;
/* Sort by source iovecs by base address */
for (i = 0; i < src->niov; i++) {
sortelems[i].src_index = i;
sortelems[i].src_iov = &src->iov[i];
}
qsort(sortelems, src->niov, sizeof(sortelems[0]), sortelem_cmp_src_base);
/* Allocate buffer space taking into account overlapping iovecs */
last_end = NULL;
for (i = 0; i < src->niov; i++) {
struct iovec *cur = sortelems[i].src_iov;
ptrdiff_t rewind = 0;
/* Detect overlap */
if (last_end && last_end > cur->iov_base) {
rewind = last_end - cur->iov_base;
}
sortelems[i].dest_base = buf - rewind;
buf += cur->iov_len - MIN(rewind, cur->iov_len);
last_end = MAX(cur->iov_base + cur->iov_len, last_end);
}
/* Sort by source iovec index and build destination iovec */
qsort(sortelems, src->niov, sizeof(sortelems[0]), sortelem_cmp_src_index);
for (i = 0; i < src->niov; i++) {
qemu_iovec_add(dest, sortelems[i].dest_base, src->iov[i].iov_len);
}
}
static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
@@ -236,7 +347,7 @@ static void blkverify_aio_cb(void *opaque, int ret)
static void blkverify_verify_readv(BlkverifyAIOCB *acb)
{
ssize_t offset = qemu_iovec_compare(acb->qiov, &acb->raw_qiov);
ssize_t offset = blkverify_iovec_compare(acb->qiov, &acb->raw_qiov);
if (offset != -1) {
blkverify_err(acb, "contents mismatch in sector %" PRId64,
acb->sector_num + (int64_t)(offset / BDRV_SECTOR_SIZE));
@@ -254,7 +365,7 @@ static BlockDriverAIOCB *blkverify_aio_readv(BlockDriverState *bs,
acb->verify = blkverify_verify_readv;
acb->buf = qemu_blockalign(bs->file, qiov->size);
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
blkverify_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
@@ -288,20 +399,6 @@ static BlockDriverAIOCB *blkverify_aio_flush(BlockDriverState *bs,
return bdrv_aio_flush(s->test_file, cb, opaque);
}
static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate)
{
BDRVBlkverifyState *s = bs->opaque;
bool perm = bdrv_recurse_is_first_non_filter(bs->file, candidate);
if (perm) {
return true;
}
return bdrv_recurse_is_first_non_filter(s->test_file, candidate);
}
static BlockDriver bdrv_blkverify = {
.format_name = "blkverify",
.protocol_name = "blkverify",
@@ -315,9 +412,6 @@ static BlockDriver bdrv_blkverify = {
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = blkverify_recurse_is_first_non_filter,
};
static void bdrv_blkverify_init(void)

block/bochs.c

@@ -39,41 +39,56 @@
// not allocated: 0xffffffff
// always little-endian
struct bochs_header {
char magic[32]; /* "Bochs Virtual HD Image" */
char type[16]; /* "Redolog" */
char subtype[16]; /* "Undoable" / "Volatile" / "Growing" */
struct bochs_header_v1 {
char magic[32]; // "Bochs Virtual HD Image"
char type[16]; // "Redolog"
char subtype[16]; // "Undoable" / "Volatile" / "Growing"
uint32_t version;
uint32_t header; /* size of header */
uint32_t catalog; /* num of entries */
uint32_t bitmap; /* bitmap size */
uint32_t extent; /* extent size */
uint32_t header; // size of header
union {
struct {
uint32_t reserved; /* for ??? */
uint64_t disk; /* disk size */
char padding[HEADER_SIZE - 64 - 20 - 12];
} QEMU_PACKED redolog;
struct {
uint64_t disk; /* disk size */
char padding[HEADER_SIZE - 64 - 20 - 8];
} QEMU_PACKED redolog_v1;
char padding[HEADER_SIZE - 64 - 20];
struct {
uint32_t catalog; // num of entries
uint32_t bitmap; // bitmap size
uint32_t extent; // extent size
uint64_t disk; // disk size
char padding[HEADER_SIZE - 64 - 8 - 20];
} redolog;
char padding[HEADER_SIZE - 64 - 8];
} extra;
} QEMU_PACKED;
};
// always little-endian
struct bochs_header {
char magic[32]; // "Bochs Virtual HD Image"
char type[16]; // "Redolog"
char subtype[16]; // "Undoable" / "Volatile" / "Growing"
uint32_t version;
uint32_t header; // size of header
union {
struct {
uint32_t catalog; // num of entries
uint32_t bitmap; // bitmap size
uint32_t extent; // extent size
uint32_t reserved; // for ???
uint64_t disk; // disk size
char padding[HEADER_SIZE - 64 - 8 - 24];
} redolog;
char padding[HEADER_SIZE - 64 - 8];
} extra;
};
typedef struct BDRVBochsState {
CoMutex lock;
uint32_t *catalog_bitmap;
uint32_t catalog_size;
int catalog_size;
uint32_t data_offset;
int data_offset;
uint32_t bitmap_blocks;
uint32_t extent_blocks;
uint32_t extent_size;
int bitmap_blocks;
int extent_blocks;
int extent_size;
} BDRVBochsState;
static int bochs_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -93,12 +108,12 @@ static int bochs_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int bochs_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVBochsState *s = bs->opaque;
uint32_t i;
int i;
struct bochs_header bochs;
struct bochs_header_v1 header_v1;
int ret;
bs->read_only = 1; // no write support yet
@@ -113,24 +128,17 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
strcmp(bochs.subtype, GROWING_TYPE) ||
((le32_to_cpu(bochs.version) != HEADER_VERSION) &&
(le32_to_cpu(bochs.version) != HEADER_V1))) {
error_setg(errp, "Image not in Bochs format");
return -EINVAL;
return -EMEDIUMTYPE;
}
if (le32_to_cpu(bochs.version) == HEADER_V1) {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog_v1.disk) / 512;
memcpy(&header_v1, &bochs, sizeof(bochs));
bs->total_sectors = le64_to_cpu(header_v1.extra.redolog.disk) / 512;
} else {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog.disk) / 512;
}
/* Limit to 1M entries to avoid unbounded allocation. This is what is
* needed for the largest image that bximage can create (~8 TB). */
s->catalog_size = le32_to_cpu(bochs.catalog);
if (s->catalog_size > 0x100000) {
error_setg(errp, "Catalog size is too large");
return -EFBIG;
bs->total_sectors = le64_to_cpu(bochs.extra.redolog.disk) / 512;
}
s->catalog_size = le32_to_cpu(bochs.extra.redolog.catalog);
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
ret = bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
@@ -144,34 +152,10 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
s->data_offset = le32_to_cpu(bochs.header) + (s->catalog_size * 4);
s->bitmap_blocks = 1 + (le32_to_cpu(bochs.bitmap) - 1) / 512;
s->extent_blocks = 1 + (le32_to_cpu(bochs.extent) - 1) / 512;
s->bitmap_blocks = 1 + (le32_to_cpu(bochs.extra.redolog.bitmap) - 1) / 512;
s->extent_blocks = 1 + (le32_to_cpu(bochs.extra.redolog.extent) - 1) / 512;
s->extent_size = le32_to_cpu(bochs.extent);
if (s->extent_size < BDRV_SECTOR_SIZE) {
/* bximage actually never creates extents smaller than 4k */
error_setg(errp, "Extent size must be at least 512");
ret = -EINVAL;
goto fail;
} else if (!is_power_of_2(s->extent_size)) {
error_setg(errp, "Extent size %" PRIu32 " is not a power of two",
s->extent_size);
ret = -EINVAL;
goto fail;
} else if (s->extent_size > 0x800000) {
error_setg(errp, "Extent size %" PRIu32 " is too large",
s->extent_size);
ret = -EINVAL;
goto fail;
}
if (s->catalog_size < DIV_ROUND_UP(bs->total_sectors,
s->extent_size / BDRV_SECTOR_SIZE))
{
error_setg(errp, "Catalog size is too small for this disk size");
ret = -EINVAL;
goto fail;
}
s->extent_size = le32_to_cpu(bochs.extra.redolog.extent);
qemu_co_mutex_init(&s->lock);
return 0;
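The extent-size checks added above boil down to a lower bound of one sector, an upper bound of 8 MB, and the classic single-bit test for a power of two. A standalone sketch of those checks (mirroring, not reusing, the QEMU helpers):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE      512
#define MAX_EXTENT_SIZE  0x800000   /* 8 MB, the cap used above */

/* A non-zero value is a power of two iff exactly one bit is set. */
static bool is_power_of_2(uint64_t x)
{
    return x != 0 && (x & (x - 1)) == 0;
}

/* Reject extents smaller than a sector, not a power of two, or too large. */
static bool extent_size_valid(uint32_t extent_size)
{
    return extent_size >= SECTOR_SIZE &&
           is_power_of_2(extent_size) &&
           extent_size <= MAX_EXTENT_SIZE;
}

int main(void)
{
    printf("%d %d %d %d\n",
           extent_size_valid(4096),       /* 1: a typical extent size */
           extent_size_valid(100),        /* 0: below one sector */
           extent_size_valid(4096 + 1),   /* 0: not a power of two */
           extent_size_valid(1 << 24));   /* 0: larger than 8 MB */
    return 0;
}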
@@ -184,32 +168,29 @@ fail:
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVBochsState *s = bs->opaque;
uint64_t offset = sector_num * 512;
uint64_t extent_index, extent_offset, bitmap_offset;
int64_t offset = sector_num * 512;
int64_t extent_index, extent_offset, bitmap_offset;
char bitmap_entry;
int ret;
// seek to sector
extent_index = offset / s->extent_size;
extent_offset = (offset % s->extent_size) / 512;
if (s->catalog_bitmap[extent_index] == 0xffffffff) {
return 0; /* not allocated */
return -1; /* not allocated */
}
bitmap_offset = s->data_offset +
(512 * (uint64_t) s->catalog_bitmap[extent_index] *
(s->extent_blocks + s->bitmap_blocks));
bitmap_offset = s->data_offset + (512 * s->catalog_bitmap[extent_index] *
(s->extent_blocks + s->bitmap_blocks));
/* read in bitmap for current extent */
ret = bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1) != 1) {
return -1;
}
if (!((bitmap_entry >> (extent_offset % 8)) & 1)) {
return 0; /* not allocated */
return -1; /* not allocated */
}
return bitmap_offset + (512 * (s->bitmap_blocks + extent_offset));
@@ -222,16 +203,13 @@ static int bochs_read(BlockDriverState *bs, int64_t sector_num,
while (nb_sectors > 0) {
int64_t block_offset = seek_to_sector(bs, sector_num);
if (block_offset < 0) {
return block_offset;
} else if (block_offset > 0) {
if (block_offset >= 0) {
ret = bdrv_pread(bs->file, block_offset, buf, 512);
if (ret < 0) {
return ret;
if (ret != 512) {
return -1;
}
} else {
} else
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
buf += 512;

block/cloop.c

@@ -26,9 +26,6 @@
#include "qemu/module.h"
#include <zlib.h>
/* Maximum compressed block size */
#define MAX_BLOCK_SIZE (64 * 1024 * 1024)
typedef struct BDRVCloopState {
CoMutex lock;
uint32_t block_size;
@@ -56,8 +53,7 @@ static int cloop_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int cloop_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVCloopState *s = bs->opaque;
uint32_t offsets_size, max_compressed_block_size = 1, i;
@@ -71,26 +67,6 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
s->block_size = be32_to_cpu(s->block_size);
if (s->block_size % 512) {
error_setg(errp, "block_size %" PRIu32 " must be a multiple of 512",
s->block_size);
return -EINVAL;
}
if (s->block_size == 0) {
error_setg(errp, "block_size cannot be zero");
return -EINVAL;
}
/* cloop's create_compressed_fs.c warns about block sizes beyond 256 KB but
* we can accept more. Prevent ridiculous values like 4 GB - 1 since we
* need a buffer this big.
*/
if (s->block_size > MAX_BLOCK_SIZE) {
error_setg(errp, "block_size %" PRIu32 " must be %u MB or less",
s->block_size,
MAX_BLOCK_SIZE / (1024 * 1024));
return -EINVAL;
}
ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
if (ret < 0) {
@@ -99,23 +75,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
s->n_blocks = be32_to_cpu(s->n_blocks);
/* read offsets */
if (s->n_blocks > (UINT32_MAX - 1) / sizeof(uint64_t)) {
/* Prevent integer overflow */
error_setg(errp, "n_blocks %" PRIu32 " must be %zu or less",
s->n_blocks,
(UINT32_MAX - 1) / sizeof(uint64_t));
return -EINVAL;
}
offsets_size = (s->n_blocks + 1) * sizeof(uint64_t);
if (offsets_size > 512 * 1024 * 1024) {
/* Prevent ridiculous offsets_size which causes memory allocation to
* fail or overflows bdrv_pread() size. In practice the 512 MB
* offsets[] limit supports 16 TB images at 256 KB block size.
*/
error_setg(errp, "image requires too many offsets, "
"try increasing block size");
return -EINVAL;
}
offsets_size = s->n_blocks * sizeof(uint64_t);
s->offsets = g_malloc(offsets_size);
ret = bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size);
@@ -123,37 +83,13 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
for (i = 0; i < s->n_blocks + 1; i++) {
uint64_t size;
for(i=0;i<s->n_blocks;i++) {
s->offsets[i] = be64_to_cpu(s->offsets[i]);
if (i == 0) {
continue;
}
if (s->offsets[i] < s->offsets[i - 1]) {
error_setg(errp, "offsets not monotonically increasing at "
"index %" PRIu32 ", image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
size = s->offsets[i] - s->offsets[i - 1];
/* Compressed blocks should be smaller than the uncompressed block size
* but maybe compression performed poorly so the compressed block is
* actually bigger. Clamp down on unrealistic values to prevent
* ridiculous s->compressed_block allocation.
*/
if (size > 2 * MAX_BLOCK_SIZE) {
error_setg(errp, "invalid compressed block size at index %" PRIu32
", image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
if (i > 0) {
uint32_t size = s->offsets[i] - s->offsets[i - 1];
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
}
}
}
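The hardened loop above insists that the offsets[] table never decreases and that no compressed block exceeds twice MAX_BLOCK_SIZE before it sizes the scratch buffer. A standalone sketch of the same validation over a plain array (error reporting simplified to stderr):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_BLOCK_SIZE (64 * 1024 * 1024)   /* 64 MB, matching the cap above */

/* Validate a table of absolute block offsets: entries must not decrease and
 * no block may be larger than 2 * MAX_BLOCK_SIZE.  Returns the largest block
 * size found, or 0 if the table looks corrupt. */
static uint64_t validate_offsets(const uint64_t *offsets, uint32_t n)
{
    uint64_t max_size = 1;

    for (uint32_t i = 1; i < n; i++) {
        uint64_t size;

        if (offsets[i] < offsets[i - 1]) {
            fprintf(stderr, "offset %" PRIu32 " not monotonic\n", i);
            return 0;
        }
        size = offsets[i] - offsets[i - 1];
        if (size > 2ULL * MAX_BLOCK_SIZE) {
            fprintf(stderr, "block %" PRIu32 " is implausibly large\n", i);
            return 0;
        }
        if (size > max_size) {
            max_size = size;
        }
    }
    return max_size;
}

int main(void)
{
    uint64_t good[] = { 1024, 5120, 5120, 9216 };
    uint64_t bad[]  = { 1024, 512 };

    printf("%" PRIu64 "\n", validate_offsets(good, 4));   /* 4096 */
    printf("%" PRIu64 "\n", validate_offsets(bad, 2));    /* 0 */
    return 0;
}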
@@ -243,7 +179,9 @@ static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
static void cloop_close(BlockDriverState *bs)
{
BDRVCloopState *s = bs->opaque;
g_free(s->offsets);
if (s->n_blocks > 0) {
g_free(s->offsets);
}
g_free(s->compressed_block);
g_free(s->uncompressed_block);
inflateEnd(&s->zstream);

block/commit.c

@@ -103,14 +103,14 @@ wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that bdrv_drain_all() returns.
*/
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
/* Copy if allocated above the base */
ret = bdrv_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
ret = bdrv_co_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
copy = (ret == 1);
trace_commit_one_iteration(s, sector_num, n, ret);
if (copy) {
@@ -173,9 +173,9 @@ static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static const BlockJobDriver commit_job_driver = {
static BlockJobType commit_job_type = {
.instance_size = sizeof(CommitBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.job_type = "commit",
.set_speed = commit_set_speed,
};
@@ -194,11 +194,17 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
if ((on_error == BLOCKDEV_ON_ERROR_STOP ||
on_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, "Invalid parameter combination");
error_set(errp, QERR_INVALID_PARAMETER_COMBINATION);
return;
}
/* Once we support top == active layer, remove this check */
if (top == bs) {
error_setg(errp,
"Top image as the active layer is currently unsupported");
return;
}
assert(top != bs);
if (top == base) {
error_setg(errp, "Invalid files for merge: top and base are the same");
return;
@@ -232,7 +238,7 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
}
s = block_job_create(&commit_job_driver, bs, speed, cb, opaque, errp);
s = block_job_create(&commit_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}

block/cow.c

@@ -58,8 +58,7 @@ static int cow_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int cow_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int cow_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVCowState *s = bs->opaque;
struct cow_header_v2 cow_header;
@@ -74,16 +73,15 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC) {
error_setg(errp, "Image not in COW format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
if (be32_to_cpu(cow_header.version) != COW_VERSION) {
char version[64];
snprintf(version, sizeof(version),
"COW version %" PRIu32, cow_header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
"COW version %d", cow_header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "cow", version);
ret = -ENOTSUP;
goto fail;
@@ -104,45 +102,42 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
static inline void cow_set_bits(uint8_t *bitmap, int start, int64_t nb_sectors)
/*
* XXX(hch): right now these functions are extremely inefficient.
* We should just read the whole bitmap we'll need in one go instead.
*/
static inline int cow_set_bit(BlockDriverState *bs, int64_t bitnum)
{
int64_t bitnum = start, last = start + nb_sectors;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitnum + 8 <= last) {
bitmap[bitnum / 8] = 0xFF;
bitnum += 8;
continue;
}
bitmap[bitnum/8] |= (1 << (bitnum % 8));
bitnum++;
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
bitmap |= (1 << (bitnum % 8));
ret = bdrv_pwrite_sync(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return 0;
}
#define BITS_PER_BITMAP_SECTOR (512 * 8)
/* Cannot use bitmap.c on big-endian machines. */
static int cow_test_bit(int64_t bitnum, const uint8_t *bitmap)
static inline int is_bit_set(BlockDriverState *bs, int64_t bitnum)
{
return (bitmap[bitnum / 8] & (1 << (bitnum & 7))) != 0;
}
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
static int cow_find_streak(const uint8_t *bitmap, int value, int start, int nb_sectors)
{
int streak_value = value ? 0xFF : 0;
int last = MIN(start + nb_sectors, BITS_PER_BITMAP_SECTOR);
int bitnum = start;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitmap[bitnum / 8] == streak_value) {
bitnum += 8;
continue;
}
if (cow_test_bit(bitnum, bitmap) == value) {
bitnum++;
continue;
}
break;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return MIN(bitnum, last) - start;
return !!(bitmap & (1 << (bitnum % 8)));
}
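The cow_find_streak() helper above counts how many consecutive bitmap bits share a value, skipping whole 0x00/0xFF bytes eight bits at a time. A standalone sketch of that scan over an ordinary byte array (bounds kept explicit; not the QEMU code):

#include <stdint.h>
#include <stdio.h>

/* Count consecutive bits equal to value (0 or 1) starting at bit `start`,
 * scanning at most `limit` bits.  Whole bytes that are entirely 0x00 or
 * 0xFF are skipped eight bits at a time. */
static int bit_streak(const uint8_t *bitmap, int value, int start, int limit)
{
    int streak_byte = value ? 0xFF : 0x00;
    int bit = start;
    int end = start + limit;

    while (bit < end) {
        if ((bit & 7) == 0 && bit + 8 <= end && bitmap[bit / 8] == streak_byte) {
            bit += 8;                              /* fast path: whole byte */
            continue;
        }
        if (((bitmap[bit / 8] >> (bit & 7)) & 1) != value) {
            break;
        }
        bit++;
    }
    return bit - start;
}

int main(void)
{
    uint8_t bitmap[4] = { 0xFF, 0xEF, 0xFF, 0xFF };   /* only bit 12 is clear */

    printf("%d\n", bit_streak(bitmap, 1, 0, 32));     /* 12: bits 0..11 are set */
    printf("%d\n", bit_streak(bitmap, 0, 12, 20));    /* 1: only bit 12 is clear */
    return 0;
}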
/* Return true if first block has been changed (ie. current version is
@@ -151,100 +146,40 @@ static int cow_find_streak(const uint8_t *bitmap, int value, int start, int nb_s
static int coroutine_fn cow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int changed = 0, same = 0;
int changed;
do {
int ret;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
int sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
if (first) {
changed = cow_test_bit(bitnum, bitmap);
first = false;
}
same += cow_find_streak(bitmap, changed, bitnum, nb_sectors);
bitnum += sector_bits;
nb_sectors -= sector_bits;
offset += BDRV_SECTOR_SIZE;
} while (nb_sectors);
*num_same = same;
return changed;
}
static int64_t coroutine_fn cow_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
{
BDRVCowState *s = bs->opaque;
int ret = cow_co_is_allocated(bs, sector_num, nb_sectors, num_same);
int64_t offset = s->cow_sectors_offset + (sector_num << BDRV_SECTOR_BITS);
if (ret < 0) {
return ret;
if (nb_sectors == 0) {
*num_same = nb_sectors;
return 0;
}
return (ret ? BDRV_BLOCK_DATA : 0) | offset | BDRV_BLOCK_OFFSET_VALID;
changed = is_bit_set(bs, sector_num);
if (changed < 0) {
return 0; /* XXX: how to return I/O errors? */
}
for (*num_same = 1; *num_same < nb_sectors; (*num_same)++) {
if (is_bit_set(bs, sector_num + *num_same) != changed)
break;
}
return changed;
}
static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int sector_bits;
int error = 0;
int i;
for ( ; nb_sectors;
bitnum += sector_bits,
nb_sectors -= sector_bits,
offset += BDRV_SECTOR_SIZE) {
int ret, set;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
/* Skip over any already set bits */
set = cow_find_streak(bitmap, 1, bitnum, sector_bits);
bitnum += set;
sector_bits -= set;
nb_sectors -= set;
if (!sector_bits) {
continue;
}
if (first) {
ret = bdrv_flush(bs->file);
if (ret < 0) {
return ret;
}
first = false;
}
cow_set_bits(bitmap, bitnum, sector_bits);
ret = bdrv_pwrite(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
for (i = 0; i < nb_sectors; i++) {
error = cow_set_bit(bs, sector_num + i);
if (error) {
break;
}
}
return 0;
return error;
}
static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
@@ -254,11 +189,7 @@ static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
int ret, n;
while (nb_sectors > 0) {
ret = cow_co_is_allocated(bs, sector_num, nb_sectors, &n);
if (ret < 0) {
return ret;
}
if (ret) {
if (bdrv_co_is_allocated(bs, sector_num, nb_sectors, &n)) {
ret = bdrv_pread(bs->file,
s->cow_sectors_offset + sector_num * 512,
buf, n * 512);
@@ -324,14 +255,12 @@ static void cow_close(BlockDriverState *bs)
{
}
static int cow_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int cow_create(const char *filename, QEMUOptionParameter *options)
{
struct cow_header_v2 cow_header;
struct stat st;
int64_t image_sectors = 0;
const char *image_filename = NULL;
Error *local_err = NULL;
int ret;
BlockDriverState *cow_bs;
@@ -345,17 +274,13 @@ static int cow_create(const char *filename, QEMUOptionParameter *options,
options++;
}
ret = bdrv_create_file(filename, options, &local_err);
ret = bdrv_create_file(filename, options);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
cow_bs = NULL;
ret = bdrv_open(&cow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
ret = bdrv_file_open(&cow_bs, filename, NULL, BDRV_O_RDWR);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
@@ -389,7 +314,7 @@ static int cow_create(const char *filename, QEMUOptionParameter *options,
}
exit:
bdrv_unref(cow_bs);
bdrv_delete(cow_bs);
return ret;
}
@@ -415,11 +340,10 @@ static BlockDriver bdrv_cow = {
.bdrv_open = cow_open,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_read = cow_co_read,
.bdrv_write = cow_co_write,
.bdrv_co_get_block_status = cow_co_get_block_status,
.bdrv_co_is_allocated = cow_co_is_allocated,
.create_options = cow_create_options,
};

block/curl.c

@@ -23,7 +23,6 @@
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include <curl/curl.h>
// #define DEBUG
@@ -35,26 +34,6 @@
#define DPRINTF(fmt, ...) do { } while (0)
#endif
#if LIBCURL_VERSION_NUM >= 0x071000
/* The multi interface timer callback was introduced in 7.16.0 */
#define NEED_CURL_TIMER_CALLBACK
#define HAVE_SOCKET_ACTION
#endif
#ifndef HAVE_SOCKET_ACTION
/* If curl_multi_socket_action isn't available, define it statically here in
* terms of curl_multi_socket. Note that ev_bitmask will be ignored, which is
* less efficient but still safe. */
static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
curl_socket_t sockfd,
int ev_bitmask,
int *running_handles)
{
return curl_multi_socket(multi_handle, sockfd, running_handles);
}
#define curl_multi_socket_action __curl_multi_socket_action
#endif
#define PROTOCOLS (CURLPROTO_HTTP | CURLPROTO_HTTPS | \
CURLPROTO_FTP | CURLPROTO_FTPS | \
CURLPROTO_TFTP)
@@ -62,16 +41,12 @@ static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
#define CURL_NUM_STATES 8
#define CURL_NUM_ACB 8
#define SECTOR_SIZE 512
#define READ_AHEAD_DEFAULT (256 * 1024)
#define READ_AHEAD_SIZE (256 * 1024)
#define FIND_RET_NONE 0
#define FIND_RET_OK 1
#define FIND_RET_WAIT 2
#define CURL_BLOCK_OPT_URL "url"
#define CURL_BLOCK_OPT_READAHEAD "readahead"
#define CURL_BLOCK_OPT_SSLVERIFY "sslverify"
struct BDRVCURLState;
typedef struct CURLAIOCB {
@@ -91,7 +66,6 @@ typedef struct CURLState
struct BDRVCURLState *s;
CURLAIOCB *acb[CURL_NUM_ACB];
CURL *curl;
curl_socket_t sock_fd;
char *orig_buf;
size_t buf_start;
size_t buf_off;
@@ -103,71 +77,47 @@ typedef struct CURLState
typedef struct BDRVCURLState {
CURLM *multi;
QEMUTimer timer;
size_t len;
CURLState states[CURL_NUM_STATES];
char *url;
size_t readahead_size;
bool sslverify;
bool accept_range;
} BDRVCURLState;
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static void curl_multi_read(void *arg);
#ifdef NEED_CURL_TIMER_CALLBACK
static int curl_timer_cb(CURLM *multi, long timeout_ms, void *opaque)
{
BDRVCURLState *s = opaque;
DPRINTF("CURL: timer callback timeout_ms %ld\n", timeout_ms);
if (timeout_ms == -1) {
timer_del(&s->timer);
} else {
int64_t timeout_ns = (int64_t)timeout_ms * 1000 * 1000;
timer_mod(&s->timer,
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timeout_ns);
}
return 0;
}
#endif
static int curl_aio_flush(void *opaque);
static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
void *s, void *sp)
{
CURLState *state = NULL;
curl_easy_getinfo(curl, CURLINFO_PRIVATE, (char **)&state);
state->sock_fd = fd;
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
qemu_aio_set_fd_handler(fd, curl_multi_read, NULL, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
break;
case CURL_POLL_OUT:
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
break;
case CURL_POLL_INOUT:
qemu_aio_set_fd_handler(fd, curl_multi_read, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
curl_aio_flush, s);
break;
case CURL_POLL_REMOVE:
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
break;
}
return 0;
}
static size_t curl_header_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
static size_t curl_size_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
{
BDRVCURLState *s = opaque;
CURLState *s = ((CURLState*)opaque);
size_t realsize = size * nmemb;
const char *accept_line = "Accept-Ranges: bytes";
size_t fsize;
if (realsize >= strlen(accept_line)
&& strncmp((char *)ptr, accept_line, strlen(accept_line)) == 0) {
s->accept_range = true;
if(sscanf(ptr, "Content-Length: %zd", &fsize) == 1) {
s->s->len = fsize;
}
return realsize;
@@ -182,13 +132,8 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
DPRINTF("CURL: Just reading %zd bytes\n", realsize);
if (!s || !s->orig_buf)
return 0;
goto read_end;
if (s->buf_off >= s->buf_len) {
/* buffer full, read nothing */
return 0;
}
realsize = MIN(realsize, s->buf_len - s->buf_off);
memcpy(s->orig_buf + s->buf_off, ptr, realsize);
s->buf_off += realsize;
@@ -207,6 +152,7 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
}
}
read_end:
return realsize;
}
@@ -241,8 +187,7 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
}
// Wait for unfinished chunks
if (state->in_use &&
(start >= state->buf_start) &&
if ((start >= state->buf_start) &&
(start <= buf_fend) &&
(end >= state->buf_start) &&
(end <= buf_fend))
@@ -264,87 +209,61 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
return FIND_RET_NONE;
}
static void curl_multi_check_completion(BDRVCURLState *s)
static void curl_multi_do(void *arg)
{
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
int r;
int msgs_in_queue;
if (!s->multi)
return;
do {
r = curl_multi_socket_all(s->multi, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
/* Try to find done transfers, so we can free the easy
* handle again. */
for (;;) {
do {
CURLMsg *msg;
msg = curl_multi_info_read(s->multi, &msgs_in_queue);
/* Quit when there are no more completions */
if (!msg)
break;
if (msg->msg == CURLMSG_DONE) {
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
(char **)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
if (msg->msg == CURLMSG_NONE)
break;
switch (msg->msg) {
case CURLMSG_DONE:
{
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char**)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
break;
}
default:
msgs_in_queue = 0;
break;
}
}
}
static void curl_multi_do(void *arg)
{
CURLState *s = (CURLState *)arg;
int running;
int r;
if (!s->s->multi) {
return;
}
do {
r = curl_multi_socket_action(s->s->multi, s->sock_fd, 0, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
}
static void curl_multi_read(void *arg)
{
CURLState *s = (CURLState *)arg;
curl_multi_do(arg);
curl_multi_check_completion(s->s);
}
static void curl_multi_timeout_do(void *arg)
{
#ifdef NEED_CURL_TIMER_CALLBACK
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
if (!s->multi) {
return;
}
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
curl_multi_check_completion(s);
#else
abort();
#endif
} while(msgs_in_queue);
}
static CURLState *curl_init_state(BDRVCURLState *s)
@@ -365,44 +284,44 @@ static CURLState *curl_init_state(BDRVCURLState *s)
break;
}
if (!state) {
qemu_aio_wait();
g_usleep(100);
curl_multi_do(s);
}
} while(!state);
if (!state->curl) {
state->curl = curl_easy_init();
if (!state->curl) {
return NULL;
}
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
(long) s->sslverify);
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION,
(void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
if (state->curl)
goto has_curl;
/* Restrict supported protocols to avoid security issues in the more
* obscure protocols. For example, do not allow POP3/SMTP/IMAP see
* CVE-2013-0249.
*
* Restricting protocols is only supported from 7.19.4 upwards.
*/
state->curl = curl_easy_init();
if (!state->curl)
return NULL;
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
/* Restrict supported protocols to avoid security issues in the more
* obscure protocols. For example, do not allow POP3/SMTP/IMAP see
* CVE-2013-0249.
*
* Restricting protocols is only supported from 7.19.4 upwards.
*/
#if LIBCURL_VERSION_NUM >= 0x071304
curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
#endif
#ifdef DEBUG_VERBOSE
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
#endif
}
has_curl:
state->s = s;
@@ -419,7 +338,43 @@ static void curl_clean_state(CURLState *s)
static void curl_parse_filename(const char *filename, QDict *options,
Error **errp)
{
qdict_put(options, CURL_BLOCK_OPT_URL, qstring_from_str(filename));
#define RA_OPTSTR ":readahead="
char *file;
char *ra;
const char *ra_val;
int parse_state = 0;
file = g_strdup(filename);
/* Parse a trailing ":readahead=#:" param, if present. */
ra = file + strlen(file) - 1;
while (ra >= file) {
if (parse_state == 0) {
if (*ra == ':') {
parse_state++;
} else {
break;
}
} else if (parse_state == 1) {
if (*ra > '9' || *ra < '0') {
char *opt_start = ra - strlen(RA_OPTSTR) + 1;
if (opt_start > file &&
strncmp(opt_start, RA_OPTSTR, strlen(RA_OPTSTR)) == 0) {
ra_val = ra + 1;
ra -= strlen(RA_OPTSTR) - 1;
*ra = '\0';
qdict_put(options, "readahead", qstring_from_str(ra_val));
}
break;
}
}
ra--;
}
qdict_put(options, "url", qstring_from_str(file));
g_free(file);
}
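The legacy parser above walks the string backwards looking for a trailing ':readahead=<bytes>' option and splits it off the URL. A rough standalone sketch of the same splitting using only the standard library (it scans with strstr rather than a reverse state machine, so corner-case behaviour may differ; the URL in main() is made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define RA_OPTSTR ":readahead="

/* Split "url:readahead=<num>" into URL and readahead value.  On success the
 * option is cut off `file` in place and the value is returned; otherwise
 * `file` is left untouched and 0 is returned. */
static unsigned long split_readahead(char *file)
{
    char *opt = strstr(file, RA_OPTSTR);
    char *end;
    unsigned long ra;

    if (opt == NULL) {
        return 0;
    }
    ra = strtoul(opt + strlen(RA_OPTSTR), &end, 10);
    if (end == opt + strlen(RA_OPTSTR) || *end != '\0') {
        return 0;             /* not a plain decimal number */
    }
    *opt = '\0';              /* truncate the option off the URL */
    return ra;
}

int main(void)
{
    char file[] = "http://example.com/image.iso:readahead=65536";
    unsigned long ra = split_readahead(file);

    printf("url=%s readahead=%lu\n", file, ra);
    return 0;
}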
static QemuOptsList runtime_opts = {
@@ -427,26 +382,20 @@ static QemuOptsList runtime_opts = {
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = CURL_BLOCK_OPT_URL,
.name = "url",
.type = QEMU_OPT_STRING,
.help = "URL to open",
},
{
.name = CURL_BLOCK_OPT_READAHEAD,
.name = "readahead",
.type = QEMU_OPT_SIZE,
.help = "Readahead size",
},
{
.name = CURL_BLOCK_OPT_SSLVERIFY,
.type = QEMU_OPT_BOOL,
.help = "Verify SSL certificate"
},
{ /* end of list */ }
},
};
static int curl_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int curl_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVCURLState *s = bs->opaque;
CURLState *state = NULL;
@@ -457,31 +406,25 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
static int inited = 0;
if (flags & BDRV_O_RDWR) {
error_setg(errp, "curl block device does not support writes");
return -EROFS;
}
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
goto out_noclean;
}
s->readahead_size = qemu_opt_get_size(opts, CURL_BLOCK_OPT_READAHEAD,
READ_AHEAD_DEFAULT);
s->readahead_size = qemu_opt_get_size(opts, "readahead", READ_AHEAD_SIZE);
if ((s->readahead_size & 0x1ff) != 0) {
error_setg(errp, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512",
s->readahead_size);
fprintf(stderr, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512\n",
s->readahead_size);
goto out_noclean;
}
s->sslverify = qemu_opt_get_bool(opts, CURL_BLOCK_OPT_SSLVERIFY, true);
file = qemu_opt_get(opts, CURL_BLOCK_OPT_URL);
file = qemu_opt_get(opts, "url");
if (file == NULL) {
error_setg(errp, "curl block driver requires an 'url' option");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "curl block driver requires "
"an 'url' option");
goto out_noclean;
}
@@ -498,50 +441,36 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
// Get file size
s->accept_range = false;
curl_easy_setopt(state->curl, CURLOPT_NOBODY, 1);
curl_easy_setopt(state->curl, CURLOPT_HEADERFUNCTION,
curl_header_cb);
curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_size_cb);
if (curl_easy_perform(state->curl))
goto out;
curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_NOBODY, 0);
if (d)
s->len = (size_t)d;
else if(!s->len)
goto out;
if ((!strncasecmp(s->url, "http://", strlen("http://"))
|| !strncasecmp(s->url, "https://", strlen("https://")))
&& !s->accept_range) {
pstrcpy(state->errmsg, CURL_ERROR_SIZE,
"Server does not support 'range' (byte ranges).");
goto out;
}
DPRINTF("CURL: Size = %zd\n", s->len);
curl_clean_state(state);
curl_easy_cleanup(state->curl);
state->curl = NULL;
aio_timer_init(bdrv_get_aio_context(bs), &s->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
curl_multi_timeout_do, s);
// Now we know the file exists and its size, so let's
// initialize the multi interface!
s->multi = curl_multi_init();
curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
#ifdef NEED_CURL_TIMER_CALLBACK
curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, curl_timer_cb);
#endif
curl_multi_setopt( s->multi, CURLMOPT_SOCKETDATA, s);
curl_multi_setopt( s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb );
curl_multi_do(s);
qemu_opts_del(opts);
return 0;
out:
error_setg(errp, "CURL: Error opening file: %s", state->errmsg);
fprintf(stderr, "CURL: Error opening file: %s\n", state->errmsg);
curl_easy_cleanup(state->curl);
state->curl = NULL;
out_noclean:
@@ -550,6 +479,21 @@ out_noclean:
return -EINVAL;
}
static int curl_aio_flush(void *opaque)
{
BDRVCURLState *s = opaque;
int i, j;
for (i=0; i < CURL_NUM_STATES; i++) {
for(j=0; j < CURL_NUM_ACB; j++) {
if (s->states[i].acb[j]) {
return 1;
}
}
}
return 0;
}
static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
{
// Do we have to implement canceling? Seems to work without...
@@ -564,7 +508,6 @@ static const AIOCBInfo curl_aiocb_info = {
static void curl_readv_bh_cb(void *p)
{
CURLState *state;
int running;
CURLAIOCB *acb = p;
BDRVCURLState *s = acb->common.bs->opaque;
@@ -613,9 +556,8 @@ static void curl_readv_bh_cb(void *p)
curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
curl_multi_add_handle(s->multi, state->curl);
curl_multi_do(s);
/* Tell curl it needs to kick things off */
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
}
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
@@ -631,6 +573,12 @@ static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
acb->nb_sectors = nb_sectors;
acb->bh = qemu_bh_new(curl_readv_bh_cb, acb);
if (!acb->bh) {
DPRINTF("CURL: qemu_bh_new failed\n");
return NULL;
}
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -655,9 +603,6 @@ static void curl_close(BlockDriverState *bs)
}
if (s->multi)
curl_multi_cleanup(s->multi);
timer_del(&s->timer);
g_free(s->url);
}
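The hunks above wire the curl multi interface to a timer: when NEED_CURL_TIMER_CALLBACK is defined, libcurl is handed a timer callback, and the driver later kicks curl_multi_socket_action() with CURL_SOCKET_TIMEOUT once that timer fires. A minimal standalone sketch of this libcurl pattern, independent of QEMU's timer API (the Demo* names are hypothetical, not the driver's):
#include <curl/curl.h>

/* Hypothetical state for the sketch; the real driver keeps a QEMUTimer in
 * its BDRVCURLState instead of a plain field. */
typedef struct {
    CURLM *multi;
    long   pending_timeout_ms;   /* deadline the surrounding event loop should arm */
} DemoCurlState;

/* libcurl calls this whenever the multi handle needs its timeout (re)armed.
 * timeout_ms == -1 asks for the timer to be cancelled. */
static int demo_timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
    DemoCurlState *s = userp;

    (void)multi;
    s->pending_timeout_ms = timeout_ms;
    return 0;
}

/* Called by the event loop when the remembered timeout expires, so libcurl
 * can time out stalled transfers. */
static void demo_timeout_expired(DemoCurlState *s)
{
    int running;

    curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
}

static void demo_setup(DemoCurlState *s)
{
    s->multi = curl_multi_init();
    curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
    curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, demo_timer_cb);
}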


@@ -27,14 +27,6 @@
#include "qemu/module.h"
#include <zlib.h>
enum {
/* Limit chunk sizes to prevent unreasonable amounts of memory being used
* or truncating when converting to 32-bit types
*/
DMG_LENGTHS_MAX = 64 * 1024 * 1024, /* 64 MB */
DMG_SECTORCOUNTS_MAX = DMG_LENGTHS_MAX / 512,
};
typedef struct BDRVDMGState {
CoMutex lock;
/* each chunk contains a certain number of sectors,
@@ -100,44 +92,12 @@ static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
return 0;
}
/* Increase max chunk sizes, if necessary. This function is used to calculate
* the buffer sizes needed for compressed/uncompressed chunk I/O.
*/
static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
uint32_t *max_compressed_size,
uint32_t *max_sectors_per_chunk)
{
uint32_t compressed_size = 0;
uint32_t uncompressed_sectors = 0;
switch (s->types[chunk]) {
case 0x80000005: /* zlib compressed */
compressed_size = s->lengths[chunk];
uncompressed_sectors = s->sectorcounts[chunk];
break;
case 1: /* copy */
uncompressed_sectors = (s->lengths[chunk] + 511) / 512;
break;
case 2: /* zero */
uncompressed_sectors = s->sectorcounts[chunk];
break;
}
if (compressed_size > *max_compressed_size) {
*max_compressed_size = compressed_size;
}
if (uncompressed_sectors > *max_sectors_per_chunk) {
*max_sectors_per_chunk = uncompressed_sectors;
}
}
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int dmg_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVDMGState *s = bs->opaque;
uint64_t info_begin, info_end, last_in_offset, last_out_offset;
uint64_t info_begin,info_end,last_in_offset,last_out_offset;
uint32_t count, tmp;
uint32_t max_compressed_size = 1, max_sectors_per_chunk = 1, i;
uint32_t max_compressed_size=1,max_sectors_per_chunk=1,i;
int64_t offset;
int ret;
@@ -199,40 +159,37 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
if (type == 0x6d697368 && count >= 244) {
size_t new_size;
uint32_t chunk_count;
if (type == 0x6d697368 && count >= 244) {
int new_size, chunk_count;
offset += 4;
offset += 200;
chunk_count = (count - 204) / 40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
chunk_count = (count-204)/40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size/2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
ret = read_uint32(bs, offset, &s->types[i]);
if (ret < 0) {
goto fail;
}
offset += 4;
if (s->types[i] != 0x80000005 && s->types[i] != 1 &&
s->types[i] != 2) {
if (s->types[i] == 0xffffffff && i > 0) {
last_in_offset = s->offsets[i - 1] + s->lengths[i - 1];
last_out_offset = s->sectors[i - 1] +
s->sectorcounts[i - 1];
}
chunk_count--;
i--;
offset += 36;
continue;
}
offset += 4;
offset += 4;
if(s->types[i]!=0x80000005 && s->types[i]!=1 && s->types[i]!=2) {
if(s->types[i]==0xffffffff) {
last_in_offset = s->offsets[i-1]+s->lengths[i-1];
last_out_offset = s->sectors[i-1]+s->sectorcounts[i-1];
}
chunk_count--;
i--;
offset += 36;
continue;
}
offset += 4;
ret = read_uint64(bs, offset, &s->sectors[i]);
if (ret < 0) {
@@ -247,14 +204,6 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
}
offset += 8;
if (s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
ret = read_uint64(bs, offset, &s->offsets[i]);
if (ret < 0) {
goto fail;
@@ -268,25 +217,19 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
}
offset += 8;
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
}
update_max_chunk_size(s, i, &max_compressed_size,
&max_sectors_per_chunk);
}
s->n_chunks += chunk_count;
}
if(s->lengths[i]>max_compressed_size)
max_compressed_size = s->lengths[i];
if(s->sectorcounts[i]>max_sectors_per_chunk)
max_sectors_per_chunk = s->sectorcounts[i];
}
s->n_chunks+=chunk_count;
}
}
/* initialize zlib engine */
s->compressed_chunk = g_malloc(max_compressed_size + 1);
s->uncompressed_chunk = g_malloc(512 * max_sectors_per_chunk);
if (inflateInit(&s->zstream) != Z_OK) {
s->compressed_chunk = g_malloc(max_compressed_size+1);
s->uncompressed_chunk = g_malloc(512*max_sectors_per_chunk);
if(inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
}
@@ -308,82 +251,83 @@ fail:
}
static inline int is_sector_in_chunk(BDRVDMGState* s,
uint32_t chunk_num, uint64_t sector_num)
uint32_t chunk_num,int sector_num)
{
if (chunk_num >= s->n_chunks || s->sectors[chunk_num] > sector_num ||
s->sectors[chunk_num] + s->sectorcounts[chunk_num] <= sector_num) {
return 0;
} else {
return -1;
}
if(chunk_num>=s->n_chunks || s->sectors[chunk_num]>sector_num ||
s->sectors[chunk_num]+s->sectorcounts[chunk_num]<=sector_num)
return 0;
else
return -1;
}
static inline uint32_t search_chunk(BDRVDMGState *s, uint64_t sector_num)
static inline uint32_t search_chunk(BDRVDMGState* s,int sector_num)
{
/* binary search */
uint32_t chunk1 = 0, chunk2 = s->n_chunks, chunk3;
while (chunk1 != chunk2) {
chunk3 = (chunk1 + chunk2) / 2;
if (s->sectors[chunk3] > sector_num) {
chunk2 = chunk3;
} else if (s->sectors[chunk3] + s->sectorcounts[chunk3] > sector_num) {
return chunk3;
} else {
chunk1 = chunk3;
}
uint32_t chunk1=0,chunk2=s->n_chunks,chunk3;
while(chunk1!=chunk2) {
chunk3 = (chunk1+chunk2)/2;
if(s->sectors[chunk3]>sector_num)
chunk2 = chunk3;
else if(s->sectors[chunk3]+s->sectorcounts[chunk3]>sector_num)
return chunk3;
else
chunk1 = chunk3;
}
return s->n_chunks; /* error */
}
static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
static inline int dmg_read_chunk(BlockDriverState *bs, int sector_num)
{
BDRVDMGState *s = bs->opaque;
if (!is_sector_in_chunk(s, s->current_chunk, sector_num)) {
int ret;
uint32_t chunk = search_chunk(s, sector_num);
if(!is_sector_in_chunk(s,s->current_chunk,sector_num)) {
int ret;
uint32_t chunk = search_chunk(s,sector_num);
if (chunk >= s->n_chunks) {
return -1;
}
if(chunk>=s->n_chunks)
return -1;
s->current_chunk = s->n_chunks;
switch (s->types[chunk]) {
case 0x80000005: { /* zlib compressed */
/* we need to buffer, because only the chunk as whole can be
* inflated. */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->compressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
s->current_chunk = s->n_chunks;
switch(s->types[chunk]) {
case 0x80000005: { /* zlib compressed */
int i;
s->zstream.next_in = s->compressed_chunk;
s->zstream.avail_in = s->lengths[chunk];
s->zstream.next_out = s->uncompressed_chunk;
s->zstream.avail_out = 512 * s->sectorcounts[chunk];
ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
return -1;
}
ret = inflate(&s->zstream, Z_FINISH);
if (ret != Z_STREAM_END ||
s->zstream.total_out != 512 * s->sectorcounts[chunk]) {
return -1;
}
break; }
case 1: /* copy */
ret = bdrv_pread(bs->file, s->offsets[chunk],
/* we need to buffer, because only the chunk as whole can be
* inflated. */
i=0;
do {
ret = bdrv_pread(bs->file, s->offsets[chunk] + i,
s->compressed_chunk+i, s->lengths[chunk]-i);
if(ret<0 && errno==EINTR)
ret=0;
i+=ret;
} while(ret>=0 && ret+i<s->lengths[chunk]);
if (ret != s->lengths[chunk])
return -1;
s->zstream.next_in = s->compressed_chunk;
s->zstream.avail_in = s->lengths[chunk];
s->zstream.next_out = s->uncompressed_chunk;
s->zstream.avail_out = 512*s->sectorcounts[chunk];
ret = inflateReset(&s->zstream);
if(ret != Z_OK)
return -1;
ret = inflate(&s->zstream, Z_FINISH);
if(ret != Z_STREAM_END || s->zstream.total_out != 512*s->sectorcounts[chunk])
return -1;
break; }
case 1: /* copy */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->uncompressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
break;
case 2: /* zero */
memset(s->uncompressed_chunk, 0, 512 * s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
if (ret != s->lengths[chunk])
return -1;
break;
case 2: /* zero */
memset(s->uncompressed_chunk, 0, 512*s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
}
return 0;
}
@@ -394,14 +338,12 @@ static int dmg_read(BlockDriverState *bs, int64_t sector_num,
BDRVDMGState *s = bs->opaque;
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_chunk;
if (dmg_read_chunk(bs, sector_num + i) != 0) {
return -1;
}
sector_offset_in_chunk = sector_num + i - s->sectors[s->current_chunk];
memcpy(buf + i * 512,
s->uncompressed_chunk + sector_offset_in_chunk * 512, 512);
for(i=0;i<nb_sectors;i++) {
uint32_t sector_offset_in_chunk;
if(dmg_read_chunk(bs, sector_num+i) != 0)
return -1;
sector_offset_in_chunk = sector_num+i-s->sectors[s->current_chunk];
memcpy(buf+i*512,s->uncompressed_chunk+sector_offset_in_chunk*512,512);
}
return 0;
}
@@ -433,12 +375,12 @@ static void dmg_close(BlockDriverState *bs)
}
static BlockDriver bdrv_dmg = {
.format_name = "dmg",
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_close = dmg_close,
.format_name = "dmg",
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_close = dmg_close,
};
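The search_chunk() lookup shown earlier walks the sorted table of chunk start sectors with a binary search. A minimal standalone version of that lookup, written against hypothetical arrays rather than BDRVDMGState, and returning n_chunks when no chunk covers the sector, as the driver does (the conventional mid+1 step is used here so the loop is guaranteed to terminate):
#include <stdint.h>

/* Returns the index i with starts[i] <= sector < starts[i] + counts[i],
 * or n_chunks if no chunk covers the sector. Assumes starts[] is sorted
 * and chunks do not overlap, as in the dmg chunk table. */
static uint32_t find_chunk(const uint64_t *starts, const uint64_t *counts,
                           uint32_t n_chunks, uint64_t sector)
{
    uint32_t lo = 0, hi = n_chunks;

    while (lo != hi) {
        uint32_t mid = lo + (hi - lo) / 2;

        if (starts[mid] > sector) {
            hi = mid;                    /* chunk starts after the sector */
        } else if (starts[mid] + counts[mid] > sector) {
            return mid;                  /* sector falls inside this chunk */
        } else {
            lo = mid + 1;                /* sector lies beyond this chunk */
        }
    }
    return n_chunks;                     /* not found */
}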
static void bdrv_dmg_init(void)


@@ -3,26 +3,43 @@
*
* Copyright (C) 2012 Bharata B Rao <bharata@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
* Pipe handling mechanism in AIO implementation is derived from
* block/rbd.c. Hence,
*
* Copyright (C) 2010-2011 Christian Brunner <chb@muc.de>,
* Josh Durgin <josh.durgin@dreamhost.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include <glusterfs/api/glfs.h>
#include "block/block_int.h"
#include "qemu/sockets.h"
#include "qemu/uri.h"
typedef struct GlusterAIOCB {
BlockDriverAIOCB common;
int64_t size;
int ret;
bool *finished;
QEMUBH *bh;
Coroutine *coroutine;
} GlusterAIOCB;
typedef struct BDRVGlusterState {
struct glfs *glfs;
int fds[2];
struct glfs_fd *fd;
int qemu_aio_count;
int event_reader_pos;
GlusterAIOCB *event_acb;
} BDRVGlusterState;
#define GLUSTER_FD_READ 0
#define GLUSTER_FD_WRITE 1
typedef struct GlusterConf {
char *server;
int port;
@@ -33,13 +50,11 @@ typedef struct GlusterConf {
static void qemu_gluster_gconf_free(GlusterConf *gconf)
{
if (gconf) {
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
static int parse_volume_options(GlusterConf *gconf, char *path)
@@ -80,7 +95,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* 'server' specifies the server where the volume file specification for
* the given volume resides. This can be either hostname, ipv4 address
* or ipv6 address. ipv6 address needs to be within square brackets [ ].
* If transport type is 'unix', then 'server' field should not be specified.
* If transport type is 'unix', then 'server' field should not be specifed.
* The 'socket' field needs to be populated with the path to unix domain
* socket.
*
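As a concrete illustration of this scheme (the volume and image names here are hypothetical): gluster://1.2.3.4/testvol/a.img relies on the default tcp transport and port, gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img spells out an ipv6 server and an explicit port, and gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket selects the unix transport, where no server is given and the socket path carries the connection information.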
@@ -117,7 +132,7 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
}
/* transport */
if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
if (!strcmp(uri->scheme, "gluster")) {
gconf->transport = g_strdup("tcp");
} else if (!strcmp(uri->scheme, "gluster+tcp")) {
gconf->transport = g_strdup("tcp");
@@ -153,7 +168,7 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
}
gconf->server = g_strdup(qp->p[0].value);
} else {
gconf->server = g_strdup(uri->server ? uri->server : "localhost");
gconf->server = g_strdup(uri->server);
gconf->port = uri->port;
}
@@ -165,8 +180,7 @@ out:
return ret;
}
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
Error **errp)
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename)
{
struct glfs *glfs = NULL;
int ret;
@@ -174,8 +188,8 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = qemu_gluster_parseuri(gconf, filename);
if (ret < 0) {
error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
error_report("Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
errno = -ret;
goto out;
}
@@ -202,16 +216,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = glfs_init(glfs);
if (ret) {
error_setg_errno(errp, errno,
"Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s", gconf->server,
gconf->port, gconf->volname, gconf->image,
gconf->transport);
/* glfs_init sometimes doesn't set errno although docs suggest that */
if (errno == 0)
errno = EINVAL;
error_report("Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s", gconf->server, gconf->port,
gconf->volname, gconf->image, gconf->transport);
goto out;
}
return glfs;
@@ -225,32 +232,54 @@ out:
return NULL;
}
static void qemu_gluster_complete_aio(void *opaque)
static void qemu_gluster_complete_aio(GlusterAIOCB *acb, BDRVGlusterState *s)
{
GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
int ret;
bool *finished = acb->finished;
BlockDriverCompletionFunc *cb = acb->common.cb;
void *opaque = acb->common.opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
qemu_coroutine_enter(acb->coroutine, NULL);
}
/*
* AIO callback routine called from GlusterFS thread.
*/
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
if (!ret || ret == acb->size) {
acb->ret = 0; /* Success */
} else if (ret < 0) {
acb->ret = ret; /* Read/Write failed */
if (!acb->ret || acb->ret == acb->size) {
ret = 0; /* Success */
} else if (acb->ret < 0) {
ret = acb->ret; /* Read/Write failed */
} else {
acb->ret = -EIO; /* Partial read/write - fail it */
ret = -EIO; /* Partial read/write - fail it */
}
acb->bh = qemu_bh_new(qemu_gluster_complete_aio, acb);
qemu_bh_schedule(acb->bh);
s->qemu_aio_count--;
qemu_aio_release(acb);
cb(opaque, ret);
if (finished) {
*finished = true;
}
}
static void qemu_gluster_aio_event_reader(void *opaque)
{
BDRVGlusterState *s = opaque;
ssize_t ret;
do {
char *p = (char *)&s->event_acb;
ret = read(s->fds[GLUSTER_FD_READ], p + s->event_reader_pos,
sizeof(s->event_acb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_acb)) {
s->event_reader_pos = 0;
qemu_gluster_complete_aio(s->event_acb, s);
}
}
} while (ret < 0 && errno == EINTR);
}
static int qemu_gluster_aio_flush_cb(void *opaque)
{
BDRVGlusterState *s = opaque;
return (s->qemu_aio_count > 0);
}
/* TODO Convert to fine grained options */
@@ -267,57 +296,60 @@ static QemuOptsList runtime_opts = {
},
};
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
{
assert(open_flags != NULL);
*open_flags |= O_BINARY;
if (bdrv_flags & BDRV_O_RDWR) {
*open_flags |= O_RDWR;
} else {
*open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
*open_flags |= O_DIRECT;
}
}
static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
int bdrv_flags, Error **errp)
int bdrv_flags)
{
BDRVGlusterState *s = bs->opaque;
int open_flags = 0;
int open_flags = O_BINARY;
int ret = 0;
GlusterConf *gconf = g_malloc0(sizeof(GlusterConf));
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto out;
}
filename = qemu_opt_get(opts, "filename");
s->glfs = qemu_gluster_init(gconf, filename, errp);
s->glfs = qemu_gluster_init(gconf, filename);
if (!s->glfs) {
ret = -errno;
goto out;
}
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
if (bdrv_flags & BDRV_O_RDWR) {
open_flags |= O_RDWR;
} else {
open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
open_flags |= O_DIRECT;
}
s->fd = glfs_open(s->glfs, gconf->image, open_flags);
if (!s->fd) {
ret = -errno;
goto out;
}
ret = qemu_pipe(s->fds);
if (ret < 0) {
ret = -errno;
goto out;
}
fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
qemu_gluster_aio_event_reader, NULL, qemu_gluster_aio_flush_cb, s);
out:
qemu_opts_del(opts);
qemu_gluster_gconf_free(gconf);
@@ -333,159 +365,16 @@ out:
return ret;
}
typedef struct BDRVGlusterReopenState {
struct glfs *glfs;
struct glfs_fd *fd;
} BDRVGlusterReopenState;
static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
int ret = 0;
BDRVGlusterReopenState *reop_s;
GlusterConf *gconf = NULL;
int open_flags = 0;
assert(state != NULL);
assert(state->bs != NULL);
state->opaque = g_malloc0(sizeof(BDRVGlusterReopenState));
reop_s = state->opaque;
qemu_gluster_parse_flags(state->flags, &open_flags);
gconf = g_malloc0(sizeof(GlusterConf));
reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
if (reop_s->glfs == NULL) {
ret = -errno;
goto exit;
}
reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
if (reop_s->fd == NULL) {
/* reops->glfs will be cleaned up in _abort */
ret = -errno;
goto exit;
}
exit:
/* state->opaque will be freed in either the _abort or _commit */
qemu_gluster_gconf_free(gconf);
return ret;
}
static void qemu_gluster_reopen_commit(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
BDRVGlusterState *s = state->bs->opaque;
/* close the old */
if (s->fd) {
glfs_close(s->fd);
}
if (s->glfs) {
glfs_fini(s->glfs);
}
/* use the newly opened image / connection */
s->fd = reop_s->fd;
s->glfs = reop_s->glfs;
g_free(state->opaque);
state->opaque = NULL;
return;
}
static void qemu_gluster_reopen_abort(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
if (reop_s == NULL) {
return;
}
if (reop_s->fd) {
glfs_close(reop_s->fd);
}
if (reop_s->glfs) {
glfs_fini(reop_s->glfs);
}
g_free(state->opaque);
state->opaque = NULL;
return;
}
#ifdef CONFIG_GLUSTERFS_ZEROFILL
static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
off_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static inline bool gluster_supports_zerofill(void)
{
return 1;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return glfs_zerofill(fd, offset, size);
}
#else
static inline bool gluster_supports_zerofill(void)
{
return 0;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return 0;
}
#endif
static int qemu_gluster_create(const char *filename,
QEMUOptionParameter *options, Error **errp)
QEMUOptionParameter *options)
{
struct glfs *glfs;
struct glfs_fd *fd;
int ret = 0;
int prealloc = 0;
int64_t total_size = 0;
GlusterConf *gconf = g_malloc0(sizeof(GlusterConf));
glfs = qemu_gluster_init(gconf, filename, errp);
glfs = qemu_gluster_init(gconf, filename);
if (!glfs) {
ret = -errno;
goto out;
@@ -494,19 +383,6 @@ static int qemu_gluster_create(const char *filename,
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
total_size = options->value.n / BDRV_SECTOR_SIZE;
} else if (!strcmp(options->name, BLOCK_OPT_PREALLOC)) {
if (!options->value.s || !strcmp(options->value.s, "off")) {
prealloc = 0;
} else if (!strcmp(options->value.s, "full") &&
gluster_supports_zerofill()) {
prealloc = 1;
} else {
error_setg(errp, "Invalid preallocation mode: '%s'"
" or GlusterFS doesn't support zerofill API",
options->value.s);
ret = -EINVAL;
goto out;
}
}
options++;
}
@@ -516,15 +392,9 @@ static int qemu_gluster_create(const char *filename,
if (!fd) {
ret = -errno;
} else {
if (!glfs_ftruncate(fd, total_size * BDRV_SECTOR_SIZE)) {
if (prealloc && qemu_gluster_zerofill(fd, 0,
total_size * BDRV_SECTOR_SIZE)) {
ret = -errno;
}
} else {
if (glfs_ftruncate(fd, total_size * BDRV_SECTOR_SIZE) != 0) {
ret = -errno;
}
if (glfs_close(fd) != 0) {
ret = -errno;
}
@@ -537,18 +407,72 @@ out:
return ret;
}
static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
static void qemu_gluster_aio_cancel(BlockDriverAIOCB *blockacb)
{
GlusterAIOCB *acb = (GlusterAIOCB *)blockacb;
bool finished = false;
acb->finished = &finished;
while (!finished) {
qemu_aio_wait();
}
}
static const AIOCBInfo gluster_aiocb_info = {
.aiocb_size = sizeof(GlusterAIOCB),
.cancel = qemu_gluster_aio_cancel,
};
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
BlockDriverState *bs = acb->common.bs;
BDRVGlusterState *s = bs->opaque;
int retval;
acb->ret = ret;
retval = qemu_write_full(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
if (retval != sizeof(acb)) {
/*
* Gluster AIO callback thread failed to notify the waiting
* QEMU thread about IO completion.
*
* Complete this IO request and make the disk inaccessible for
* subsequent reads and writes.
*/
error_report("Gluster failed to notify QEMU about IO completion");
qemu_mutex_lock_iothread(); /* We are in gluster thread context */
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
s->qemu_aio_count--;
close(s->fds[GLUSTER_FD_READ]);
close(s->fds[GLUSTER_FD_WRITE]);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
NULL);
bs->drv = NULL; /* Make the disk inaccessible */
qemu_mutex_unlock_iothread();
}
}
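Both halves of the notification path reintroduced by this change are visible in these hunks: gluster_finish_aiocb() runs in a GlusterFS callback thread and writes the completed AIOCB pointer into a pipe, while qemu_gluster_aio_event_reader() reassembles that pointer from the QEMU event loop, tolerating partial reads. A stripped-down sketch of the underlying POSIX pattern, using a plain pipe without QEMU's aio handler plumbing (all names here are hypothetical):
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

enum { FD_READ = 0, FD_WRITE = 1 };

/* Completion side (library thread): hand one pointer to the event loop. */
static int notify_completion(int fds[2], void *acb)
{
    ssize_t ret = write(fds[FD_WRITE], &acb, sizeof(acb));
    return ret == (ssize_t)sizeof(acb) ? 0 : -1;  /* short write: notification lost */
}

/* Event-loop side: reassemble one pointer, tolerating partial reads. */
static void *collect_completion(int fds[2])
{
    void *acb = NULL;
    char *p = (char *)&acb;
    size_t pos = 0;

    while (pos < sizeof(acb)) {
        ssize_t ret = read(fds[FD_READ], p + pos, sizeof(acb) - pos);

        if (ret > 0) {
            pos += ret;
        } else if (ret < 0 && errno == EINTR) {
            continue;                    /* interrupted, retry */
        } else {
            return NULL;                 /* EOF, would-block, or real error */
        }
    }
    return acb;
}

int main(void)
{
    int fds[2];
    int dummy = 42;

    if (pipe(fds) < 0) {
        return 1;
    }
    fcntl(fds[FD_READ], F_SETFL, O_NONBLOCK);    /* as the driver does */
    notify_completion(fds, &dummy);
    printf("completed acb %p\n", collect_completion(fds));
    return 0;
}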
static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int write)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
size_t size;
off_t offset;
offset = sector_num * BDRV_SECTOR_SIZE;
size = nb_sectors * BDRV_SECTOR_SIZE;
s->qemu_aio_count++;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
if (write) {
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
@@ -559,96 +483,55 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
}
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
}
static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
static BlockDriverAIOCB *qemu_gluster_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return qemu_gluster_aio_rw(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
}
static BlockDriverAIOCB *qemu_gluster_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return qemu_gluster_aio_rw(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
}
static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
int ret;
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
ret = glfs_ftruncate(s->fd, offset);
if (ret < 0) {
return -errno;
}
return 0;
}
static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
}
static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
}
static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
s->qemu_aio_count++;
ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
}
#ifdef CONFIG_GLUSTERFS_DISCARD
static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
#endif
static int64_t qemu_gluster_getlength(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
@@ -680,6 +563,10 @@ static void qemu_gluster_close(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
close(s->fds[GLUSTER_FD_READ]);
close(s->fds[GLUSTER_FD_WRITE]);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL, NULL);
if (s->fd) {
glfs_close(s->fd);
s->fd = NULL;
@@ -699,11 +586,6 @@ static QEMUOptionParameter qemu_gluster_create_options[] = {
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_PREALLOC,
.type = OPT_STRING,
.help = "Preallocation mode (allowed values: off, full)"
},
{ NULL }
};
@@ -711,26 +593,15 @@ static BlockDriver bdrv_gluster = {
.format_name = "gluster",
.protocol_name = "gluster",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_options = qemu_gluster_create_options,
};
@@ -738,26 +609,15 @@ static BlockDriver bdrv_gluster_tcp = {
.format_name = "gluster",
.protocol_name = "gluster+tcp",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_options = qemu_gluster_create_options,
};
@@ -765,26 +625,15 @@ static BlockDriver bdrv_gluster_unix = {
.format_name = "gluster",
.protocol_name = "gluster+unix",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_options = qemu_gluster_create_options,
};
@@ -792,26 +641,15 @@ static BlockDriver bdrv_gluster_rdma = {
.format_name = "gluster",
.protocol_name = "gluster+rdma",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_options = qemu_gluster_create_options,
};

File diff suppressed because it is too large.


@@ -39,6 +39,7 @@ struct qemu_laiocb {
struct qemu_laio_state {
io_context_t ctx;
EventNotifier e;
int count;
};
static inline ssize_t io_event_ret(struct io_event *ev)
@@ -54,6 +55,8 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
{
int ret;
s->count--;
ret = laiocb->ret;
if (ret != -ECANCELED) {
if (ret == laiocb->nbytes) {
@@ -98,6 +101,13 @@ static void qemu_laio_completion_cb(EventNotifier *e)
}
}
static int qemu_laio_flush_cb(EventNotifier *e)
{
struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
return (s->count > 0) ? 1 : 0;
}
static void laio_cancel(BlockDriverAIOCB *blockacb)
{
struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
@@ -167,11 +177,14 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
goto out_free_aiocb;
}
io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
s->count++;
if (io_submit(s->ctx, 1, &iocbs) < 0)
goto out_free_aiocb;
goto out_dec_count;
return &laiocb->common;
out_dec_count:
s->count--;
out_free_aiocb:
qemu_aio_release(laiocb);
return NULL;
@@ -190,7 +203,8 @@ void *laio_init(void)
goto out_close_efd;
}
qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
qemu_laio_flush_cb);
return s;


@@ -31,8 +31,7 @@ typedef struct MirrorBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *target;
BlockDriverState *base;
bool is_none_mode;
MirrorSyncMode mode;
BlockdevOnError on_source_error, on_target_error;
bool synced;
bool should_complete;
@@ -40,7 +39,6 @@ typedef struct MirrorBlockJob {
int64_t granularity;
size_t buf_size;
unsigned long *cow_bitmap;
BdrvDirtyBitmap *dirty_bitmap;
HBitmapIter hbi;
uint8_t *buf;
QSIMPLEQ_HEAD(, MirrorBuffer) buf_free;
@@ -96,16 +94,8 @@ static void mirror_iteration_done(MirrorOp *op, int ret)
bitmap_set(s->cow_bitmap, chunk_num, nb_chunks);
}
qemu_iovec_destroy(&op->qiov);
g_slice_free(MirrorOp, op);
/* Enter coroutine when it is not sleeping. The coroutine sleeps to
* rate-limit itself. The coroutine will eventually resume since there is
* a sleep timeout so don't wake it early.
*/
if (s->common.busy) {
qemu_coroutine_enter(s->common.co, NULL);
}
qemu_coroutine_enter(s->common.co, NULL);
}
static void mirror_write_complete(void *opaque, int ret)
@@ -146,20 +136,18 @@ static void mirror_read_complete(void *opaque, int ret)
mirror_write_complete, op);
}
static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
{
BlockDriverState *source = s->common.bs;
int nb_sectors, sectors_per_chunk, nb_chunks;
int64_t end, sector_num, next_chunk, next_sector, hbitmap_next_sector;
uint64_t delay_ns;
MirrorOp *op;
s->sector_num = hbitmap_iter_next(&s->hbi);
if (s->sector_num < 0) {
bdrv_dirty_iter_init(source, s->dirty_bitmap, &s->hbi);
bdrv_dirty_iter_init(source, &s->hbi);
s->sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s,
bdrv_get_dirty_count(source, s->dirty_bitmap));
trace_mirror_restart_iter(s, bdrv_get_dirty_count(source));
assert(s->sector_num >= 0);
}
@@ -195,7 +183,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
do {
int added_sectors, added_chunks;
if (!bdrv_get_dirty(source, s->dirty_bitmap, next_sector) ||
if (!bdrv_get_dirty(source, next_sector) ||
test_bit(next_chunk, s->in_flight_bitmap)) {
assert(nb_sectors > 0);
break;
@@ -239,12 +227,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
nb_chunks += added_chunks;
next_sector += added_sectors;
next_chunk += added_chunks;
if (!s->synced && s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, added_sectors);
} else {
delay_ns = 0;
}
} while (delay_ns == 0 && next_sector < end);
} while (next_sector < end);
/* Allocate a MirrorOp that is used as an AIO callback. */
op = g_slice_new(MirrorOp);
@@ -266,8 +249,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
/* Advance the HBitmapIter in parallel, so that we do not examine
* the same sector twice.
*/
if (next_sector > hbitmap_next_sector
&& bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
if (next_sector > hbitmap_next_sector && bdrv_get_dirty(source, next_sector)) {
hbitmap_next_sector = hbitmap_iter_next(&s->hbi);
}
@@ -281,7 +263,6 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
trace_mirror_one_iteration(s, sector_num, nb_sectors);
bdrv_aio_readv(source, sector_num, &op->qiov, nb_sectors,
mirror_read_complete, op);
return delay_ns;
}
static void mirror_free_init(MirrorBlockJob *s)
@@ -325,11 +306,11 @@ static void coroutine_fn mirror_run(void *opaque)
s->common.len = bdrv_getlength(bs);
if (s->common.len <= 0) {
ret = s->common.len;
goto immediate_exit;
block_job_completed(&s->common, s->common.len);
return;
}
length = DIV_ROUND_UP(s->common.len, s->granularity);
length = (bdrv_getlength(bs) + s->granularity - 1) / s->granularity;
s->in_flight_bitmap = bitmap_new(length);
/* If we have no backing file yet in the destination, we cannot let
@@ -339,10 +320,7 @@ static void coroutine_fn mirror_run(void *opaque)
bdrv_get_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
if (backing_filename[0] && !s->target->backing_hd) {
ret = bdrv_get_info(s->target, &bdi);
if (ret < 0) {
goto immediate_exit;
}
bdrv_get_info(s->target, &bdi);
if (s->granularity < bdi.cluster_size) {
s->buf_size = MAX(s->buf_size, bdi.cluster_size);
s->cow_bitmap = bitmap_new(length);
@@ -354,13 +332,14 @@ static void coroutine_fn mirror_run(void *opaque)
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
mirror_free_init(s);
if (!s->is_none_mode) {
if (s->mode != MIRROR_SYNC_MODE_NONE) {
/* First part, loop on the sectors and initialize the dirty bitmap. */
BlockDriverState *base = s->base;
BlockDriverState *base;
base = s->mode == MIRROR_SYNC_MODE_FULL ? NULL : bs->backing_hd;
for (sector_num = 0; sector_num < end; ) {
int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
ret = bdrv_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
ret = bdrv_co_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
if (ret < 0) {
goto immediate_exit;
@@ -376,10 +355,10 @@ static void coroutine_fn mirror_run(void *opaque)
}
}
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
bdrv_dirty_iter_init(bs, &s->hbi);
last_pause_ns = qemu_get_clock_ns(rt_clock);
for (;;) {
uint64_t delay_ns = 0;
uint64_t delay_ns;
int64_t cnt;
bool should_complete;
@@ -388,14 +367,14 @@ static void coroutine_fn mirror_run(void *opaque)
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that qemu_aio_flush() returns.
* We do so every SLICE_TIME nanoseconds, or when there is an error,
* or when the source is clean, whichever comes first.
*/
if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - last_pause_ns < SLICE_TIME &&
if (qemu_get_clock_ns(rt_clock) - last_pause_ns < SLICE_TIME &&
s->common.iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
if (s->in_flight == MAX_IN_FLIGHT || s->buf_free_count == 0 ||
(cnt == 0 && s->in_flight > 0)) {
@@ -403,10 +382,8 @@ static void coroutine_fn mirror_run(void *opaque)
qemu_coroutine_yield();
continue;
} else if (cnt != 0) {
delay_ns = mirror_iteration(s);
if (delay_ns == 0) {
continue;
}
mirror_iteration(s);
continue;
}
}
@@ -432,7 +409,7 @@ static void coroutine_fn mirror_run(void *opaque)
should_complete = s->should_complete ||
block_job_is_cancelled(&s->common);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
}
}
@@ -447,21 +424,28 @@ static void coroutine_fn mirror_run(void *opaque)
*/
trace_mirror_before_drain(s, cnt);
bdrv_drain_all();
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
}
ret = 0;
trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
trace_mirror_before_sleep(s, cnt, s->synced);
if (!s->synced) {
/* Publish progress */
s->common.offset = (end - cnt) * BDRV_SECTOR_SIZE;
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, sectors_per_chunk);
} else {
delay_ns = 0;
}
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
} else if (!should_complete) {
delay_ns = (s->in_flight == 0 && cnt == 0 ? SLICE_TIME : 0);
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
} else if (cnt == 0) {
/* The two disks are in sync. Exit and report successful
* completion.
@@ -470,7 +454,7 @@ static void coroutine_fn mirror_run(void *opaque)
s->common.cancelled = false;
break;
}
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
last_pause_ns = qemu_get_clock_ns(rt_clock);
}
immediate_exit:
@@ -487,22 +471,16 @@ immediate_exit:
qemu_vfree(s->buf);
g_free(s->cow_bitmap);
g_free(s->in_flight_bitmap);
bdrv_release_dirty_bitmap(bs, s->dirty_bitmap);
bdrv_set_dirty_tracking(bs, 0);
bdrv_iostatus_disable(s->target);
if (s->should_complete && ret == 0) {
if (bdrv_get_flags(s->target) != bdrv_get_flags(s->common.bs)) {
bdrv_reopen(s->target, bdrv_get_flags(s->common.bs), NULL);
}
bdrv_swap(s->target, s->common.bs);
if (s->common.driver->job_type == BLOCK_JOB_TYPE_COMMIT) {
/* drop the bs loop chain formed by the swap: break the loop then
* trigger the unref from the top one */
BlockDriverState *p = s->base->backing_hd;
bdrv_set_backing_hd(s->base, NULL);
bdrv_unref(p);
}
}
bdrv_unref(s->target);
bdrv_close(s->target);
bdrv_delete(s->target);
block_job_completed(&s->common, ret);
}
@@ -527,12 +505,14 @@ static void mirror_iostatus_reset(BlockJob *job)
static void mirror_complete(BlockJob *job, Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
Error *local_err = NULL;
int ret;
ret = bdrv_open_backing_file(s->target, NULL, &local_err);
ret = bdrv_open_backing_file(s->target, NULL);
if (ret < 0) {
error_propagate(errp, local_err);
char backing_filename[PATH_MAX];
bdrv_get_full_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
error_set(errp, QERR_OPEN_FILE_FAILED, backing_filename);
return;
}
if (!s->synced) {
@@ -544,32 +524,20 @@ static void mirror_complete(BlockJob *job, Error **errp)
block_job_resume(job);
}
static const BlockJobDriver mirror_job_driver = {
static BlockJobType mirror_job_type = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_MIRROR,
.job_type = "mirror",
.set_speed = mirror_set_speed,
.iostatus_reset= mirror_iostatus_reset,
.complete = mirror_complete,
};
static const BlockJobDriver commit_active_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.set_speed = mirror_set_speed,
.iostatus_reset
= mirror_iostatus_reset,
.complete = mirror_complete,
};
static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity,
int64_t buf_size,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp,
const BlockJobDriver *driver,
bool is_none_mode, BlockDriverState *base)
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
MirrorBlockJob *s;
@@ -594,8 +562,7 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
return;
}
s = block_job_create(driver, bs, speed, cb, opaque, errp);
s = block_job_create(&mirror_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
@@ -603,15 +570,11 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
s->on_source_error = on_source_error;
s->on_target_error = on_target_error;
s->target = target;
s->is_none_mode = is_none_mode;
s->base = base;
s->mode = mode;
s->granularity = granularity;
s->buf_size = MAX(buf_size, granularity);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, errp);
if (!s->dirty_bitmap) {
return;
}
bdrv_set_dirty_tracking(bs, granularity);
bdrv_set_enable_write_cache(s->target, true);
bdrv_set_on_error(s->target, on_target_error, on_target_error);
bdrv_iostatus_enable(s->target);
@@ -619,80 +582,3 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
trace_mirror_start(bs, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
bool is_none_mode;
BlockDriverState *base;
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? bs->backing_hd : NULL;
mirror_start_job(bs, target, speed, granularity, buf_size,
on_source_error, on_target_error, cb, opaque, errp,
&mirror_job_driver, is_none_mode, base);
}
void commit_active_start(BlockDriverState *bs, BlockDriverState *base,
int64_t speed,
BlockdevOnError on_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
int64_t length, base_length;
int orig_base_flags;
int ret;
Error *local_err = NULL;
orig_base_flags = bdrv_get_flags(base);
if (bdrv_reopen(base, bs->open_flags, errp)) {
return;
}
length = bdrv_getlength(bs);
if (length < 0) {
error_setg_errno(errp, -length,
"Unable to determine length of %s", bs->filename);
goto error_restore_flags;
}
base_length = bdrv_getlength(base);
if (base_length < 0) {
error_setg_errno(errp, -base_length,
"Unable to determine length of %s", base->filename);
goto error_restore_flags;
}
if (length > base_length) {
ret = bdrv_truncate(base, length);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Top image %s is larger than base image %s, and "
"resize of base image failed",
bs->filename, base->filename);
goto error_restore_flags;
}
}
bdrv_ref(base);
mirror_start_job(bs, base, speed, 0, 0,
on_error, on_error, cb, opaque, &local_err,
&commit_active_job_driver, false, base);
if (local_err) {
error_propagate(errp, local_err);
goto error_restore_flags;
}
return;
error_restore_flags:
/* ignore error and errp for bdrv_reopen, because we want to propagate
* the original error */
bdrv_reopen(base, orig_base_flags, NULL);
return;
}


@@ -1,388 +0,0 @@
/*
* QEMU Block driver for NBD
*
* Copyright (C) 2008 Bull S.A.S.
* Author: Laurent Vivier <Laurent.Vivier@bull.net>
*
* Some parts:
* Copyright (C) 2007 Anthony Liguori <anthony@codemonkey.ws>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "nbd-client.h"
#include "qemu/sockets.h"
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
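The two macros above make the 64-bit wire handle reversible: XOR-ing the request index with the BlockDriverState pointer tags each handle with its owner, and XOR-ing again recovers the index. A tiny standalone check of that round-trip, outside the driver (the key here is just a stand-in for the bs pointer):
#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t key = (uint64_t)(intptr_t)&key;   /* stand-in for the bs pointer */

    for (uint64_t index = 0; index < 16; index++) {
        uint64_t handle = index ^ key;         /* as INDEX_TO_HANDLE does */
        assert((handle ^ key) == index);       /* HANDLE_TO_INDEX round-trips */
    }
    return 0;
}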
static void nbd_recv_coroutines_enter_all(NbdClientSession *s)
{
int i;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_teardown_connection(NbdClientSession *client)
{
/* finish any pending coroutines */
shutdown(client->sock, 2);
nbd_recv_coroutines_enter_all(client);
qemu_aio_set_fd_handler(client->sock, NULL, NULL, NULL);
closesocket(client->sock);
client->sock = -1;
}
static void nbd_reply_ready(void *opaque)
{
NbdClientSession *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
nbd_teardown_connection(s);
}
static void nbd_restart_write(void *opaque)
{
NbdClientSession *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(NbdClientSession *s,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->sock, request);
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(NbdClientSession *s,
struct nbd_request *request, struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_start(NbdClientSession *s,
struct nbd_request *request)
{
int i;
/* Poor man semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_coroutine_end(NbdClientSession *s,
struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_co_readv_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_READ };
struct nbd_reply reply;
ssize_t ret;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, qiov, offset);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
static int nbd_co_writev_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_WRITE };
struct nbd_reply reply;
ssize_t ret;
if (!bdrv_enable_write_cache(client->bs) &&
(client->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
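A quick check of that limit: 2040 sectors x 512 bytes = 1,044,480 bytes, which is just under 1 MiB (1,048,576 bytes) and an exact multiple of 4 KiB (255 x 4096), so the split requests in the readv/writev loops below stay 4K-aligned.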
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_flush(NbdClientSession *client)
{
struct nbd_request request = { .type = NBD_CMD_FLUSH };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors)
{
struct nbd_request request = { .type = NBD_CMD_TRIM };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
void nbd_client_session_close(NbdClientSession *client)
{
struct nbd_request request = {
.type = NBD_CMD_DISC,
.from = 0,
.len = 0
};
if (!client->bs) {
return;
}
if (client->sock == -1) {
return;
}
nbd_send_request(client->sock, &request);
nbd_teardown_connection(client);
client->bs = NULL;
}
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export)
{
int ret;
/* NBD handshake */
logout("session init %s\n", export);
qemu_set_block(sock);
ret = nbd_receive_negotiate(sock, export,
&client->nbdflags, &client->size,
&client->blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
qemu_co_mutex_init(&client->send_mutex);
qemu_co_mutex_init(&client->free_sema);
client->bs = bs;
client->sock = sock;
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qemu_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, client);
logout("Established connection with NBD server\n");
return 0;
}


@@ -1,50 +0,0 @@
#ifndef NBD_CLIENT_H
#define NBD_CLIENT_H
#include "qemu-common.h"
#include "block/nbd.h"
#include "block/block_int.h"
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
typedef struct NbdClientSession {
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
bool is_unix;
BlockDriverState *bs;
} NbdClientSession;
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export_name);
void nbd_client_session_close(NbdClientSession *client);
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors);
int nbd_client_session_co_flush(NbdClientSession *client);
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
#endif /* NBD_CLIENT_H */


@@ -26,7 +26,8 @@
* THE SOFTWARE.
*/
#include "block/nbd-client.h"
#include "qemu-common.h"
#include "block/nbd.h"
#include "qemu/uri.h"
#include "block/block_int.h"
#include "qemu/module.h"
@@ -39,9 +40,37 @@
#define EN_OPTSTR ":exportname="
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
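(Annotation, not part of the diff.) These two macros are inverses: the handle sent to the server is the request slot index XORed with the BlockDriverState pointer, and the same XOR recovers the slot when the reply comes back. A standalone sketch of the round trip, with a dummy object standing in for the BlockDriverState:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index)  ((index)  ^ ((uint64_t)(intptr_t)bs))

int main(void)
{
    int dummy;                              /* stand-in for a BlockDriverState */
    void *bs = &dummy;
    for (uint64_t i = 0; i < 16; i++) {     /* MAX_NBD_REQUESTS slots          */
        uint64_t handle = INDEX_TO_HANDLE(bs, i);
        assert(HANDLE_TO_INDEX(bs, handle) == i);
    }
    printf("all 16 handles decode back to their slot index\n");
    return 0;
}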
typedef struct BDRVNBDState {
NbdClientSession client;
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
bool is_unix;
QemuOpts *socket_opts;
char *export_name; /* An NBD server may export several devices */
} BDRVNBDState;
static int nbd_parse_uri(const char *filename, QDict *options)
@@ -175,7 +204,7 @@ static void nbd_parse_filename(const char *filename, QDict *options,
InetSocketAddress *addr = NULL;
addr = inet_parse(host_spec, errp);
if (!addr) {
if (error_is_set(errp)) {
goto out;
}
@@ -188,49 +217,204 @@ out:
g_free(file);
}
static void nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
static int nbd_config(BDRVNBDState *s, QDict *options)
{
Error *local_err = NULL;
if (qdict_haskey(options, "path") == qdict_haskey(options, "host")) {
if (qdict_haskey(options, "path")) {
error_setg(errp, "path and host may not be used at the same time.");
} else {
error_setg(errp, "one of path and host must be specified.");
if (qdict_haskey(options, "path")) {
if (qdict_haskey(options, "host")) {
qerror_report(ERROR_CLASS_GENERIC_ERROR, "path and host may not "
"be used at the same time.");
return -EINVAL;
}
return;
s->is_unix = true;
} else if (qdict_haskey(options, "host")) {
s->is_unix = false;
} else {
return -EINVAL;
}
s->client.is_unix = qdict_haskey(options, "path");
s->socket_opts = qemu_opts_create(&socket_optslist, NULL, 0,
&error_abort);
s->socket_opts = qemu_opts_create_nofail(&socket_optslist);
qemu_opts_absorb_qdict(s->socket_opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
return -EINVAL;
}
if (!qemu_opt_get(s->socket_opts, "port")) {
qemu_opt_set_number(s->socket_opts, "port", NBD_DEFAULT_PORT);
}
*export = g_strdup(qdict_get_try_str(options, "export"));
if (*export) {
s->export_name = g_strdup(qdict_get_try_str(options, "export"));
if (s->export_name) {
qdict_del(options, "export");
}
return 0;
}
static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
{
int i;
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static int nbd_have_request(void *opaque)
{
BDRVNBDState *s = opaque;
return s->in_flight > 0;
}
static void nbd_reply_ready(void *opaque)
{
BDRVNBDState *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
static void nbd_restart_write(void *opaque)
{
BDRVNBDState *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
nbd_have_request, s);
if (qiov) {
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->sock, request);
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_end(BDRVNBDState *s, struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_establish_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
int sock;
int ret;
off_t size;
size_t blocksize;
if (s->client.is_unix) {
sock = unix_connect_opts(s->socket_opts, errp, NULL, NULL);
if (s->is_unix) {
sock = unix_socket_outgoing(qemu_opt_get(s->socket_opts, "path"));
} else {
sock = inet_connect_opts(s->socket_opts, errp, NULL, NULL);
sock = tcp_socket_outgoing_opts(s->socket_opts);
if (sock >= 0) {
socket_set_nodelay(sock);
}
@@ -242,85 +426,226 @@ static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
return -errno;
}
return sock;
/* NBD handshake */
ret = nbd_receive_negotiate(sock, s->export_name, &s->nbdflags, &size,
&blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qemu_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->sock = sock;
s->size = size;
s->blocksize = blocksize;
logout("Established connection with NBD server\n");
return 0;
}
static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static void nbd_teardown_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
char *export = NULL;
int result, sock;
Error *local_err = NULL;
struct nbd_request request;
request.type = NBD_CMD_DISC;
request.from = 0;
request.len = 0;
nbd_send_request(s->sock, &request);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
closesocket(s->sock);
}
static int nbd_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVNBDState *s = bs->opaque;
int result;
qemu_co_mutex_init(&s->send_mutex);
qemu_co_mutex_init(&s->free_sema);
/* Pop the config into our state object. Exit if invalid. */
nbd_config(s, options, &export, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
result = nbd_config(s, options);
if (result != 0) {
return result;
}
/* establish TCP connection, return error if it fails
* TODO: Configurable retry-until-timeout behaviour.
*/
sock = nbd_establish_connection(bs, errp);
if (sock < 0) {
return sock;
}
result = nbd_establish_connection(bs);
/* NBD handshake */
result = nbd_client_session_init(&s->client, bs, sock, export);
g_free(export);
return result;
}
static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_READ;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, qiov, offset);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_WRITE;
if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_readv(&s->client, sector_num,
nb_sectors, qiov);
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_writev(&s->client, sector_num,
nb_sectors, qiov);
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_flush(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
return nbd_client_session_co_flush(&s->client);
if (!(s->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
request.type = NBD_CMD_FLUSH;
if (s->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
return nbd_client_session_co_discard(&s->client, sector_num,
nb_sectors);
if (!(s->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.type = NBD_CMD_TRIM;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
g_free(s->export_name);
qemu_opts_del(s->socket_opts);
nbd_client_session_close(&s->client);
nbd_teardown_connection(bs);
}
static int64_t nbd_getlength(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
return s->client.size;
return s->size;
}
static BlockDriver bdrv_nbd = {


@@ -1,446 +0,0 @@
/*
* QEMU Block driver for native access to files on NFS shares
*
* Copyright (c) 2014 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "config-host.h"
#include <poll.h>
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "trace.h"
#include "qemu/iov.h"
#include "qemu/uri.h"
#include "sysemu/sysemu.h"
#include <nfsc/libnfs.h>
typedef struct NFSClient {
struct nfs_context *context;
struct nfsfh *fh;
int events;
bool has_zero_init;
} NFSClient;
typedef struct NFSRPC {
int ret;
int complete;
QEMUIOVector *iov;
struct stat *st;
Coroutine *co;
QEMUBH *bh;
} NFSRPC;
static void nfs_process_read(void *arg);
static void nfs_process_write(void *arg);
static void nfs_set_events(NFSClient *client)
{
int ev = nfs_which_events(client->context);
if (ev != client->events) {
qemu_aio_set_fd_handler(nfs_get_fd(client->context),
(ev & POLLIN) ? nfs_process_read : NULL,
(ev & POLLOUT) ? nfs_process_write : NULL,
client);
}
client->events = ev;
}
static void nfs_process_read(void *arg)
{
NFSClient *client = arg;
nfs_service(client->context, POLLIN);
nfs_set_events(client);
}
static void nfs_process_write(void *arg)
{
NFSClient *client = arg;
nfs_service(client->context, POLLOUT);
nfs_set_events(client);
}
static void nfs_co_init_task(NFSClient *client, NFSRPC *task)
{
*task = (NFSRPC) {
.co = qemu_coroutine_self(),
};
}
static void nfs_co_generic_bh_cb(void *opaque)
{
NFSRPC *task = opaque;
qemu_bh_delete(task->bh);
qemu_coroutine_enter(task->co, NULL);
}
static void
nfs_co_generic_cb(int ret, struct nfs_context *nfs, void *data,
void *private_data)
{
NFSRPC *task = private_data;
task->complete = 1;
task->ret = ret;
if (task->ret > 0 && task->iov) {
if (task->ret <= task->iov->size) {
qemu_iovec_from_buf(task->iov, 0, data, task->ret);
} else {
task->ret = -EIO;
}
}
if (task->ret == 0 && task->st) {
memcpy(task->st, data, sizeof(struct stat));
}
if (task->ret < 0) {
error_report("NFS Error: %s", nfs_get_error(nfs));
}
if (task->co) {
task->bh = qemu_bh_new(nfs_co_generic_bh_cb, task);
qemu_bh_schedule(task->bh);
}
}
static int coroutine_fn nfs_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
NFSClient *client = bs->opaque;
NFSRPC task;
nfs_co_init_task(client, &task);
task.iov = iov;
if (nfs_pread_async(client->context, client->fh,
sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE,
nfs_co_generic_cb, &task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
if (task.ret < 0) {
return task.ret;
}
/* zero pad short reads */
if (task.ret < iov->size) {
qemu_iovec_memset(iov, task.ret, 0, iov->size - task.ret);
}
return 0;
}
static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
NFSClient *client = bs->opaque;
NFSRPC task;
char *buf = NULL;
nfs_co_init_task(client, &task);
buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
if (nfs_pwrite_async(client->context, client->fh,
sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE,
buf, nfs_co_generic_cb, &task) != 0) {
g_free(buf);
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
g_free(buf);
if (task.ret != nb_sectors * BDRV_SECTOR_SIZE) {
return task.ret < 0 ? task.ret : -EIO;
}
return 0;
}
static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
NFSRPC task;
nfs_co_init_task(client, &task);
if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
return task.ret;
}
/* TODO Convert to fine grained options */
static QemuOptsList runtime_opts = {
.name = "nfs",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "URL to the NFS file",
},
{ /* end of list */ }
},
};
static void nfs_client_close(NFSClient *client)
{
if (client->context) {
if (client->fh) {
nfs_close(client->context, client->fh);
}
qemu_aio_set_fd_handler(nfs_get_fd(client->context), NULL, NULL, NULL);
nfs_destroy_context(client->context);
}
memset(client, 0, sizeof(NFSClient));
}
static void nfs_file_close(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
nfs_client_close(client);
}
static int64_t nfs_client_open(NFSClient *client, const char *filename,
int flags, Error **errp)
{
int ret = -EINVAL, i;
struct stat st;
URI *uri;
QueryParams *qp = NULL;
char *file = NULL, *strp = NULL;
uri = uri_parse(filename);
if (!uri) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
if (!uri->server) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
strp = strrchr(uri->path, '/');
if (strp == NULL) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
file = g_strdup(strp);
*strp = 0;
client->context = nfs_init_context();
if (client->context == NULL) {
error_setg(errp, "Failed to init NFS context");
goto fail;
}
qp = query_params_parse(uri->query);
for (i = 0; i < qp->n; i++) {
if (!qp->p[i].value) {
error_setg(errp, "Value for NFS parameter expected: %s",
qp->p[i].name);
goto fail;
}
if (!strncmp(qp->p[i].name, "uid", 3)) {
nfs_set_uid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "gid", 3)) {
nfs_set_gid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "tcp-syncnt", 10)) {
nfs_set_tcp_syncnt(client->context, atoi(qp->p[i].value));
} else {
error_setg(errp, "Unknown NFS parameter name: %s",
qp->p[i].name);
goto fail;
}
}
ret = nfs_mount(client->context, uri->server, uri->path);
if (ret < 0) {
error_setg(errp, "Failed to mount nfs share: %s",
nfs_get_error(client->context));
goto fail;
}
if (flags & O_CREAT) {
ret = nfs_creat(client->context, file, 0600, &client->fh);
if (ret < 0) {
error_setg(errp, "Failed to create file: %s",
nfs_get_error(client->context));
goto fail;
}
} else {
ret = nfs_open(client->context, file, flags, &client->fh);
if (ret < 0) {
error_setg(errp, "Failed to open file : %s",
nfs_get_error(client->context));
goto fail;
}
}
ret = nfs_fstat(client->context, client->fh, &st);
if (ret < 0) {
error_setg(errp, "Failed to fstat file: %s",
nfs_get_error(client->context));
goto fail;
}
ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE);
client->has_zero_init = S_ISREG(st.st_mode);
goto out;
fail:
nfs_client_close(client);
out:
if (qp) {
query_params_free(qp);
}
uri_free(uri);
g_free(file);
return ret;
}
static int nfs_file_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp) {
NFSClient *client = bs->opaque;
int64_t ret;
QemuOpts *opts;
Error *local_err = NULL;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
}
ret = nfs_client_open(client, qemu_opt_get(opts, "filename"),
(flags & BDRV_O_RDWR) ? O_RDWR : O_RDONLY,
errp);
if (ret < 0) {
return ret;
}
bs->total_sectors = ret;
return 0;
}
static int nfs_file_create(const char *url, QEMUOptionParameter *options,
Error **errp)
{
int ret = 0;
int64_t total_size = 0;
NFSClient *client = g_malloc0(sizeof(NFSClient));
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, "size")) {
total_size = options->value.n;
}
options++;
}
ret = nfs_client_open(client, url, O_CREAT, errp);
if (ret < 0) {
goto out;
}
ret = nfs_ftruncate(client->context, client->fh, total_size);
nfs_client_close(client);
out:
g_free(client);
return ret;
}
static int nfs_has_zero_init(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
return client->has_zero_init;
}
static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
NFSRPC task = {0};
struct stat st;
task.st = &st;
if (nfs_fstat_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_aio_wait();
}
return (task.ret < 0 ? task.ret : st.st_blocks * st.st_blksize);
}
static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
{
NFSClient *client = bs->opaque;
return nfs_ftruncate(client->context, client->fh, offset);
}
static BlockDriver bdrv_nfs = {
.format_name = "nfs",
.protocol_name = "nfs",
.instance_size = sizeof(NFSClient),
.bdrv_needs_filename = true,
.bdrv_has_zero_init = nfs_has_zero_init,
.bdrv_get_allocated_file_size = nfs_get_allocated_file_size,
.bdrv_truncate = nfs_file_truncate,
.bdrv_file_open = nfs_file_open,
.bdrv_close = nfs_file_close,
.bdrv_create = nfs_file_create,
.bdrv_co_readv = nfs_co_readv,
.bdrv_co_writev = nfs_co_writev,
.bdrv_co_flush_to_disk = nfs_co_flush,
};
static void nfs_block_init(void)
{
bdrv_register(&bdrv_nfs);
}
block_init(nfs_block_init);
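(Annotation, not part of the diff.) nfs_client_open() above splits the URL path at the last '/' so that the directory is mounted with nfs_mount() and only the basename is handed to nfs_open()/nfs_creat(). A standalone sketch of just that split, using a made-up path:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* hypothetical path from nfs://server/export/images/disk.qcow2 */
    char *path = strdup("/export/images/disk.qcow2");
    char *strp = strrchr(path, '/');
    char *file = strdup(strp);      /* "/disk.qcow2": opened after the mount */
    *strp = 0;                      /* "/export/images": passed to the mount */
    printf("mount %s, then open %s\n", path, file);
    free(file);
    free(path);
    return 0;
}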


@@ -49,9 +49,9 @@ typedef struct BDRVParallelsState {
CoMutex lock;
uint32_t *catalog_bitmap;
unsigned int catalog_size;
int catalog_size;
unsigned int tracks;
int tracks;
} BDRVParallelsState;
static int parallels_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -68,8 +68,7 @@ static int parallels_probe(const uint8_t *buf, int buf_size, const char *filenam
return 0;
}
static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int parallels_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVParallelsState *s = bs->opaque;
int i;
@@ -85,26 +84,15 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
if (memcmp(ph.magic, HEADER_MAGIC, 16) ||
(le32_to_cpu(ph.version) != HEADER_VERSION)) {
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
bs->total_sectors = le32_to_cpu(ph.nb_sectors);
s->tracks = le32_to_cpu(ph.tracks);
if (s->tracks == 0) {
error_setg(errp, "Invalid image: Zero sectors per track");
ret = -EINVAL;
goto fail;
}
s->catalog_size = le32_to_cpu(ph.catalog_entries);
if (s->catalog_size > INT_MAX / 4) {
error_setg(errp, "Catalog too large");
ret = -EFBIG;
goto fail;
}
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
ret = bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4);


@@ -1,624 +0,0 @@
/*
* Block layer qmp and info dump related functions
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "block/qapi.h"
#include "block/block_int.h"
#include "qmp-commands.h"
#include "qapi-visit.h"
#include "qapi/qmp-output-visitor.h"
#include "qapi/qmp/types.h"
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs)
{
BlockDeviceInfo *info = g_malloc0(sizeof(*info));
info->file = g_strdup(bs->filename);
info->ro = bs->read_only;
info->drv = g_strdup(bs->drv->format_name);
info->encrypted = bs->encrypted;
info->encryption_key_missing = bdrv_key_required(bs);
if (bs->node_name[0]) {
info->has_node_name = true;
info->node_name = g_strdup(bs->node_name);
}
if (bs->backing_file[0]) {
info->has_backing_file = true;
info->backing_file = g_strdup(bs->backing_file);
}
info->backing_file_depth = bdrv_get_backing_file_depth(bs);
info->detect_zeroes = bs->detect_zeroes;
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_get_config(&bs->throttle_state, &cfg);
info->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
info->iops = cfg.buckets[THROTTLE_OPS_TOTAL].avg;
info->iops_rd = cfg.buckets[THROTTLE_OPS_READ].avg;
info->iops_wr = cfg.buckets[THROTTLE_OPS_WRITE].avg;
info->has_bps_max = cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->bps_max = cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->has_bps_rd_max = cfg.buckets[THROTTLE_BPS_READ].max;
info->bps_rd_max = cfg.buckets[THROTTLE_BPS_READ].max;
info->has_bps_wr_max = cfg.buckets[THROTTLE_BPS_WRITE].max;
info->bps_wr_max = cfg.buckets[THROTTLE_BPS_WRITE].max;
info->has_iops_max = cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->iops_max = cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->has_iops_rd_max = cfg.buckets[THROTTLE_OPS_READ].max;
info->iops_rd_max = cfg.buckets[THROTTLE_OPS_READ].max;
info->has_iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->has_iops_size = cfg.op_size;
info->iops_size = cfg.op_size;
}
return info;
}
/*
* Returns 0 on success, with *p_list either set to describe snapshot
* information, or NULL because there are no snapshots. Returns -errno on
* error, with *p_list untouched.
*/
int bdrv_query_snapshot_info_list(BlockDriverState *bs,
SnapshotInfoList **p_list,
Error **errp)
{
int i, sn_count;
QEMUSnapshotInfo *sn_tab = NULL;
SnapshotInfoList *info_list, *cur_item = NULL, *head = NULL;
SnapshotInfo *info;
sn_count = bdrv_snapshot_list(bs, &sn_tab);
if (sn_count < 0) {
const char *dev = bdrv_get_device_name(bs);
switch (sn_count) {
case -ENOMEDIUM:
error_setg(errp, "Device '%s' is not inserted", dev);
break;
case -ENOTSUP:
error_setg(errp,
"Device '%s' does not support internal snapshots",
dev);
break;
default:
error_setg_errno(errp, -sn_count,
"Can't list snapshots of device '%s'", dev);
break;
}
return sn_count;
}
for (i = 0; i < sn_count; i++) {
info = g_new0(SnapshotInfo, 1);
info->id = g_strdup(sn_tab[i].id_str);
info->name = g_strdup(sn_tab[i].name);
info->vm_state_size = sn_tab[i].vm_state_size;
info->date_sec = sn_tab[i].date_sec;
info->date_nsec = sn_tab[i].date_nsec;
info->vm_clock_sec = sn_tab[i].vm_clock_nsec / 1000000000;
info->vm_clock_nsec = sn_tab[i].vm_clock_nsec % 1000000000;
info_list = g_new0(SnapshotInfoList, 1);
info_list->value = info;
/* XXX: waiting for the qapi to support qemu-queue.h types */
if (!cur_item) {
head = cur_item = info_list;
} else {
cur_item->next = info_list;
cur_item = info_list;
}
}
g_free(sn_tab);
*p_list = head;
return 0;
}
/**
* bdrv_query_image_info:
* @bs: block device to examine
* @p_info: location to store image information
* @errp: location to store error information
*
* Store "flat" image information in @p_info.
*
* "Flat" means it does *not* query backing image information,
* i.e. (*pinfo)->has_backing_image will be set to false and
* (*pinfo)->backing_image to NULL even when the image does in fact have
* a backing image.
*
* @p_info will be set only on success. On error, store error in @errp.
*/
void bdrv_query_image_info(BlockDriverState *bs,
ImageInfo **p_info,
Error **errp)
{
uint64_t total_sectors;
const char *backing_filename;
char backing_filename2[1024];
BlockDriverInfo bdi;
int ret;
Error *err = NULL;
ImageInfo *info = g_new0(ImageInfo, 1);
bdrv_get_geometry(bs, &total_sectors);
info->filename = g_strdup(bs->filename);
info->format = g_strdup(bdrv_get_format_name(bs));
info->virtual_size = total_sectors * 512;
info->actual_size = bdrv_get_allocated_file_size(bs);
info->has_actual_size = info->actual_size >= 0;
if (bdrv_is_encrypted(bs)) {
info->encrypted = true;
info->has_encrypted = true;
}
if (bdrv_get_info(bs, &bdi) >= 0) {
if (bdi.cluster_size != 0) {
info->cluster_size = bdi.cluster_size;
info->has_cluster_size = true;
}
info->dirty_flag = bdi.is_dirty;
info->has_dirty_flag = true;
}
info->format_specific = bdrv_get_specific_info(bs);
info->has_format_specific = info->format_specific != NULL;
backing_filename = bs->backing_file;
if (backing_filename[0] != '\0') {
info->backing_filename = g_strdup(backing_filename);
info->has_backing_filename = true;
bdrv_get_full_backing_filename(bs, backing_filename2,
sizeof(backing_filename2));
if (strcmp(backing_filename, backing_filename2) != 0) {
info->full_backing_filename =
g_strdup(backing_filename2);
info->has_full_backing_filename = true;
}
if (bs->backing_format[0]) {
info->backing_filename_format = g_strdup(bs->backing_format);
info->has_backing_filename_format = true;
}
}
ret = bdrv_query_snapshot_info_list(bs, &info->snapshots, &err);
switch (ret) {
case 0:
if (info->snapshots) {
info->has_snapshots = true;
}
break;
/* recoverable error */
case -ENOMEDIUM:
case -ENOTSUP:
error_free(err);
break;
default:
error_propagate(errp, err);
qapi_free_ImageInfo(info);
return;
}
*p_info = info;
}
/* @p_info will be set only on success. */
void bdrv_query_info(BlockDriverState *bs,
BlockInfo **p_info,
Error **errp)
{
BlockInfo *info = g_malloc0(sizeof(*info));
BlockDriverState *bs0;
ImageInfo **p_image_info;
Error *local_err = NULL;
info->device = g_strdup(bs->device_name);
info->type = g_strdup("unknown");
info->locked = bdrv_dev_is_medium_locked(bs);
info->removable = bdrv_dev_has_removable_media(bs);
if (bdrv_dev_has_removable_media(bs)) {
info->has_tray_open = true;
info->tray_open = bdrv_dev_is_tray_open(bs);
}
if (bdrv_iostatus_is_enabled(bs)) {
info->has_io_status = true;
info->io_status = bs->iostatus;
}
if (!QLIST_EMPTY(&bs->dirty_bitmaps)) {
info->has_dirty_bitmaps = true;
info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
}
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(bs);
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
}
*p_info = info;
return;
err:
qapi_free_BlockInfo(info);
}
BlockStats *bdrv_query_stats(const BlockDriverState *bs)
{
BlockStats *s;
s = g_malloc0(sizeof(*s));
if (bs->device_name[0]) {
s->has_device = true;
s->device = g_strdup(bs->device_name);
}
s->stats = g_malloc0(sizeof(*s->stats));
s->stats->rd_bytes = bs->nr_bytes[BDRV_ACCT_READ];
s->stats->wr_bytes = bs->nr_bytes[BDRV_ACCT_WRITE];
s->stats->rd_operations = bs->nr_ops[BDRV_ACCT_READ];
s->stats->wr_operations = bs->nr_ops[BDRV_ACCT_WRITE];
s->stats->wr_highest_offset = bs->wr_highest_sector * BDRV_SECTOR_SIZE;
s->stats->flush_operations = bs->nr_ops[BDRV_ACCT_FLUSH];
s->stats->wr_total_time_ns = bs->total_time_ns[BDRV_ACCT_WRITE];
s->stats->rd_total_time_ns = bs->total_time_ns[BDRV_ACCT_READ];
s->stats->flush_total_time_ns = bs->total_time_ns[BDRV_ACCT_FLUSH];
if (bs->file) {
s->has_parent = true;
s->parent = bdrv_query_stats(bs->file);
}
if (bs->backing_hd) {
s->has_backing = true;
s->backing = bdrv_query_stats(bs->backing_hd);
}
return s;
}
BlockInfoList *qmp_query_block(Error **errp)
{
BlockInfoList *head = NULL, **p_next = &head;
BlockDriverState *bs = NULL;
Error *local_err = NULL;
while ((bs = bdrv_next(bs))) {
BlockInfoList *info = g_malloc0(sizeof(*info));
bdrv_query_info(bs, &info->value, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
*p_next = info;
p_next = &info->next;
}
return head;
err:
qapi_free_BlockInfoList(head);
return NULL;
}
BlockStatsList *qmp_query_blockstats(Error **errp)
{
BlockStatsList *head = NULL, **p_next = &head;
BlockDriverState *bs = NULL;
while ((bs = bdrv_next(bs))) {
BlockStatsList *info = g_malloc0(sizeof(*info));
info->value = bdrv_query_stats(bs);
*p_next = info;
p_next = &info->next;
}
return head;
}
#define NB_SUFFIXES 4
static char *get_human_readable_size(char *buf, int buf_size, int64_t size)
{
static const char suffixes[NB_SUFFIXES] = "KMGT";
int64_t base;
int i;
if (size <= 999) {
snprintf(buf, buf_size, "%" PRId64, size);
} else {
base = 1024;
for (i = 0; i < NB_SUFFIXES; i++) {
if (size < (10 * base)) {
snprintf(buf, buf_size, "%0.1f%c",
(double)size / base,
suffixes[i]);
break;
} else if (size < (1000 * base) || i == (NB_SUFFIXES - 1)) {
snprintf(buf, buf_size, "%" PRId64 "%c",
((size + (base >> 1)) / base),
suffixes[i]);
break;
}
base = base * 1024;
}
}
return buf;
}
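(Annotation, not part of the file.) get_human_readable_size() above prints values below ten units with one decimal place and rounds larger values to the nearest whole unit by adding half the base before dividing. A standalone check with made-up sizes:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t base  = 1024;
    int64_t small = 1536;                    /* below 10 * base            */
    int64_t large = 700 * base;              /* takes the rounding branch  */
    printf("%0.1fK\n", (double)small / base);                 /* "1.5K" */
    printf("%" PRId64 "K\n", (large + (base >> 1)) / base);   /* "700K" */
    return 0;
}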
void bdrv_snapshot_dump(fprintf_function func_fprintf, void *f,
QEMUSnapshotInfo *sn)
{
char buf1[128], date_buf[128], clock_buf[128];
struct tm tm;
time_t ti;
int64_t secs;
if (!sn) {
func_fprintf(f,
"%-10s%-20s%7s%20s%15s",
"ID", "TAG", "VM SIZE", "DATE", "VM CLOCK");
} else {
ti = sn->date_sec;
localtime_r(&ti, &tm);
strftime(date_buf, sizeof(date_buf),
"%Y-%m-%d %H:%M:%S", &tm);
secs = sn->vm_clock_nsec / 1000000000;
snprintf(clock_buf, sizeof(clock_buf),
"%02d:%02d:%02d.%03d",
(int)(secs / 3600),
(int)((secs / 60) % 60),
(int)(secs % 60),
(int)((sn->vm_clock_nsec / 1000000) % 1000));
func_fprintf(f,
"%-10s%-20s%7s%20s%15s",
sn->id_str, sn->name,
get_human_readable_size(buf1, sizeof(buf1),
sn->vm_state_size),
date_buf,
clock_buf);
}
}
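(Annotation, not part of the file.) The VM CLOCK column printed by bdrv_snapshot_dump() above is derived purely from vm_clock_nsec. The same conversion, standalone and with an invented value:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t vm_clock_nsec = 3723456000000LL;     /* 1h 2m 3.456s, made up */
    int64_t secs = vm_clock_nsec / 1000000000;
    printf("%02d:%02d:%02d.%03d\n",
           (int)(secs / 3600),
           (int)((secs / 60) % 60),
           (int)(secs % 60),
           (int)((vm_clock_nsec / 1000000) % 1000));   /* 01:02:03.456 */
    return 0;
}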
static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
QDict *dict);
static void dump_qlist(fprintf_function func_fprintf, void *f, int indentation,
QList *list);
static void dump_qobject(fprintf_function func_fprintf, void *f,
int comp_indent, QObject *obj)
{
switch (qobject_type(obj)) {
case QTYPE_QINT: {
QInt *value = qobject_to_qint(obj);
func_fprintf(f, "%" PRId64, qint_get_int(value));
break;
}
case QTYPE_QSTRING: {
QString *value = qobject_to_qstring(obj);
func_fprintf(f, "%s", qstring_get_str(value));
break;
}
case QTYPE_QDICT: {
QDict *value = qobject_to_qdict(obj);
dump_qdict(func_fprintf, f, comp_indent, value);
break;
}
case QTYPE_QLIST: {
QList *value = qobject_to_qlist(obj);
dump_qlist(func_fprintf, f, comp_indent, value);
break;
}
case QTYPE_QFLOAT: {
QFloat *value = qobject_to_qfloat(obj);
func_fprintf(f, "%g", qfloat_get_double(value));
break;
}
case QTYPE_QBOOL: {
QBool *value = qobject_to_qbool(obj);
func_fprintf(f, "%s", qbool_get_int(value) ? "true" : "false");
break;
}
case QTYPE_QERROR: {
QString *value = qerror_human((QError *)obj);
func_fprintf(f, "%s", qstring_get_str(value));
QDECREF(value);
break;
}
case QTYPE_NONE:
break;
case QTYPE_MAX:
default:
abort();
}
}
static void dump_qlist(fprintf_function func_fprintf, void *f, int indentation,
QList *list)
{
const QListEntry *entry;
int i = 0;
for (entry = qlist_first(list); entry; entry = qlist_next(entry), i++) {
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
const char *format = composite ? "%*s[%i]:\n" : "%*s[%i]: ";
func_fprintf(f, format, indentation * 4, "", i);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
}
}
}
static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
QDict *dict)
{
const QDictEntry *entry;
for (entry = qdict_first(dict); entry; entry = qdict_next(dict, entry)) {
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
const char *format = composite ? "%*s%s:\n" : "%*s%s: ";
char key[strlen(entry->key) + 1];
int i;
/* replace dashes with spaces in key (variable) names */
for (i = 0; entry->key[i]; i++) {
key[i] = entry->key[i] == '-' ? ' ' : entry->key[i];
}
key[i] = 0;
func_fprintf(f, format, indentation * 4, "", key);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
}
}
}
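(Annotation, not part of the file.) Before printing, dump_qdict() above rewrites dashes in key names to spaces. A minimal standalone version of just that rewrite, with an example key:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *orig = "lazy-refcounts";    /* example key name */
    char key[strlen(orig) + 1];
    int i;

    for (i = 0; orig[i]; i++) {
        key[i] = orig[i] == '-' ? ' ' : orig[i];
    }
    key[i] = 0;
    printf("%s\n", key);                    /* prints "lazy refcounts" */
    return 0;
}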
void bdrv_image_info_specific_dump(fprintf_function func_fprintf, void *f,
ImageInfoSpecific *info_spec)
{
QmpOutputVisitor *ov = qmp_output_visitor_new();
QObject *obj, *data;
visit_type_ImageInfoSpecific(qmp_output_get_visitor(ov), &info_spec, NULL,
&error_abort);
obj = qmp_output_get_qobject(ov);
assert(qobject_type(obj) == QTYPE_QDICT);
data = qdict_get(qobject_to_qdict(obj), "data");
dump_qobject(func_fprintf, f, 1, data);
qmp_output_visitor_cleanup(ov);
}
void bdrv_image_info_dump(fprintf_function func_fprintf, void *f,
ImageInfo *info)
{
char size_buf[128], dsize_buf[128];
if (!info->has_actual_size) {
snprintf(dsize_buf, sizeof(dsize_buf), "unavailable");
} else {
get_human_readable_size(dsize_buf, sizeof(dsize_buf),
info->actual_size);
}
get_human_readable_size(size_buf, sizeof(size_buf), info->virtual_size);
func_fprintf(f,
"image: %s\n"
"file format: %s\n"
"virtual size: %s (%" PRId64 " bytes)\n"
"disk size: %s\n",
info->filename, info->format, size_buf,
info->virtual_size,
dsize_buf);
if (info->has_encrypted && info->encrypted) {
func_fprintf(f, "encrypted: yes\n");
}
if (info->has_cluster_size) {
func_fprintf(f, "cluster_size: %" PRId64 "\n",
info->cluster_size);
}
if (info->has_dirty_flag && info->dirty_flag) {
func_fprintf(f, "cleanly shut down: no\n");
}
if (info->has_backing_filename) {
func_fprintf(f, "backing file: %s", info->backing_filename);
if (info->has_full_backing_filename) {
func_fprintf(f, " (actual path: %s)", info->full_backing_filename);
}
func_fprintf(f, "\n");
if (info->has_backing_filename_format) {
func_fprintf(f, "backing file format: %s\n",
info->backing_filename_format);
}
}
if (info->has_snapshots) {
SnapshotInfoList *elem;
func_fprintf(f, "Snapshot list:\n");
bdrv_snapshot_dump(func_fprintf, f, NULL);
func_fprintf(f, "\n");
/* Ideally bdrv_snapshot_dump() would operate on SnapshotInfoList but
* we convert to the block layer's native QEMUSnapshotInfo for now.
*/
for (elem = info->snapshots; elem; elem = elem->next) {
QEMUSnapshotInfo sn = {
.vm_state_size = elem->value->vm_state_size,
.date_sec = elem->value->date_sec,
.date_nsec = elem->value->date_nsec,
.vm_clock_nsec = elem->value->vm_clock_sec * 1000000000ULL +
elem->value->vm_clock_nsec,
};
pstrcpy(sn.id_str, sizeof(sn.id_str), elem->value->id);
pstrcpy(sn.name, sizeof(sn.name), elem->value->name);
bdrv_snapshot_dump(func_fprintf, f, &sn);
func_fprintf(f, "\n");
}
}
if (info->has_format_specific) {
func_fprintf(f, "Format specific information:\n");
bdrv_image_info_specific_dump(func_fprintf, f, info->format_specific);
}
}


@@ -48,10 +48,9 @@ typedef struct QCowHeader {
uint64_t size; /* in bytes */
uint8_t cluster_bits;
uint8_t l2_bits;
uint16_t padding;
uint32_t crypt_method;
uint64_t l1_table_offset;
} QEMU_PACKED QCowHeader;
} QCowHeader;
#define L2_CACHE_SIZE 16
@@ -61,7 +60,7 @@ typedef struct BDRVQcowState {
int cluster_sectors;
int l2_bits;
int l2_size;
unsigned int l1_size;
int l1_size;
uint64_t cluster_offset_mask;
uint64_t l1_table_offset;
uint64_t *l1_table;
@@ -93,12 +92,10 @@ static int qcow_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int qcow_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVQcowState *s = bs->opaque;
unsigned int len, i, shift;
int ret;
int len, i, shift, ret;
QCowHeader header;
ret = bdrv_pread(bs->file, 0, &header, sizeof(header));
@@ -115,41 +112,23 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
be64_to_cpus(&header.l1_table_offset);
if (header.magic != QCOW_MAGIC) {
error_setg(errp, "Image not in qcow format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
if (header.version != QCOW_VERSION) {
char version[64];
snprintf(version, sizeof(version), "QCOW version %" PRIu32,
header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
snprintf(version, sizeof(version), "QCOW version %d", header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
ret = -ENOTSUP;
goto fail;
}
if (header.size <= 1) {
error_setg(errp, "Image size is too small (must be at least 2 bytes)");
if (header.size <= 1 || header.cluster_bits < 9) {
ret = -EINVAL;
goto fail;
}
if (header.cluster_bits < 9 || header.cluster_bits > 16) {
error_setg(errp, "Cluster size must be between 512 and 64k");
ret = -EINVAL;
goto fail;
}
/* l2_bits specifies number of entries; storing a uint64_t in each entry,
* so bytes = num_entries << 3. */
if (header.l2_bits < 9 - 3 || header.l2_bits > 16 - 3) {
error_setg(errp, "L2 table size must be between 512 and 64k");
ret = -EINVAL;
goto fail;
}
if (header.crypt_method > QCOW_CRYPT_AES) {
error_setg(errp, "invalid encryption method in qcow header");
ret = -EINVAL;
goto fail;
}
@@ -167,19 +146,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
/* read the level 1 table */
shift = s->cluster_bits + s->l2_bits;
if (header.size > UINT64_MAX - (1LL << shift)) {
error_setg(errp, "Image too large");
ret = -EINVAL;
goto fail;
} else {
uint64_t l1_size = (header.size + (1LL << shift) - 1) >> shift;
if (l1_size > INT_MAX / sizeof(uint64_t)) {
error_setg(errp, "Image too large");
ret = -EINVAL;
goto fail;
}
s->l1_size = l1_size;
}
s->l1_size = (header.size + (1LL << shift) - 1) >> shift;
s->l1_table_offset = header.l1_table_offset;
s->l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
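(Annotation, not part of the diff.) Both variants of the L1 sizing in the hunk above compute the ceiling of the image size divided by the bytes covered per L1 entry, where shift = cluster_bits + l2_bits. A standalone sketch with made-up numbers matching the backing-file case shown further down (512-byte clusters, 4096-entry L2 tables):

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int shift = 9 + 12;                            /* cluster_bits + l2_bits   */
    uint64_t size = 5 * 1024 * 1024;               /* hypothetical 5 MiB image */
    uint64_t l1_size = (size + (1ULL << shift) - 1) >> shift;
    assert(l1_size == 3);                          /* ceil(5 MiB / 2 MiB)      */
    printf("l1_size = %" PRIu64 "\n", l1_size);
    return 0;
}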
@@ -203,9 +170,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
if (header.backing_file_offset != 0) {
len = header.backing_file_size;
if (len > 1023) {
error_setg(errp, "Backing file name too long");
ret = -EINVAL;
goto fail;
len = 1023;
}
ret = bdrv_pread(bs->file, header.backing_file_offset,
bs->backing_file, len);
@@ -430,7 +395,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
return cluster_offset;
}
static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
static int coroutine_fn qcow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
{
BDRVQcowState *s = bs->opaque;
@@ -445,14 +410,7 @@ static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
if (n > nb_sectors)
n = nb_sectors;
*pnum = n;
if (!cluster_offset) {
return 0;
}
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->crypt_method) {
return BDRV_BLOCK_DATA;
}
cluster_offset |= (index_in_cluster << BDRV_SECTOR_BITS);
return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | cluster_offset;
return (cluster_offset != 0);
}
static int decompress_buffer(uint8_t *out_buf, int out_buf_size,
@@ -693,8 +651,7 @@ static void qcow_close(BlockDriverState *bs)
error_free(s->migration_blocker);
}
static int qcow_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int qcow_create(const char *filename, QEMUOptionParameter *options)
{
int header_size, backing_filename_len, l1_size, shift, i;
QCowHeader header;
@@ -702,7 +659,6 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
int64_t total_size = 0;
const char *backing_file = NULL;
int flags = 0;
Error *local_err = NULL;
int ret;
BlockDriverState *qcow_bs;
@@ -718,17 +674,13 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
options++;
}
ret = bdrv_create_file(filename, options, &local_err);
ret = bdrv_create_file(filename, options);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
qcow_bs = NULL;
ret = bdrv_open(&qcow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
ret = bdrv_file_open(&qcow_bs, filename, NULL, BDRV_O_RDWR);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
@@ -754,7 +706,7 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
backing_file = NULL;
}
header.cluster_bits = 9; /* 512 byte cluster to avoid copying
unmodified sectors */
unmodifyed sectors */
header.l2_bits = 12; /* 32 KB L2 tables */
} else {
header.cluster_bits = 12; /* 4 KB clusters */
@@ -799,7 +751,7 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
g_free(tmp);
ret = 0;
exit:
bdrv_unref(qcow_bs);
bdrv_delete(qcow_bs);
return ret;
}
@@ -940,11 +892,10 @@ static BlockDriver bdrv_qcow = {
.bdrv_close = qcow_close,
.bdrv_reopen_prepare = qcow_reopen_prepare,
.bdrv_create = qcow_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_readv = qcow_co_readv,
.bdrv_co_writev = qcow_co_writev,
.bdrv_co_get_block_status = qcow_co_get_block_status,
.bdrv_co_is_allocated = qcow_co_is_allocated,
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,


@@ -114,21 +114,6 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
return ret;
}
if (c == s->refcount_block_cache) {
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_REFCOUNT_BLOCK,
c->entries[i].offset, s->cluster_size);
} else if (c == s->l2_table_cache) {
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_ACTIVE_L2,
c->entries[i].offset, s->cluster_size);
} else {
ret = qcow2_pre_write_overlap_check(bs, 0,
c->entries[i].offset, s->cluster_size);
}
if (ret < 0) {
return ret;
}
if (c == s->refcount_block_cache) {
BLKDBG_EVENT(bs->file, BLKDBG_REFBLOCK_UPDATE_PART);
} else if (c == s->l2_table_cache) {
@@ -200,24 +185,6 @@ void qcow2_cache_depends_on_flush(Qcow2Cache *c)
c->depends_on_flush = true;
}
int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c)
{
int ret, i;
ret = qcow2_cache_flush(bs, c);
if (ret < 0) {
return ret;
}
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
c->entries[i].offset = 0;
c->entries[i].cache_hits = 0;
}
return 0;
}
static int qcow2_cache_find_entry_to_replace(Qcow2Cache *c)
{
int i;


@@ -35,20 +35,12 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
BDRVQcowState *s = bs->opaque;
int new_l1_size2, ret, i;
uint64_t *new_l1_table;
int64_t old_l1_table_offset, old_l1_size;
int64_t new_l1_table_offset, new_l1_size;
uint8_t data[12];
if (min_size <= s->l1_size)
return 0;
/* Do a sanity check on min_size before trying to calculate new_l1_size
* (this prevents overflows during the while loop for the calculation of
* new_l1_size) */
if (min_size > INT_MAX / sizeof(uint64_t)) {
return -EFBIG;
}
if (exact_size) {
new_l1_size = min_size;
} else {
@@ -62,7 +54,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
}
}
if (new_l1_size > INT_MAX / sizeof(uint64_t)) {
if (new_l1_size > INT_MAX) {
return -EFBIG;
}
@@ -88,14 +80,6 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
goto fail;
}
/* the L1 position has not yet been updated, so these clusters must
* indeed be completely free */
ret = qcow2_pre_write_overlap_check(bs, 0, new_l1_table_offset,
new_l1_size2);
if (ret < 0) {
goto fail;
}
BLKDBG_EVENT(bs->file, BLKDBG_L1_GROW_WRITE_TABLE);
for(i = 0; i < s->l1_size; i++)
new_l1_table[i] = cpu_to_be64(new_l1_table[i]);
@@ -108,24 +92,20 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
/* set new table */
BLKDBG_EVENT(bs->file, BLKDBG_L1_GROW_ACTIVATE_TABLE);
cpu_to_be32w((uint32_t*)data, new_l1_size);
stq_be_p(data + 4, new_l1_table_offset);
cpu_to_be64wu((uint64_t*)(data + 4), new_l1_table_offset);
ret = bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, l1_size), data, sizeof(data));
if (ret < 0) {
goto fail;
}
g_free(s->l1_table);
old_l1_table_offset = s->l1_table_offset;
qcow2_free_clusters(bs, s->l1_table_offset, s->l1_size * sizeof(uint64_t));
s->l1_table_offset = new_l1_table_offset;
s->l1_table = new_l1_table;
old_l1_size = s->l1_size;
s->l1_size = new_l1_size;
qcow2_free_clusters(bs, old_l1_table_offset, old_l1_size * sizeof(uint64_t),
QCOW2_DISCARD_OTHER);
return 0;
fail:
g_free(new_l1_table);
qcow2_free_clusters(bs, new_l1_table_offset, new_l1_size2,
QCOW2_DISCARD_OTHER);
qcow2_free_clusters(bs, new_l1_table_offset, new_l1_size2);
return ret;
}
@@ -155,7 +135,7 @@ static int l2_load(BlockDriverState *bs, uint64_t l2_offset,
* and we really don't want bdrv_pread to perform a read-modify-write)
*/
#define L1_ENTRIES_PER_SECTOR (512 / 8)
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
static int write_l1_entry(BlockDriverState *bs, int l1_index)
{
BDRVQcowState *s = bs->opaque;
uint64_t buf[L1_ENTRIES_PER_SECTOR];
@@ -167,12 +147,6 @@ int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
buf[i] = cpu_to_be64(s->l1_table[l1_start_index + i]);
}
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_ACTIVE_L1,
s->l1_table_offset + 8 * l1_start_index, sizeof(buf));
if (ret < 0) {
return ret;
}
BLKDBG_EVENT(bs->file, BLKDBG_L1_UPDATE);
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset + 8 * l1_start_index,
buf, sizeof(buf));
@@ -197,7 +171,7 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
{
BDRVQcowState *s = bs->opaque;
uint64_t old_l2_offset;
uint64_t *l2_table = NULL;
uint64_t *l2_table;
int64_t l2_offset;
int ret;
@@ -209,8 +183,7 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
l2_offset = qcow2_alloc_clusters(bs, s->l2_size * sizeof(uint64_t));
if (l2_offset < 0) {
ret = l2_offset;
goto fail;
return l2_offset;
}
ret = qcow2_cache_flush(bs, s->refcount_block_cache);
@@ -223,7 +196,7 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
trace_qcow2_l2_allocate_get_empty(bs, l1_index);
ret = qcow2_cache_get_empty(bs, s->l2_table_cache, l2_offset, (void**) table);
if (ret < 0) {
goto fail;
return ret;
}
l2_table = *table;
@@ -264,7 +237,7 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
/* update the L1 entry */
trace_qcow2_l2_allocate_write_l1(bs, l1_index);
s->l1_table[l1_index] = l2_offset | QCOW_OFLAG_COPIED;
ret = qcow2_write_l1_entry(bs, l1_index);
ret = write_l1_entry(bs, l1_index);
if (ret < 0) {
goto fail;
}
@@ -275,14 +248,8 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
fail:
trace_qcow2_l2_allocate_done(bs, l1_index, ret);
if (l2_table != NULL) {
qcow2_cache_put(bs, s->l2_table_cache, (void**) table);
}
qcow2_cache_put(bs, s->l2_table_cache, (void**) table);
s->l1_table[l1_index] = old_l2_offset;
if (l2_offset > 0) {
qcow2_free_clusters(bs, l2_offset, s->l2_size * sizeof(uint64_t),
QCOW2_DISCARD_ALWAYS);
}
return ret;
}
@@ -294,26 +261,23 @@ fail:
* cluster which may require a different handling)
*/
static int count_contiguous_clusters(uint64_t nb_clusters, int cluster_size,
uint64_t *l2_table, uint64_t stop_flags)
uint64_t *l2_table, uint64_t start, uint64_t stop_flags)
{
int i;
uint64_t mask = stop_flags | L2E_OFFSET_MASK | QCOW_OFLAG_COMPRESSED;
uint64_t first_entry = be64_to_cpu(l2_table[0]);
uint64_t offset = first_entry & mask;
uint64_t mask = stop_flags | L2E_OFFSET_MASK;
uint64_t offset = be64_to_cpu(l2_table[0]) & mask;
if (!offset)
return 0;
assert(qcow2_get_cluster_type(first_entry) != QCOW2_CLUSTER_COMPRESSED);
for (i = 0; i < nb_clusters; i++) {
for (i = start; i < start + nb_clusters; i++) {
uint64_t l2_entry = be64_to_cpu(l2_table[i]) & mask;
if (offset + (uint64_t) i * cluster_size != l2_entry) {
break;
}
}
return i;
return (i - start);
}
static int count_contiguous_free_clusters(uint64_t nb_clusters, uint64_t *l2_table)
@@ -366,6 +330,15 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
struct iovec iov;
int n, ret;
/*
* If this is the last cluster and it is only partially used, we must only
* copy until the end of the image, or bdrv_check_request will fail for the
* bdrv_read/write calls below.
*/
if (start_sect + n_end > bs->total_sectors) {
n_end = bs->total_sectors - start_sect;
}
n = n_end - n_start;
if (n <= 0) {
return 0;
@@ -378,11 +351,6 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
BLKDBG_EVENT(bs->file, BLKDBG_COW_READ);
if (!bs->drv) {
ret = -ENOMEDIUM;
goto out;
}
/* Call .bdrv_co_readv() directly instead of using the public block-layer
* interface. This avoids double I/O throttling and request tracking,
* which can lead to deadlock when block layer copy-on-read is enabled.
@@ -398,12 +366,6 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
&s->aes_encrypt_key);
}
ret = qcow2_pre_write_overlap_check(bs, 0,
cluster_offset + n_start * BDRV_SECTOR_SIZE, n * BDRV_SECTOR_SIZE);
if (ret < 0) {
goto out;
}
BLKDBG_EVENT(bs->file, BLKDBG_COW_WRITE);
ret = bdrv_co_writev(bs->file, (cluster_offset >> 9) + n_start, n, &qiov);
if (ret < 0) {
@@ -499,11 +461,11 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
break;
case QCOW2_CLUSTER_ZERO:
if (s->qcow_version < 3) {
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
return -EIO;
}
c = count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], QCOW_OFLAG_ZERO);
&l2_table[l2_index], 0,
QCOW_OFLAG_COMPRESSED | QCOW_OFLAG_ZERO);
*cluster_offset = 0;
break;
case QCOW2_CLUSTER_UNALLOCATED:
@@ -514,7 +476,8 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
case QCOW2_CLUSTER_NORMAL:
/* how many allocated clusters ? */
c = count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], QCOW_OFLAG_ZERO);
&l2_table[l2_index], 0,
QCOW_OFLAG_COMPRESSED | QCOW_OFLAG_ZERO);
*cluster_offset &= L2E_OFFSET_MASK;
break;
default:
@@ -585,8 +548,7 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
/* Then decrease the refcount of the old table */
if (l2_offset) {
qcow2_free_clusters(bs, l2_offset, s->l2_size * sizeof(uint64_t),
QCOW2_DISCARD_OTHER);
qcow2_free_clusters(bs, l2_offset, s->l2_size * sizeof(uint64_t));
}
}
@@ -730,7 +692,6 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
}
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
assert(l2_index + m->nb_clusters <= s->l2_size);
for (i = 0; i < m->nb_clusters; i++) {
/* if two concurrent writes happen to the same unallocated cluster
* each write allocates separate cluster and writes data concurrently.
@@ -754,14 +715,10 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
/*
* If this was a COW, we need to decrease the refcount of the old cluster.
* Also flush bs->file to get the right order for L2 and refcount update.
*
* Don't discard clusters that reach a refcount of 0 (e.g. compressed
* clusters), the next write will reuse them anyway.
*/
if (j != 0) {
for (i = 0; i < j; i++) {
qcow2_free_any_clusters(bs, be64_to_cpu(old_cluster[i]), 1,
QCOW2_DISCARD_NEVER);
qcow2_free_any_clusters(bs, be64_to_cpu(old_cluster[i]), 1);
}
}
@@ -944,7 +901,7 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
/* We keep all QCOW_OFLAG_COPIED clusters */
keep_clusters =
count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index],
&l2_table[l2_index], 0,
QCOW_OFLAG_COPIED | QCOW_OFLAG_ZERO);
assert(keep_clusters <= nb_clusters);
@@ -1186,7 +1143,7 @@ fail:
* Return 0 on success and -errno in error cases
*/
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *host_offset, QCowL2Meta **m)
int n_start, int n_end, int *num, uint64_t *host_offset, QCowL2Meta **m)
{
BDRVQcowState *s = bs->opaque;
uint64_t start, remaining;
@@ -1194,13 +1151,15 @@ int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
uint64_t cur_bytes;
int ret;
trace_qcow2_alloc_clusters_offset(qemu_coroutine_self(), offset, *num);
trace_qcow2_alloc_clusters_offset(qemu_coroutine_self(), offset,
n_start, n_end);
assert((offset & ~BDRV_SECTOR_MASK) == 0);
assert(n_start * BDRV_SECTOR_SIZE == offset_into_cluster(s, offset));
offset = start_of_cluster(s, offset);
again:
start = offset;
remaining = *num << BDRV_SECTOR_BITS;
start = offset + (n_start << BDRV_SECTOR_BITS);
remaining = (n_end - n_start) << BDRV_SECTOR_BITS;
cluster_offset = 0;
*host_offset = 0;
cur_bytes = 0;
@@ -1286,7 +1245,7 @@ again:
}
}
*num -= remaining >> BDRV_SECTOR_BITS;
*num = (n_end - n_start) - (remaining >> BDRV_SECTOR_BITS);
assert(*num > 0);
assert(*host_offset != 0);
@@ -1351,7 +1310,7 @@ int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
* clusters.
*/
static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
unsigned int nb_clusters, enum qcow2_discard_type type)
unsigned int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table;
@@ -1368,47 +1327,19 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
for (i = 0; i < nb_clusters; i++) {
uint64_t old_l2_entry;
uint64_t old_offset;
old_l2_entry = be64_to_cpu(l2_table[l2_index + i]);
/*
* Make sure that a discarded area reads back as zeroes for v3 images
* (we cannot do it for v2 without actually writing a zero-filled
* buffer). We can skip the operation if the cluster is already marked
* as zero, or if it's unallocated and we don't have a backing file.
*
* TODO We might want to use bdrv_get_block_status(bs) here, but we're
* holding s->lock, so that doesn't work today.
*/
switch (qcow2_get_cluster_type(old_l2_entry)) {
case QCOW2_CLUSTER_UNALLOCATED:
if (!bs->backing_hd) {
continue;
}
break;
case QCOW2_CLUSTER_ZERO:
continue;
case QCOW2_CLUSTER_NORMAL:
case QCOW2_CLUSTER_COMPRESSED:
break;
default:
abort();
old_offset = be64_to_cpu(l2_table[l2_index + i]);
if ((old_offset & L2E_OFFSET_MASK) == 0) {
continue;
}
/* First remove L2 entries */
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (s->qcow_version >= 3) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
} else {
l2_table[l2_index + i] = cpu_to_be64(0);
}
l2_table[l2_index + i] = cpu_to_be64(0);
/* Then decrease the refcount */
qcow2_free_any_clusters(bs, old_l2_entry, 1, type);
qcow2_free_any_clusters(bs, old_offset, 1);
}
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
@@ -1420,7 +1351,7 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
}
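One side of the discard hunk above (the version with the cluster-type switch) implements the rule spelled out in its comment: for v3 images a discarded cluster must afterwards read back as zeroes, so its L2 entry is replaced by the zero flag, while v2 entries simply become 0 (which is also why unallocated clusters can be skipped when there is no backing file). A tiny sketch of just that replacement rule; the flag value matches the QCOW_OFLAG_ZERO definition shown later in this compare, the helper itself is purely illustrative:

#include <stdint.h>
#include <stdio.h>

#define QCOW_OFLAG_ZERO (1ULL << 0) /* "reads as zeros" flag, qcow2 v3 only */

/* Illustrative helper: what a discarded L2 entry is replaced with. */
static uint64_t discarded_l2_entry(int qcow_version)
{
    return qcow_version >= 3 ? QCOW_OFLAG_ZERO : 0;
}

int main(void)
{
    printf("v2 entry: %#llx\n", (unsigned long long)discarded_l2_entry(2));
    printf("v3 entry: %#llx\n", (unsigned long long)discarded_l2_entry(3));
    return 0;
}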
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors, enum qcow2_discard_type type)
int nb_sectors)
{
BDRVQcowState *s = bs->opaque;
uint64_t end_offset;
@@ -1431,7 +1362,7 @@ int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
/* Round start up and end down */
offset = align_offset(offset, s->cluster_size);
end_offset = start_of_cluster(s, end_offset);
end_offset &= ~(s->cluster_size - 1);
if (offset > end_offset) {
return 0;
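qcow2_discard_clusters() only drops whole clusters, so the requested byte range is shrunk to cluster boundaries: the start is rounded up, the end is rounded down, and if they cross there is nothing to do. A standalone sketch of that rounding, assuming a 64 KiB cluster size and made-up request values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t cluster_size = 65536;          /* assumed 64 KiB clusters */
    uint64_t offset = 70000, nb_bytes = 200000;
    uint64_t end_offset = offset + nb_bytes;

    /* round start up and end down to cluster boundaries */
    offset = (offset + cluster_size - 1) & ~(cluster_size - 1);
    end_offset &= ~(cluster_size - 1);

    if (offset > end_offset) {
        printf("nothing to discard\n");
    } else {
        printf("discard %llu clusters starting at %#llx\n",
               (unsigned long long)((end_offset - offset) / cluster_size),
               (unsigned long long)offset);
    }
    return 0;
}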
@@ -1439,25 +1370,18 @@ int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
nb_clusters = size_to_clusters(s, end_offset - offset);
s->cache_discards = true;
/* Each L2 table is handled by its own loop iteration */
while (nb_clusters > 0) {
ret = discard_single_l2(bs, offset, nb_clusters, type);
ret = discard_single_l2(bs, offset, nb_clusters);
if (ret < 0) {
goto fail;
return ret;
}
nb_clusters -= ret;
offset += (ret * s->cluster_size);
}
ret = 0;
fail:
s->cache_discards = false;
qcow2_process_discards(bs, ret);
return ret;
return 0;
}
/*
@@ -1491,7 +1415,7 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (old_offset & QCOW_OFLAG_COMPRESSED) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
qcow2_free_any_clusters(bs, old_offset, 1, QCOW2_DISCARD_REQUEST);
qcow2_free_any_clusters(bs, old_offset, 1);
} else {
l2_table[l2_index + i] |= cpu_to_be64(QCOW_OFLAG_ZERO);
}
@@ -1519,274 +1443,15 @@ int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors)
/* Each L2 table is handled by its own loop iteration */
nb_clusters = size_to_clusters(s, nb_sectors << BDRV_SECTOR_BITS);
s->cache_discards = true;
while (nb_clusters > 0) {
ret = zero_single_l2(bs, offset, nb_clusters);
if (ret < 0) {
goto fail;
return ret;
}
nb_clusters -= ret;
offset += (ret * s->cluster_size);
}
ret = 0;
fail:
s->cache_discards = false;
qcow2_process_discards(bs, ret);
return ret;
}
/*
* Expands all zero clusters in a specific L1 table (or deallocates them, for
* non-backed non-pre-allocated zero clusters).
*
* expanded_clusters is a bitmap where every bit corresponds to one cluster in
* the image file; a bit gets set if the corresponding cluster has been used for
* zero expansion (i.e., has been filled with zeroes and is referenced from an
* L2 table). nb_clusters contains the total cluster count of the image file,
* i.e., the number of bits in expanded_clusters.
*/
static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
int l1_size, uint8_t **expanded_clusters,
uint64_t *nb_clusters)
{
BDRVQcowState *s = bs->opaque;
bool is_active_l1 = (l1_table == s->l1_table);
uint64_t *l2_table = NULL;
int ret;
int i, j;
if (!is_active_l1) {
/* inactive L2 tables require a buffer to be stored in when loading
* them from disk */
l2_table = qemu_blockalign(bs, s->cluster_size);
}
for (i = 0; i < l1_size; i++) {
uint64_t l2_offset = l1_table[i] & L1E_OFFSET_MASK;
bool l2_dirty = false;
if (!l2_offset) {
/* unallocated */
continue;
}
if (is_active_l1) {
/* get active L2 tables from cache */
ret = qcow2_cache_get(bs, s->l2_table_cache, l2_offset,
(void **)&l2_table);
} else {
/* load inactive L2 tables from disk */
ret = bdrv_read(bs->file, l2_offset / BDRV_SECTOR_SIZE,
(void *)l2_table, s->cluster_sectors);
}
if (ret < 0) {
goto fail;
}
for (j = 0; j < s->l2_size; j++) {
uint64_t l2_entry = be64_to_cpu(l2_table[j]);
int64_t offset = l2_entry & L2E_OFFSET_MASK, cluster_index;
int cluster_type = qcow2_get_cluster_type(l2_entry);
bool preallocated = offset != 0;
if (cluster_type == QCOW2_CLUSTER_NORMAL) {
cluster_index = offset >> s->cluster_bits;
assert((cluster_index >= 0) && (cluster_index < *nb_clusters));
if ((*expanded_clusters)[cluster_index / 8] &
(1 << (cluster_index % 8))) {
/* Probably a shared L2 table; this cluster was a zero
* cluster which has been expanded, its refcount
* therefore most likely requires an update. */
ret = qcow2_update_cluster_refcount(bs, cluster_index, 1,
QCOW2_DISCARD_NEVER);
if (ret < 0) {
goto fail;
}
/* Since we just increased the refcount, the COPIED flag may
* no longer be set. */
l2_table[j] = cpu_to_be64(l2_entry & ~QCOW_OFLAG_COPIED);
l2_dirty = true;
}
continue;
}
else if (qcow2_get_cluster_type(l2_entry) != QCOW2_CLUSTER_ZERO) {
continue;
}
if (!preallocated) {
if (!bs->backing_hd) {
/* not backed; therefore we can simply deallocate the
* cluster */
l2_table[j] = 0;
l2_dirty = true;
continue;
}
offset = qcow2_alloc_clusters(bs, s->cluster_size);
if (offset < 0) {
ret = offset;
goto fail;
}
}
ret = qcow2_pre_write_overlap_check(bs, 0, offset, s->cluster_size);
if (ret < 0) {
if (!preallocated) {
qcow2_free_clusters(bs, offset, s->cluster_size,
QCOW2_DISCARD_ALWAYS);
}
goto fail;
}
ret = bdrv_write_zeroes(bs->file, offset / BDRV_SECTOR_SIZE,
s->cluster_sectors, 0);
if (ret < 0) {
if (!preallocated) {
qcow2_free_clusters(bs, offset, s->cluster_size,
QCOW2_DISCARD_ALWAYS);
}
goto fail;
}
l2_table[j] = cpu_to_be64(offset | QCOW_OFLAG_COPIED);
l2_dirty = true;
cluster_index = offset >> s->cluster_bits;
if (cluster_index >= *nb_clusters) {
uint64_t old_bitmap_size = (*nb_clusters + 7) / 8;
uint64_t new_bitmap_size;
/* The offset may lie beyond the old end of the underlying image
* file for growable files only */
assert(bs->file->growable);
*nb_clusters = size_to_clusters(s, bs->file->total_sectors *
BDRV_SECTOR_SIZE);
new_bitmap_size = (*nb_clusters + 7) / 8;
*expanded_clusters = g_realloc(*expanded_clusters,
new_bitmap_size);
/* clear the newly allocated space */
memset(&(*expanded_clusters)[old_bitmap_size], 0,
new_bitmap_size - old_bitmap_size);
}
assert((cluster_index >= 0) && (cluster_index < *nb_clusters));
(*expanded_clusters)[cluster_index / 8] |= 1 << (cluster_index % 8);
}
if (is_active_l1) {
if (l2_dirty) {
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
qcow2_cache_depends_on_flush(s->l2_table_cache);
}
ret = qcow2_cache_put(bs, s->l2_table_cache, (void **)&l2_table);
if (ret < 0) {
l2_table = NULL;
goto fail;
}
} else {
if (l2_dirty) {
ret = qcow2_pre_write_overlap_check(bs,
QCOW2_OL_INACTIVE_L2 | QCOW2_OL_ACTIVE_L2, l2_offset,
s->cluster_size);
if (ret < 0) {
goto fail;
}
ret = bdrv_write(bs->file, l2_offset / BDRV_SECTOR_SIZE,
(void *)l2_table, s->cluster_sectors);
if (ret < 0) {
goto fail;
}
}
}
}
ret = 0;
fail:
if (l2_table) {
if (!is_active_l1) {
qemu_vfree(l2_table);
} else {
if (ret < 0) {
qcow2_cache_put(bs, s->l2_table_cache, (void **)&l2_table);
} else {
ret = qcow2_cache_put(bs, s->l2_table_cache,
(void **)&l2_table);
}
}
}
return ret;
}
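The expanded_clusters bitmap described in the comment before expand_zero_clusters_in_l1() is an ordinary byte array with one bit per host cluster: the code tests and sets bit cluster_index via index / 8 and 1 << (index % 8), and grows the array with g_realloc() while zeroing the new tail whenever the image file grows. A minimal standalone version of those three operations, using plain malloc/realloc instead of glib (the helper names here are invented):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* one bit per cluster, packed into bytes */
static int bit_test(const uint8_t *map, uint64_t idx)
{
    return map[idx / 8] & (1 << (idx % 8));
}

static void bit_set(uint8_t *map, uint64_t idx)
{
    map[idx / 8] |= 1 << (idx % 8);
}

/* grow the bitmap to cover new_clusters, clearing the newly added bytes */
static uint8_t *bitmap_grow(uint8_t *map, uint64_t old_clusters,
                            uint64_t new_clusters)
{
    uint64_t old_size = (old_clusters + 7) / 8;
    uint64_t new_size = (new_clusters + 7) / 8;

    map = realloc(map, new_size);
    memset(map + old_size, 0, new_size - old_size);
    return map;
}

int main(void)
{
    uint64_t nb_clusters = 20;
    uint8_t *map = calloc((nb_clusters + 7) / 8, 1);

    bit_set(map, 13);
    map = bitmap_grow(map, nb_clusters, 100);
    printf("bit 13: %d, bit 90: %d\n", !!bit_test(map, 13), !!bit_test(map, 90));
    free(map);
    return 0;
}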
/*
* For backed images, expands all zero clusters on the image. For non-backed
* images, deallocates all non-pre-allocated zero clusters (and claims the
* allocation for pre-allocated ones). This is important for downgrading to a
* qcow2 version which doesn't yet support metadata zero clusters.
*/
int qcow2_expand_zero_clusters(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
uint64_t *l1_table = NULL;
uint64_t nb_clusters;
uint8_t *expanded_clusters;
int ret;
int i, j;
nb_clusters = size_to_clusters(s, bs->file->total_sectors *
BDRV_SECTOR_SIZE);
expanded_clusters = g_malloc0((nb_clusters + 7) / 8);
ret = expand_zero_clusters_in_l1(bs, s->l1_table, s->l1_size,
&expanded_clusters, &nb_clusters);
if (ret < 0) {
goto fail;
}
/* Inactive L1 tables may point to active L2 tables - therefore it is
* necessary to flush the L2 table cache before trying to access the L2
* tables pointed to by inactive L1 entries (else we might try to expand
* zero clusters that have already been expanded); furthermore, it is also
* necessary to empty the L2 table cache, since it may contain tables which
* are now going to be modified directly on disk, bypassing the cache.
* qcow2_cache_empty() does both for us. */
ret = qcow2_cache_empty(bs, s->l2_table_cache);
if (ret < 0) {
goto fail;
}
for (i = 0; i < s->nb_snapshots; i++) {
int l1_sectors = (s->snapshots[i].l1_size * sizeof(uint64_t) +
BDRV_SECTOR_SIZE - 1) / BDRV_SECTOR_SIZE;
l1_table = g_realloc(l1_table, l1_sectors * BDRV_SECTOR_SIZE);
ret = bdrv_read(bs->file, s->snapshots[i].l1_table_offset /
BDRV_SECTOR_SIZE, (void *)l1_table, l1_sectors);
if (ret < 0) {
goto fail;
}
for (j = 0; j < s->snapshots[i].l1_size; j++) {
be64_to_cpus(&l1_table[j]);
}
ret = expand_zero_clusters_in_l1(bs, l1_table, s->snapshots[i].l1_size,
&expanded_clusters, &nb_clusters);
if (ret < 0) {
goto fail;
}
}
ret = 0;
fail:
g_free(expanded_clusters);
g_free(l1_table);
return ret;
return 0;
}

File diff suppressed because it is too large

@@ -26,6 +26,31 @@
#include "block/block_int.h"
#include "block/qcow2.h"
typedef struct QEMU_PACKED QCowSnapshotHeader {
/* header is 8 byte aligned */
uint64_t l1_table_offset;
uint32_t l1_size;
uint16_t id_str_size;
uint16_t name_size;
uint32_t date_sec;
uint32_t date_nsec;
uint64_t vm_clock_nsec;
uint32_t vm_state_size;
uint32_t extra_data_size; /* for extension */
/* extra data follows */
/* id_str follows */
/* name follows */
} QCowSnapshotHeader;
typedef struct QEMU_PACKED QCowSnapshotExtraData {
uint64_t vm_state_size_large;
uint64_t disk_size;
} QCowSnapshotExtraData;
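On disk, each snapshot table entry is the packed QCowSnapshotHeader above, followed by the extra data, the ID string and the name, with the next entry starting at the next 8-byte boundary; qcow2_write_snapshots() below walks this layout when it sizes and writes the new table. A small sketch of that per-entry size computation (the fixed sizes and example strings are assumptions for illustration):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* sizes of the fixed parts of a snapshot table entry */
#define SNAPSHOT_HEADER_SIZE 40  /* assumed: sizeof(QCowSnapshotHeader) when packed */
#define SNAPSHOT_EXTRA_SIZE  16  /* vm_state_size_large + disk_size */

static uint64_t align8(uint64_t offset)
{
    return (offset + 7) & ~7ULL;
}

int main(void)
{
    const char *id = "1", *name = "before-upgrade";
    uint64_t entry_size = align8(SNAPSHOT_HEADER_SIZE + SNAPSHOT_EXTRA_SIZE +
                                 strlen(id) + strlen(name));

    printf("table entry occupies %llu bytes\n", (unsigned long long)entry_size);
    return 0;
}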
void qcow2_free_snapshots(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
@@ -116,14 +141,8 @@ int qcow2_read_snapshots(BlockDriverState *bs)
}
offset += name_size;
sn->name[name_size] = '\0';
if (offset - s->snapshots_offset > QCOW_MAX_SNAPSHOTS_SIZE) {
ret = -EFBIG;
goto fail;
}
}
assert(offset - s->snapshots_offset <= INT_MAX);
s->snapshots_size = offset - s->snapshots_offset;
return 0;
@@ -144,7 +163,7 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
uint32_t nb_snapshots;
uint64_t snapshots_offset;
} QEMU_PACKED header_data;
int64_t offset, snapshots_offset = 0;
int64_t offset, snapshots_offset;
int ret;
/* compute the size of the snapshots */
@@ -156,36 +175,20 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
offset += sizeof(extra);
offset += strlen(sn->id_str);
offset += strlen(sn->name);
if (offset > QCOW_MAX_SNAPSHOTS_SIZE) {
ret = -EFBIG;
goto fail;
}
}
assert(offset <= INT_MAX);
snapshots_size = offset;
/* Allocate space for the new snapshot list */
snapshots_offset = qcow2_alloc_clusters(bs, snapshots_size);
offset = snapshots_offset;
if (offset < 0) {
ret = offset;
goto fail;
return offset;
}
ret = bdrv_flush(bs);
if (ret < 0) {
goto fail;
return ret;
}
/* The snapshot list position has not yet been updated, so these clusters
* must indeed be completely free */
ret = qcow2_pre_write_overlap_check(bs, 0, offset, snapshots_size);
if (ret < 0) {
goto fail;
}
/* Write all snapshots to the new list */
for(i = 0; i < s->nb_snapshots; i++) {
sn = s->snapshots + i;
@@ -208,7 +211,6 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
id_str_size = strlen(sn->id_str);
name_size = strlen(sn->name);
assert(id_str_size <= UINT16_MAX && name_size <= UINT16_MAX);
h.id_str_size = cpu_to_be16(id_str_size);
h.name_size = cpu_to_be16(name_size);
offset = align_offset(offset, 8);
@@ -260,17 +262,12 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
}
/* free the old snapshot table */
qcow2_free_clusters(bs, s->snapshots_offset, s->snapshots_size,
QCOW2_DISCARD_SNAPSHOT);
qcow2_free_clusters(bs, s->snapshots_offset, s->snapshots_size);
s->snapshots_offset = snapshots_offset;
s->snapshots_size = snapshots_size;
return 0;
fail:
if (snapshots_offset > 0) {
qcow2_free_clusters(bs, snapshots_offset, snapshots_size,
QCOW2_DISCARD_ALWAYS);
}
return ret;
}
@@ -279,8 +276,7 @@ static void find_new_snapshot_id(BlockDriverState *bs,
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
int i;
unsigned long id, id_max = 0;
int i, id, id_max = 0;
for(i = 0; i < s->nb_snapshots; i++) {
sn = s->snapshots + i;
@@ -288,50 +284,34 @@ static void find_new_snapshot_id(BlockDriverState *bs,
if (id > id_max)
id_max = id;
}
snprintf(id_str, id_str_size, "%lu", id_max + 1);
snprintf(id_str, id_str_size, "%d", id_max + 1);
}
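find_new_snapshot_id() above parses every existing ID as a number, remembers the maximum, and prints max + 1 into the caller's buffer; the hunk shows the counter switching between unsigned long/%lu and int/%d. A standalone sketch of the same scan:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pick an unused numeric snapshot ID: one larger than the largest existing one. */
static void new_snapshot_id(const char **ids, int n, char *buf, size_t len)
{
    unsigned long id_max = 0;
    int i;

    for (i = 0; i < n; i++) {
        unsigned long id = strtoul(ids[i], NULL, 10);
        if (id > id_max) {
            id_max = id;
        }
    }
    snprintf(buf, len, "%lu", id_max + 1);
}

int main(void)
{
    const char *ids[] = { "1", "2", "7" };
    char buf[32];

    new_snapshot_id(ids, 3, buf, sizeof(buf));
    printf("next id: %s\n", buf); /* prints 8 */
    return 0;
}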
static int find_snapshot_by_id_and_name(BlockDriverState *bs,
const char *id,
const char *name)
static int find_snapshot_by_id(BlockDriverState *bs, const char *id_str)
{
BDRVQcowState *s = bs->opaque;
int i;
if (id && name) {
for (i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].id_str, id) &&
!strcmp(s->snapshots[i].name, name)) {
return i;
}
}
} else if (id) {
for (i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].id_str, id)) {
return i;
}
}
} else if (name) {
for (i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].name, name)) {
return i;
}
}
for(i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].id_str, id_str))
return i;
}
return -1;
}
static int find_snapshot_by_id_or_name(BlockDriverState *bs,
const char *id_or_name)
static int find_snapshot_by_id_or_name(BlockDriverState *bs, const char *name)
{
int ret;
BDRVQcowState *s = bs->opaque;
int i, ret;
ret = find_snapshot_by_id_and_name(bs, id_or_name, NULL);
if (ret >= 0) {
ret = find_snapshot_by_id(bs, name);
if (ret >= 0)
return ret;
for(i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].name, name))
return i;
}
return find_snapshot_by_id_and_name(bs, NULL, id_or_name);
return -1;
}
/* if no id is provided, a new one is constructed */
@@ -345,10 +325,6 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
uint64_t *l1_table = NULL;
int64_t l1_table_offset;
if (s->nb_snapshots >= QCOW_MAX_SNAPSHOTS) {
return -EFBIG;
}
memset(sn, 0, sizeof(*sn));
/* Generate an ID if it wasn't passed */
@@ -357,7 +333,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
}
/* Check that the ID is unique */
if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {
if (find_snapshot_by_id(bs, sn_info->id_str) >= 0) {
return -EEXIST;
}
@@ -386,12 +362,6 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
l1_table[i] = cpu_to_be64(s->l1_table[i]);
}
ret = qcow2_pre_write_overlap_check(bs, 0, sn->l1_table_offset,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
goto fail;
}
ret = bdrv_pwrite(bs->file, sn->l1_table_offset, l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
@@ -425,19 +395,11 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
if (ret < 0) {
g_free(s->snapshots);
s->snapshots = old_snapshot_list;
s->nb_snapshots--;
goto fail;
}
g_free(old_snapshot_list);
/* The VM state isn't needed any more in the active L1 table; in fact, it
* hurts by causing expensive COW for the next snapshot. */
qcow2_discard_clusters(bs, qcow2_vm_state_offset(s),
align_offset(sn->vm_state_size, s->cluster_size)
>> BDRV_SECTOR_BITS,
QCOW2_DISCARD_NEVER);
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
@@ -512,12 +474,6 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
goto fail;
}
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_ACTIVE_L1,
s->l1_table_offset, cur_l1_bytes);
if (ret < 0) {
goto fail;
}
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset, sn_l1_table,
cur_l1_bytes);
if (ret < 0) {
@@ -574,19 +530,15 @@ fail:
return ret;
}
int qcow2_snapshot_delete(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot sn;
int snapshot_index, ret;
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_and_name(bs, snapshot_id, name);
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
if (snapshot_index < 0) {
error_setg(errp, "Can't find the snapshot");
return -ENOENT;
}
sn = s->snapshots[snapshot_index];
@@ -598,8 +550,6 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
s->nb_snapshots--;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Failed to remove snapshot from snapshot list");
return ret;
}
@@ -617,17 +567,13 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
ret = qcow2_update_snapshot_refcount(bs, sn.l1_table_offset,
sn.l1_size, -1);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to free the cluster and L1 table");
return ret;
}
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t),
QCOW2_DISCARD_SNAPSHOT);
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t));
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Failed to update snapshot status in disk");
return ret;
}
@@ -669,10 +615,7 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
return s->nb_snapshots;
}
int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name)
{
int i, snapshot_index;
BDRVQcowState *s = bs->opaque;
@@ -684,25 +627,18 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
assert(bs->read_only);
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_and_name(bs, snapshot_id, name);
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_name);
if (snapshot_index < 0) {
error_setg(errp,
"Can't find snapshot");
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
/* Allocate and read in the snapshot's L1 table */
if (sn->l1_size > QCOW_MAX_L1_SIZE) {
error_setg(errp, "Snapshot L1 table too large");
return -EFBIG;
}
new_l1_bytes = sn->l1_size * sizeof(uint64_t);
new_l1_bytes = s->l1_size * sizeof(uint64_t);
new_l1_table = g_malloc0(align_offset(new_l1_bytes, 512));
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
error_setg(errp, "Failed to read l1 table for snapshot");
g_free(new_l1_table);
return ret;
}

File diff suppressed because it is too large

@@ -38,26 +38,13 @@
#define QCOW_CRYPT_AES 1
#define QCOW_MAX_CRYPT_CLUSTERS 32
#define QCOW_MAX_SNAPSHOTS 65536
/* 8 MB refcount table is enough for 2 PB images at 64k cluster size
* (128 GB for 512 byte clusters, 2 EB for 2 MB clusters) */
#define QCOW_MAX_REFTABLE_SIZE 0x800000
/* 32 MB L1 table is enough for 2 PB images at 64k cluster size
* (128 GB for 512 byte clusters, 2 EB for 2 MB clusters) */
#define QCOW_MAX_L1_SIZE 0x2000000
/* Allow for an average of 1k per snapshot table entry, should be plenty of
* space for snapshot names and IDs */
#define QCOW_MAX_SNAPSHOTS_SIZE (1024 * QCOW_MAX_SNAPSHOTS)
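The size comments above can be double-checked with a little arithmetic: a refcount table entry is 8 bytes and points to one cluster of 2-byte refcounts, each of which covers one cluster, so the reachable image size is (table_size / 8) * (cluster_size / 2) * cluster_size; the 32 MiB L1 limit works the same way with 8-byte L2 entries, (l1_size / 8) * (cluster_size / 8) * cluster_size. A quick check of the 2 PB figure for 64 KiB clusters (pure arithmetic, no qcow2 code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t cluster_size = 65536;
    uint64_t max_reftable = 0x800000;   /* 8 MiB  */
    uint64_t max_l1       = 0x2000000;  /* 32 MiB */

    /* refcount path: table entries -> refcount blocks -> clusters */
    uint64_t ref_bytes = (max_reftable / 8) * (cluster_size / 2) * cluster_size;

    /* L1 path: L1 entries -> L2 tables -> clusters */
    uint64_t l1_bytes = (max_l1 / 8) * (cluster_size / 8) * cluster_size;

    printf("refcount-limited size: %llu PiB\n",
           (unsigned long long)(ref_bytes >> 50));
    printf("L1-limited size:       %llu PiB\n",
           (unsigned long long)(l1_bytes >> 50));
    return 0;
}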
/* indicate that the refcount of the referenced cluster is exactly one. */
#define QCOW_OFLAG_COPIED (1ULL << 63)
#define QCOW_OFLAG_COPIED (1LL << 63)
/* indicate that the cluster is compressed (they never have the copied flag) */
#define QCOW_OFLAG_COMPRESSED (1ULL << 62)
#define QCOW_OFLAG_COMPRESSED (1LL << 62)
/* The cluster reads as all zeros */
#define QCOW_OFLAG_ZERO (1ULL << 0)
#define QCOW_OFLAG_ZERO (1LL << 0)
#define REFCOUNT_SHIFT 1 /* refcount size is 2 bytes */
@@ -72,19 +59,7 @@
#define DEFAULT_CLUSTER_SIZE 65536
#define QCOW2_OPT_LAZY_REFCOUNTS "lazy-refcounts"
#define QCOW2_OPT_DISCARD_REQUEST "pass-discard-request"
#define QCOW2_OPT_DISCARD_SNAPSHOT "pass-discard-snapshot"
#define QCOW2_OPT_DISCARD_OTHER "pass-discard-other"
#define QCOW2_OPT_OVERLAP "overlap-check"
#define QCOW2_OPT_OVERLAP_MAIN_HEADER "overlap-check.main-header"
#define QCOW2_OPT_OVERLAP_ACTIVE_L1 "overlap-check.active-l1"
#define QCOW2_OPT_OVERLAP_ACTIVE_L2 "overlap-check.active-l2"
#define QCOW2_OPT_OVERLAP_REFCOUNT_TABLE "overlap-check.refcount-table"
#define QCOW2_OPT_OVERLAP_REFCOUNT_BLOCK "overlap-check.refcount-block"
#define QCOW2_OPT_OVERLAP_SNAPSHOT_TABLE "overlap-check.snapshot-table"
#define QCOW2_OPT_OVERLAP_INACTIVE_L1 "overlap-check.inactive-l1"
#define QCOW2_OPT_OVERLAP_INACTIVE_L2 "overlap-check.inactive-l2"
#define QCOW2_OPT_LAZY_REFCOUNTS "lazy_refcounts"
typedef struct QCowHeader {
uint32_t magic;
@@ -108,33 +83,7 @@ typedef struct QCowHeader {
uint32_t refcount_order;
uint32_t header_length;
} QEMU_PACKED QCowHeader;
typedef struct QEMU_PACKED QCowSnapshotHeader {
/* header is 8 byte aligned */
uint64_t l1_table_offset;
uint32_t l1_size;
uint16_t id_str_size;
uint16_t name_size;
uint32_t date_sec;
uint32_t date_nsec;
uint64_t vm_clock_nsec;
uint32_t vm_state_size;
uint32_t extra_data_size; /* for extension */
/* extra data follows */
/* id_str follows */
/* name follows */
} QCowSnapshotHeader;
typedef struct QEMU_PACKED QCowSnapshotExtraData {
uint64_t vm_state_size_large;
uint64_t disk_size;
} QCowSnapshotExtraData;
} QCowHeader;
typedef struct QCowSnapshot {
uint64_t l1_table_offset;
@@ -167,12 +116,9 @@ enum {
/* Incompatible feature bits */
enum {
QCOW2_INCOMPAT_DIRTY_BITNR = 0,
QCOW2_INCOMPAT_CORRUPT_BITNR = 1,
QCOW2_INCOMPAT_DIRTY = 1 << QCOW2_INCOMPAT_DIRTY_BITNR,
QCOW2_INCOMPAT_CORRUPT = 1 << QCOW2_INCOMPAT_CORRUPT_BITNR,
QCOW2_INCOMPAT_MASK = QCOW2_INCOMPAT_DIRTY
| QCOW2_INCOMPAT_CORRUPT,
QCOW2_INCOMPAT_MASK = QCOW2_INCOMPAT_DIRTY,
};
/* Compatible feature bits */
@@ -183,28 +129,12 @@ enum {
QCOW2_COMPAT_FEAT_MASK = QCOW2_COMPAT_LAZY_REFCOUNTS,
};
enum qcow2_discard_type {
QCOW2_DISCARD_NEVER = 0,
QCOW2_DISCARD_ALWAYS,
QCOW2_DISCARD_REQUEST,
QCOW2_DISCARD_SNAPSHOT,
QCOW2_DISCARD_OTHER,
QCOW2_DISCARD_MAX
};
typedef struct Qcow2Feature {
uint8_t type;
uint8_t bit;
char name[46];
} QEMU_PACKED Qcow2Feature;
typedef struct Qcow2DiscardRegion {
BlockDriverState *bs;
uint64_t offset;
uint64_t bytes;
QTAILQ_ENTRY(Qcow2DiscardRegion) next;
} Qcow2DiscardRegion;
typedef struct BDRVQcowState {
int cluster_bits;
int cluster_size;
@@ -230,8 +160,8 @@ typedef struct BDRVQcowState {
uint64_t *refcount_table;
uint64_t refcount_table_offset;
uint32_t refcount_table_size;
uint64_t free_cluster_index;
uint64_t free_byte_offset;
int64_t free_cluster_index;
int64_t free_byte_offset;
CoMutex lock;
@@ -241,17 +171,12 @@ typedef struct BDRVQcowState {
AES_KEY aes_decrypt_key;
uint64_t snapshots_offset;
int snapshots_size;
unsigned int nb_snapshots;
int nb_snapshots;
QCowSnapshot *snapshots;
int flags;
int qcow_version;
bool use_lazy_refcounts;
int refcount_order;
bool discard_passthrough[QCOW2_DISCARD_MAX];
int overlap_check; /* bitmask of Qcow2MetadataOverlap values */
uint64_t incompatible_features;
uint64_t compatible_features;
@@ -260,8 +185,6 @@ typedef struct BDRVQcowState {
size_t unknown_header_fields_size;
void* unknown_header_fields;
QLIST_HEAD(, Qcow2UnknownHeaderExtension) unknown_header_ext;
QTAILQ_HEAD (, Qcow2DiscardRegion) discards;
bool cache_discards;
} BDRVQcowState;
/* XXX: use std qcow open function ? */
@@ -340,50 +263,11 @@ enum {
QCOW2_CLUSTER_ZERO
};
typedef enum QCow2MetadataOverlap {
QCOW2_OL_MAIN_HEADER_BITNR = 0,
QCOW2_OL_ACTIVE_L1_BITNR = 1,
QCOW2_OL_ACTIVE_L2_BITNR = 2,
QCOW2_OL_REFCOUNT_TABLE_BITNR = 3,
QCOW2_OL_REFCOUNT_BLOCK_BITNR = 4,
QCOW2_OL_SNAPSHOT_TABLE_BITNR = 5,
QCOW2_OL_INACTIVE_L1_BITNR = 6,
QCOW2_OL_INACTIVE_L2_BITNR = 7,
QCOW2_OL_MAX_BITNR = 8,
QCOW2_OL_NONE = 0,
QCOW2_OL_MAIN_HEADER = (1 << QCOW2_OL_MAIN_HEADER_BITNR),
QCOW2_OL_ACTIVE_L1 = (1 << QCOW2_OL_ACTIVE_L1_BITNR),
QCOW2_OL_ACTIVE_L2 = (1 << QCOW2_OL_ACTIVE_L2_BITNR),
QCOW2_OL_REFCOUNT_TABLE = (1 << QCOW2_OL_REFCOUNT_TABLE_BITNR),
QCOW2_OL_REFCOUNT_BLOCK = (1 << QCOW2_OL_REFCOUNT_BLOCK_BITNR),
QCOW2_OL_SNAPSHOT_TABLE = (1 << QCOW2_OL_SNAPSHOT_TABLE_BITNR),
QCOW2_OL_INACTIVE_L1 = (1 << QCOW2_OL_INACTIVE_L1_BITNR),
/* NOTE: Checking overlaps with inactive L2 tables will result in bdrv
* reads. */
QCOW2_OL_INACTIVE_L2 = (1 << QCOW2_OL_INACTIVE_L2_BITNR),
} QCow2MetadataOverlap;
/* Perform all overlap checks which can be done in constant time */
#define QCOW2_OL_CONSTANT \
(QCOW2_OL_MAIN_HEADER | QCOW2_OL_ACTIVE_L1 | QCOW2_OL_REFCOUNT_TABLE | \
QCOW2_OL_SNAPSHOT_TABLE)
/* Perform all overlap checks which don't require disk access */
#define QCOW2_OL_CACHED \
(QCOW2_OL_CONSTANT | QCOW2_OL_ACTIVE_L2 | QCOW2_OL_REFCOUNT_BLOCK | \
QCOW2_OL_INACTIVE_L1)
/* Perform all overlap checks */
#define QCOW2_OL_ALL \
(QCOW2_OL_CACHED | QCOW2_OL_INACTIVE_L2)
#define L1E_OFFSET_MASK 0x00fffffffffffe00ULL
#define L2E_OFFSET_MASK 0x00fffffffffffe00ULL
#define L1E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_COMPRESSED_OFFSET_SIZE_MASK 0x3fffffffffffffffULL
#define REFT_OFFSET_MASK 0xfffffffffffffe00ULL
#define REFT_OFFSET_MASK 0xffffffffffffff00ULL
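An L2 entry keeps the host cluster offset in its middle bits and the flags at both ends, which is why the code always masks entries with L2E_OFFSET_MASK before using them as offsets (the hunk above shows the two variants of that mask in this compare). A short decoding sketch using one of those mask values and an entry made up for illustration:

#include <stdint.h>
#include <stdio.h>

#define QCOW_OFLAG_COPIED     (1ULL << 63)
#define QCOW_OFLAG_COMPRESSED (1ULL << 62)
#define QCOW_OFLAG_ZERO       (1ULL << 0)
#define L2E_OFFSET_MASK       0x00fffffffffffe00ULL

int main(void)
{
    /* assumed example entry: cluster at host offset 0x50000, refcount exactly 1 */
    uint64_t l2_entry = QCOW_OFLAG_COPIED | 0x50000;

    printf("host offset: %#llx\n",
           (unsigned long long)(l2_entry & L2E_OFFSET_MASK));
    printf("copied:      %d\n", !!(l2_entry & QCOW_OFLAG_COPIED));
    printf("compressed:  %d\n", !!(l2_entry & QCOW_OFLAG_COMPRESSED));
    printf("reads zero:  %d\n", !!(l2_entry & QCOW_OFLAG_ZERO));
    return 0;
}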
static inline int64_t start_of_cluster(BDRVQcowState *s, int64_t offset)
{
@@ -417,16 +301,6 @@ static inline int64_t align_offset(int64_t offset, int n)
return offset;
}
static inline int64_t qcow2_vm_state_offset(BDRVQcowState *s)
{
return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
}
static inline uint64_t qcow2_max_refcount_clusters(BDRVQcowState *s)
{
return QCOW_MAX_REFTABLE_SIZE >> s->cluster_bits;
}
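qcow2_vm_state_offset() above places the VM state area right behind the guest-visible range: one L1 entry spans 1 << (cluster_bits + l2_bits) bytes (an L2 table is one cluster of 8-byte entries), so shifting the first free L1 index by that amount yields the byte offset where VM state clusters begin. Worked numbers for 64 KiB clusters, with an assumed index value:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int cluster_bits = 16;                 /* 64 KiB clusters */
    int l2_bits = cluster_bits - 3;        /* 8-byte entries per L2 cluster */
    int64_t l1_vm_state_index = 10;        /* assumed first L1 slot past the disk */

    int64_t bytes_per_l1_entry = (int64_t)1 << (cluster_bits + l2_bits);
    int64_t vm_state_offset = l1_vm_state_index << (cluster_bits + l2_bits);

    printf("one L1 entry covers %lld MiB\n",
           (long long)(bytes_per_l1_entry >> 20));
    printf("vm state starts at guest offset %lld GiB\n",
           (long long)(vm_state_offset >> 30));
    return 0;
}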
static inline int qcow2_get_cluster_type(uint64_t l2_entry)
{
if (l2_entry & QCOW_OFLAG_COMPRESSED) {
@@ -464,26 +338,20 @@ int qcow2_backing_read1(BlockDriverState *bs, QEMUIOVector *qiov,
int64_t sector_num, int nb_sectors);
int qcow2_mark_dirty(BlockDriverState *bs);
int qcow2_mark_corrupt(BlockDriverState *bs);
int qcow2_mark_consistent(BlockDriverState *bs);
int qcow2_update_header(BlockDriverState *bs);
/* qcow2-refcount.c functions */
int qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs);
int qcow2_update_cluster_refcount(BlockDriverState *bs, int64_t cluster_index,
int addend, enum qcow2_discard_type type);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size);
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters);
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size,
enum qcow2_discard_type type);
void qcow2_free_any_clusters(BlockDriverState *bs, uint64_t l2_entry,
int nb_clusters, enum qcow2_discard_type type);
int64_t offset, int64_t size);
void qcow2_free_any_clusters(BlockDriverState *bs,
uint64_t cluster_offset, int nb_clusters);
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend);
@@ -491,17 +359,9 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);
void qcow2_process_discards(BlockDriverState *bs, int ret);
int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
/* qcow2-cluster.c functions */
int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
bool exact_size);
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index);
void qcow2_l2_cache_reset(BlockDriverState *bs);
int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
@@ -512,30 +372,22 @@ void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset);
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *host_offset, QCowL2Meta **m);
int n_start, int n_end, int *num, uint64_t *host_offset, QCowL2Meta **m);
uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size);
int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors, enum qcow2_discard_type type);
int nb_sectors);
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors);
int qcow2_expand_zero_clusters(BlockDriverState *bs);
/* qcow2-snapshot.c functions */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);
int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id);
int qcow2_snapshot_delete(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp);
int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id);
int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp);
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name);
void qcow2_free_snapshots(BlockDriverState *bs);
int qcow2_read_snapshots(BlockDriverState *bs);
@@ -550,8 +402,6 @@ int qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c,
Qcow2Cache *dependency);
void qcow2_cache_depends_on_flush(Qcow2Cache *c);
int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,


@@ -353,10 +353,10 @@ static void qed_start_need_check_timer(BDRVQEDState *s)
{
trace_qed_start_need_check_timer(s);
/* Use QEMU_CLOCK_VIRTUAL so we don't alter the image file while suspended for
/* Use vm_clock so we don't alter the image file while suspended for
* migration.
*/
timer_mod(s->need_check_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
qemu_mod_timer(s->need_check_timer, qemu_get_clock_ns(vm_clock) +
get_ticks_per_sec() * QED_NEED_CHECK_TIMEOUT);
}
@@ -364,7 +364,7 @@ static void qed_start_need_check_timer(BDRVQEDState *s)
static void qed_cancel_need_check_timer(BDRVQEDState *s)
{
trace_qed_cancel_need_check_timer(s);
timer_del(s->need_check_timer);
qemu_del_timer(s->need_check_timer);
}
static void bdrv_qed_rebind(BlockDriverState *bs)
@@ -373,8 +373,7 @@ static void bdrv_qed_rebind(BlockDriverState *bs)
s->bs = bs;
}
static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVQEDState *s = bs->opaque;
QEDHeader le_header;
@@ -391,15 +390,14 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
qed_header_le_to_cpu(&le_header, &s->header);
if (s->header.magic != QED_MAGIC) {
error_setg(errp, "Image not in QED format");
return -EINVAL;
return -EMEDIUMTYPE;
}
if (s->header.features & ~QED_FEATURE_MASK) {
/* image uses unsupported feature bits */
char buf[64];
snprintf(buf, sizeof(buf), "%" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "QED", buf);
return -ENOTSUP;
}
@@ -496,7 +494,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
}
s->need_check_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
s->need_check_timer = qemu_new_timer_ns(vm_clock,
qed_need_check_timer_cb, s);
out:
@@ -507,15 +505,6 @@ out:
return ret;
}
static int bdrv_qed_refresh_limits(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
bs->bl.write_zeroes_alignment = s->header.cluster_size >> BDRV_SECTOR_BITS;
return 0;
}
/* We have nothing to do for QED reopen, stubs just return
* success */
static int bdrv_qed_reopen_prepare(BDRVReopenState *state,
@@ -529,7 +518,7 @@ static void bdrv_qed_close(BlockDriverState *bs)
BDRVQEDState *s = bs->opaque;
qed_cancel_need_check_timer(s);
timer_free(s->need_check_timer);
qemu_free_timer(s->need_check_timer);
/* Ensure writes reach stable storage */
bdrv_flush(bs->file);
@@ -546,8 +535,7 @@ static void bdrv_qed_close(BlockDriverState *bs)
static int qed_create(const char *filename, uint32_t cluster_size,
uint64_t image_size, uint32_t table_size,
const char *backing_file, const char *backing_fmt,
Error **errp)
const char *backing_file, const char *backing_fmt)
{
QEDHeader header = {
.magic = QED_MAGIC,
@@ -562,22 +550,16 @@ static int qed_create(const char *filename, uint32_t cluster_size,
QEDHeader le_header;
uint8_t *l1_table = NULL;
size_t l1_size = header.cluster_size * header.table_size;
Error *local_err = NULL;
int ret = 0;
BlockDriverState *bs;
BlockDriverState *bs = NULL;
ret = bdrv_create_file(filename, NULL, &local_err);
ret = bdrv_create_file(filename, NULL);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
bs = NULL;
ret = bdrv_open(&bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_PROTOCOL, NULL,
&local_err);
ret = bdrv_file_open(&bs, filename, NULL, BDRV_O_RDWR | BDRV_O_CACHE_WB);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
@@ -617,12 +599,11 @@ static int qed_create(const char *filename, uint32_t cluster_size,
ret = 0; /* success */
out:
g_free(l1_table);
bdrv_unref(bs);
bdrv_delete(bs);
return ret;
}
static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options)
{
uint64_t image_size = 0;
uint32_t cluster_size = QED_DEFAULT_CLUSTER_SIZE;
@@ -650,89 +631,71 @@ static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options,
}
if (!qed_is_cluster_size_valid(cluster_size)) {
error_setg(errp, "QED cluster size must be within range [%u, %u] "
"and power of 2",
QED_MIN_CLUSTER_SIZE, QED_MAX_CLUSTER_SIZE);
fprintf(stderr, "QED cluster size must be within range [%u, %u] and power of 2\n",
QED_MIN_CLUSTER_SIZE, QED_MAX_CLUSTER_SIZE);
return -EINVAL;
}
if (!qed_is_table_size_valid(table_size)) {
error_setg(errp, "QED table size must be within range [%u, %u] "
"and power of 2",
QED_MIN_TABLE_SIZE, QED_MAX_TABLE_SIZE);
fprintf(stderr, "QED table size must be within range [%u, %u] and power of 2\n",
QED_MIN_TABLE_SIZE, QED_MAX_TABLE_SIZE);
return -EINVAL;
}
if (!qed_is_image_size_valid(image_size, cluster_size, table_size)) {
error_setg(errp, "QED image size must be a non-zero multiple of "
"cluster size and less than %" PRIu64 " bytes",
qed_max_image_size(cluster_size, table_size));
fprintf(stderr, "QED image size must be a non-zero multiple of "
"cluster size and less than %" PRIu64 " bytes\n",
qed_max_image_size(cluster_size, table_size));
return -EINVAL;
}
return qed_create(filename, cluster_size, image_size, table_size,
backing_file, backing_fmt, errp);
backing_file, backing_fmt);
}
typedef struct {
BlockDriverState *bs;
Coroutine *co;
uint64_t pos;
int64_t status;
int is_allocated;
int *pnum;
} QEDIsAllocatedCB;
static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t len)
{
QEDIsAllocatedCB *cb = opaque;
BDRVQEDState *s = cb->bs->opaque;
*cb->pnum = len / BDRV_SECTOR_SIZE;
switch (ret) {
case QED_CLUSTER_FOUND:
offset |= qed_offset_into_cluster(s, cb->pos);
cb->status = BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | offset;
break;
case QED_CLUSTER_ZERO:
cb->status = BDRV_BLOCK_ZERO;
break;
case QED_CLUSTER_L2:
case QED_CLUSTER_L1:
cb->status = 0;
break;
default:
assert(ret < 0);
cb->status = ret;
break;
}
cb->is_allocated = (ret == QED_CLUSTER_FOUND || ret == QED_CLUSTER_ZERO);
if (cb->co) {
qemu_coroutine_enter(cb->co, NULL);
}
}
static int64_t coroutine_fn bdrv_qed_co_get_block_status(BlockDriverState *bs,
static int coroutine_fn bdrv_qed_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVQEDState *s = bs->opaque;
uint64_t pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE;
size_t len = (size_t)nb_sectors * BDRV_SECTOR_SIZE;
QEDIsAllocatedCB cb = {
.bs = bs,
.pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE,
.status = BDRV_BLOCK_OFFSET_MASK,
.is_allocated = -1,
.pnum = pnum,
};
QEDRequest request = { .l2_table = NULL };
qed_find_cluster(s, &request, cb.pos, len, qed_is_allocated_cb, &cb);
qed_find_cluster(s, &request, pos, len, qed_is_allocated_cb, &cb);
/* Now sleep if the callback wasn't invoked immediately */
while (cb.status == BDRV_BLOCK_OFFSET_MASK) {
while (cb.is_allocated == -1) {
cb.co = qemu_coroutine_self();
qemu_coroutine_yield();
}
qed_unref_l2_cache_entry(request.l2_table);
return cb.status;
return cb.is_allocated;
}
static int bdrv_qed_make_empty(BlockDriverState *bs)
{
return -ENOTSUP;
}
static BDRVQEDState *acb_to_s(QEDAIOCB *acb)
@@ -1405,8 +1368,7 @@ static void coroutine_fn qed_co_write_zeroes_cb(void *opaque, int ret)
static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors,
BdrvRequestFlags flags)
int nb_sectors)
{
BlockDriverAIOCB *blockacb;
BDRVQEDState *s = bs->opaque;
@@ -1483,8 +1445,6 @@ static int bdrv_qed_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
memset(bdi, 0, sizeof(*bdi));
bdi->cluster_size = s->header.cluster_size;
bdi->is_dirty = s->header.features & QED_F_NEED_CHECK;
bdi->unallocated_blocks_are_zero = true;
bdi->can_write_zeroes_with_unmap = true;
return 0;
}
@@ -1560,31 +1520,13 @@ static int bdrv_qed_change_backing_file(BlockDriverState *bs,
return ret;
}
static void bdrv_qed_invalidate_cache(BlockDriverState *bs, Error **errp)
static void bdrv_qed_invalidate_cache(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
Error *local_err = NULL;
int ret;
bdrv_qed_close(bs);
bdrv_invalidate_cache(bs->file, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
memset(s, 0, sizeof(BDRVQEDState));
ret = bdrv_qed_open(bs, NULL, bs->open_flags, &local_err);
if (local_err) {
error_setg(errp, "Could not reopen qed layer: %s",
error_get_pretty(local_err));
error_free(local_err);
return;
} else if (ret < 0) {
error_setg_errno(errp, -ret, "Could not reopen qed layer");
return;
}
bdrv_qed_open(bs, NULL, bs->open_flags);
}
static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result,
@@ -1632,15 +1574,14 @@ static BlockDriver bdrv_qed = {
.bdrv_close = bdrv_qed_close,
.bdrv_reopen_prepare = bdrv_qed_reopen_prepare,
.bdrv_create = bdrv_qed_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_get_block_status = bdrv_qed_co_get_block_status,
.bdrv_co_is_allocated = bdrv_qed_co_is_allocated,
.bdrv_make_empty = bdrv_qed_make_empty,
.bdrv_aio_readv = bdrv_qed_aio_readv,
.bdrv_aio_writev = bdrv_qed_aio_writev,
.bdrv_co_write_zeroes = bdrv_qed_co_write_zeroes,
.bdrv_truncate = bdrv_qed_truncate,
.bdrv_getlength = bdrv_qed_getlength,
.bdrv_get_info = bdrv_qed_get_info,
.bdrv_refresh_limits = bdrv_qed_refresh_limits,
.bdrv_change_backing_file = bdrv_qed_change_backing_file,
.bdrv_invalidate_cache = bdrv_qed_invalidate_cache,
.bdrv_check = bdrv_qed_check,


@@ -100,7 +100,7 @@ typedef struct {
/* if (features & QED_F_BACKING_FILE) */
uint32_t backing_filename_offset; /* in bytes from start of header */
uint32_t backing_filename_size; /* in bytes */
} QEMU_PACKED QEDHeader;
} QEDHeader;
typedef struct {
uint64_t offsets[0]; /* in bytes */


@@ -1,877 +0,0 @@
/*
* Quorum Block filter
*
* Copyright (C) 2012-2014 Nodalink, EURL.
*
* Author:
* Benoît Canet <benoit.canet@irqsave.net>
*
* Based on the design and code of blkverify.c (Copyright (C) 2010 IBM, Corp)
* and blkmirror.c (Copyright (C) 2011 Red Hat, Inc).
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#include "block/block_int.h"
#include "qapi/qmp/qjson.h"
#define HASH_LENGTH 32
#define QUORUM_OPT_VOTE_THRESHOLD "vote-threshold"
#define QUORUM_OPT_BLKVERIFY "blkverify"
/* This union holds a vote hash value */
typedef union QuorumVoteValue {
char h[HASH_LENGTH]; /* SHA-256 hash */
int64_t l; /* simpler 64 bits hash */
} QuorumVoteValue;
/* A vote item */
typedef struct QuorumVoteItem {
int index;
QLIST_ENTRY(QuorumVoteItem) next;
} QuorumVoteItem;
/* this structure is a vote version. A version is the set of votes sharing the
* same vote value.
* The set of votes will be tracked with the items field and its cardinality is
* vote_count.
*/
typedef struct QuorumVoteVersion {
QuorumVoteValue value;
int index;
int vote_count;
QLIST_HEAD(, QuorumVoteItem) items;
QLIST_ENTRY(QuorumVoteVersion) next;
} QuorumVoteVersion;
/* this structure holds a group of vote versions together */
typedef struct QuorumVotes {
QLIST_HEAD(, QuorumVoteVersion) vote_list;
bool (*compare)(QuorumVoteValue *a, QuorumVoteValue *b);
} QuorumVotes;
/* the following structure holds the state of one quorum instance */
typedef struct BDRVQuorumState {
BlockDriverState **bs; /* children BlockDriverStates */
int num_children; /* children count */
int threshold; /* if less than threshold children reads gave the
* same result a quorum error occurs.
*/
bool is_blkverify; /* true if the driver is in blkverify mode
* Writes are mirrored on two children devices.
* On reads the two children devices' contents are
* compared and if a difference is spotted its
* location is printed and the code aborts.
* It is useful to debug other block drivers by
* comparing them with a reference one.
*/
} BDRVQuorumState;
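The voting scheme sketched in the comments above groups identical read results into versions, counts how many children produced each version, and reports a quorum error when even the most popular version has fewer than threshold votes. A compact standalone illustration of that tally, using small integers in place of the SHA-256 vote values and arrays in place of the QLIST structures:

#include <stdio.h>

#define MAX_CHILDREN 8

int main(void)
{
    /* assumed example: per-child read results (same value = same data) */
    int results[] = { 42, 42, 7 };
    int num_children = 3, threshold = 2;

    int values[MAX_CHILDREN], counts[MAX_CHILDREN], nb_versions = 0;
    int i, j, best = 0;

    /* group identical results into versions and count the votes */
    for (i = 0; i < num_children; i++) {
        for (j = 0; j < nb_versions; j++) {
            if (values[j] == results[i]) {
                counts[j]++;
                break;
            }
        }
        if (j == nb_versions) {
            values[nb_versions] = results[i];
            counts[nb_versions++] = 1;
        }
    }

    /* pick the winner and compare against the vote threshold */
    for (j = 1; j < nb_versions; j++) {
        if (counts[j] > counts[best]) {
            best = j;
        }
    }
    if (counts[best] < threshold) {
        printf("quorum error\n");
    } else {
        printf("winner %d with %d/%d votes\n", values[best], counts[best],
               num_children);
    }
    return 0;
}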
typedef struct QuorumAIOCB QuorumAIOCB;
/* Quorum will create one instance of the following structure per operation it
* performs on its children.
* So for each read/write operation coming from the upper layer there will be
* $children_count QuorumChildRequest.
*/
typedef struct QuorumChildRequest {
BlockDriverAIOCB *aiocb;
QEMUIOVector qiov;
uint8_t *buf;
int ret;
QuorumAIOCB *parent;
} QuorumChildRequest;
/* Quorum will use the following structure to track progress of each read/write
* operation received by the upper layer.
* This structure hold pointers to the QuorumChildRequest structures instances
* used to do operations on each children and track overall progress.
*/
struct QuorumAIOCB {
BlockDriverAIOCB common;
/* Request metadata */
uint64_t sector_num;
int nb_sectors;
QEMUIOVector *qiov; /* calling IOV */
QuorumChildRequest *qcrs; /* individual child requests */
int count; /* number of completed AIOCB */
int success_count; /* number of successfully completed AIOCB */
QuorumVotes votes;
bool is_read;
int vote_ret;
};
static void quorum_vote(QuorumAIOCB *acb);
static void quorum_aio_cancel(BlockDriverAIOCB *blockacb)
{
QuorumAIOCB *acb = container_of(blockacb, QuorumAIOCB, common);
BDRVQuorumState *s = acb->common.bs->opaque;
int i;
/* cancel all callbacks */
for (i = 0; i < s->num_children; i++) {
bdrv_aio_cancel(acb->qcrs[i].aiocb);
}
g_free(acb->qcrs);
qemu_aio_release(acb);
}
static AIOCBInfo quorum_aiocb_info = {
.aiocb_size = sizeof(QuorumAIOCB),
.cancel = quorum_aio_cancel,
};
static void quorum_aio_finalize(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
int i, ret = 0;
if (acb->vote_ret) {
ret = acb->vote_ret;
}
acb->common.cb(acb->common.opaque, ret);
if (acb->is_read) {
for (i = 0; i < s->num_children; i++) {
qemu_vfree(acb->qcrs[i].buf);
qemu_iovec_destroy(&acb->qcrs[i].qiov);
}
}
g_free(acb->qcrs);
qemu_aio_release(acb);
}
static bool quorum_sha256_compare(QuorumVoteValue *a, QuorumVoteValue *b)
{
return !memcmp(a->h, b->h, HASH_LENGTH);
}
static bool quorum_64bits_compare(QuorumVoteValue *a, QuorumVoteValue *b)
{
return a->l == b->l;
}
static QuorumAIOCB *quorum_aio_get(BDRVQuorumState *s,
BlockDriverState *bs,
QEMUIOVector *qiov,
uint64_t sector_num,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
QuorumAIOCB *acb = qemu_aio_get(&quorum_aiocb_info, bs, cb, opaque);
int i;
acb->common.bs->opaque = s;
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->qiov = qiov;
acb->qcrs = g_new0(QuorumChildRequest, s->num_children);
acb->count = 0;
acb->success_count = 0;
acb->votes.compare = quorum_sha256_compare;
QLIST_INIT(&acb->votes.vote_list);
acb->is_read = false;
acb->vote_ret = 0;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].buf = NULL;
acb->qcrs[i].ret = 0;
acb->qcrs[i].parent = acb;
}
return acb;
}
static void quorum_report_bad(QuorumAIOCB *acb, char *node_name, int ret)
{
QObject *data;
assert(node_name);
data = qobject_from_jsonf("{ 'node-name': %s"
", 'sector-num': %" PRId64
", 'sectors-count': %d }",
node_name, acb->sector_num, acb->nb_sectors);
if (ret < 0) {
QDict *dict = qobject_to_qdict(data);
qdict_put(dict, "error", qstring_from_str(strerror(-ret)));
}
monitor_protocol_event(QEVENT_QUORUM_REPORT_BAD, data);
qobject_decref(data);
}
static void quorum_report_failure(QuorumAIOCB *acb)
{
QObject *data;
const char *reference = acb->common.bs->device_name[0] ?
acb->common.bs->device_name :
acb->common.bs->node_name;
data = qobject_from_jsonf("{ 'reference': %s"
", 'sector-num': %" PRId64
", 'sectors-count': %d }",
reference, acb->sector_num, acb->nb_sectors);
monitor_protocol_event(QEVENT_QUORUM_FAILURE, data);
qobject_decref(data);
}
static int quorum_vote_error(QuorumAIOCB *acb);
static bool quorum_has_too_much_io_failed(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
if (acb->success_count < s->threshold) {
acb->vote_ret = quorum_vote_error(acb);
quorum_report_failure(acb);
return true;
}
return false;
}
static void quorum_aio_cb(void *opaque, int ret)
{
QuorumChildRequest *sacb = opaque;
QuorumAIOCB *acb = sacb->parent;
BDRVQuorumState *s = acb->common.bs->opaque;
sacb->ret = ret;
acb->count++;
if (ret == 0) {
acb->success_count++;
} else {
quorum_report_bad(acb, sacb->aiocb->bs->node_name, ret);
}
assert(acb->count <= s->num_children);
assert(acb->success_count <= s->num_children);
if (acb->count < s->num_children) {
return;
}
/* Do the vote on read */
if (acb->is_read) {
quorum_vote(acb);
} else {
quorum_has_too_much_io_failed(acb);
}
quorum_aio_finalize(acb);
}
static void quorum_report_bad_versions(BDRVQuorumState *s,
QuorumAIOCB *acb,
QuorumVoteValue *value)
{
QuorumVoteVersion *version;
QuorumVoteItem *item;
QLIST_FOREACH(version, &acb->votes.vote_list, next) {
if (acb->votes.compare(&version->value, value)) {
continue;
}
QLIST_FOREACH(item, &version->items, next) {
quorum_report_bad(acb, s->bs[item->index]->node_name, 0);
}
}
}
static void quorum_copy_qiov(QEMUIOVector *dest, QEMUIOVector *source)
{
int i;
assert(dest->niov == source->niov);
assert(dest->size == source->size);
for (i = 0; i < source->niov; i++) {
assert(dest->iov[i].iov_len == source->iov[i].iov_len);
memcpy(dest->iov[i].iov_base,
source->iov[i].iov_base,
source->iov[i].iov_len);
}
}
static void quorum_count_vote(QuorumVotes *votes,
QuorumVoteValue *value,
int index)
{
QuorumVoteVersion *v = NULL, *version = NULL;
QuorumVoteItem *item;
/* look if we have something with this hash */
QLIST_FOREACH(v, &votes->vote_list, next) {
if (votes->compare(&v->value, value)) {
version = v;
break;
}
}
/* It's a version not yet in the list add it */
if (!version) {
version = g_new0(QuorumVoteVersion, 1);
QLIST_INIT(&version->items);
memcpy(&version->value, value, sizeof(version->value));
version->index = index;
version->vote_count = 0;
QLIST_INSERT_HEAD(&votes->vote_list, version, next);
}
version->vote_count++;
item = g_new0(QuorumVoteItem, 1);
item->index = index;
QLIST_INSERT_HEAD(&version->items, item, next);
}
static void quorum_free_vote_list(QuorumVotes *votes)
{
QuorumVoteVersion *version, *next_version;
QuorumVoteItem *item, *next_item;
QLIST_FOREACH_SAFE(version, &votes->vote_list, next, next_version) {
QLIST_REMOVE(version, next);
QLIST_FOREACH_SAFE(item, &version->items, next, next_item) {
QLIST_REMOVE(item, next);
g_free(item);
}
g_free(version);
}
}
static int quorum_compute_hash(QuorumAIOCB *acb, int i, QuorumVoteValue *hash)
{
int j, ret;
gnutls_hash_hd_t dig;
QEMUIOVector *qiov = &acb->qcrs[i].qiov;
ret = gnutls_hash_init(&dig, GNUTLS_DIG_SHA256);
if (ret < 0) {
return ret;
}
for (j = 0; j < qiov->niov; j++) {
ret = gnutls_hash(dig, qiov->iov[j].iov_base, qiov->iov[j].iov_len);
if (ret < 0) {
break;
}
}
gnutls_hash_deinit(dig, (void *) hash);
return ret;
}
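quorum_compute_hash() above feeds every iovec element of one child's read into a single GnuTLS SHA-256 context, so scattered buffers containing the same bytes yield the same 32-byte vote value. A minimal standalone use of the same three GnuTLS calls, hashing two fixed buffers as if they were iovec elements (build with -lgnutls):

#include <stdio.h>
#include <string.h>
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>

int main(void)
{
    const char *part1 = "hello ", *part2 = "world";
    unsigned char hash[32];   /* SHA-256 output */
    gnutls_hash_hd_t dig;
    int i, ret;

    ret = gnutls_hash_init(&dig, GNUTLS_DIG_SHA256);
    if (ret < 0) {
        return 1;
    }
    /* hash the buffers in order, as if walking qiov->iov[] */
    gnutls_hash(dig, part1, strlen(part1));
    gnutls_hash(dig, part2, strlen(part2));
    gnutls_hash_deinit(dig, hash);

    for (i = 0; i < 32; i++) {
        printf("%02x", hash[i]);
    }
    printf("\n");
    return 0;
}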
static QuorumVoteVersion *quorum_get_vote_winner(QuorumVotes *votes)
{
int max = 0;
QuorumVoteVersion *candidate, *winner = NULL;
QLIST_FOREACH(candidate, &votes->vote_list, next) {
if (candidate->vote_count > max) {
max = candidate->vote_count;
winner = candidate;
}
}
return winner;
}
/* qemu_iovec_compare is handy for blkverify mode because it returns the first
* differing byte location. Yet it is handcoded to compare vectors one byte
* after another so it does not benefit from the libc SIMD optimizations.
* quorum_iovec_compare is written for speed and should be used in the non
* blkverify mode of quorum.
*/
static bool quorum_iovec_compare(QEMUIOVector *a, QEMUIOVector *b)
{
int i;
int result;
assert(a->niov == b->niov);
for (i = 0; i < a->niov; i++) {
assert(a->iov[i].iov_len == b->iov[i].iov_len);
result = memcmp(a->iov[i].iov_base,
b->iov[i].iov_base,
a->iov[i].iov_len);
if (result) {
return false;
}
}
return true;
}
static void GCC_FMT_ATTR(2, 3) quorum_err(QuorumAIOCB *acb,
const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
fprintf(stderr, "quorum: sector_num=%" PRId64 " nb_sectors=%d ",
acb->sector_num, acb->nb_sectors);
vfprintf(stderr, fmt, ap);
fprintf(stderr, "\n");
va_end(ap);
exit(1);
}
static bool quorum_compare(QuorumAIOCB *acb,
QEMUIOVector *a,
QEMUIOVector *b)
{
BDRVQuorumState *s = acb->common.bs->opaque;
ssize_t offset;
/* This driver will replace blkverify in this particular case */
if (s->is_blkverify) {
offset = qemu_iovec_compare(a, b);
if (offset != -1) {
quorum_err(acb, "contents mismatch in sector %" PRId64,
acb->sector_num +
(uint64_t)(offset / BDRV_SECTOR_SIZE));
}
return true;
}
return quorum_iovec_compare(a, b);
}
/* Do a vote to get the error code */
static int quorum_vote_error(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
QuorumVoteVersion *winner = NULL;
QuorumVotes error_votes;
QuorumVoteValue result_value;
int i, ret = 0;
bool error = false;
QLIST_INIT(&error_votes.vote_list);
error_votes.compare = quorum_64bits_compare;
for (i = 0; i < s->num_children; i++) {
ret = acb->qcrs[i].ret;
if (ret) {
error = true;
result_value.l = ret;
quorum_count_vote(&error_votes, &result_value, i);
}
}
if (error) {
winner = quorum_get_vote_winner(&error_votes);
ret = winner->value.l;
}
quorum_free_vote_list(&error_votes);
return ret;
}
static void quorum_vote(QuorumAIOCB *acb)
{
bool quorum = true;
int i, j, ret;
QuorumVoteValue hash;
BDRVQuorumState *s = acb->common.bs->opaque;
QuorumVoteVersion *winner;
if (quorum_has_too_much_io_failed(acb)) {
return;
}
/* get the index of the first successful read */
for (i = 0; i < s->num_children; i++) {
if (!acb->qcrs[i].ret) {
break;
}
}
assert(i < s->num_children);
/* compare this read with all other successful reads stopping at quorum
* failure
*/
for (j = i + 1; j < s->num_children; j++) {
if (acb->qcrs[j].ret) {
continue;
}
quorum = quorum_compare(acb, &acb->qcrs[i].qiov, &acb->qcrs[j].qiov);
if (!quorum) {
break;
}
}
/* Every successful read agrees */
if (quorum) {
quorum_copy_qiov(acb->qiov, &acb->qcrs[i].qiov);
return;
}
/* compute hashes for each successful read, also store indexes */
for (i = 0; i < s->num_children; i++) {
if (acb->qcrs[i].ret) {
continue;
}
ret = quorum_compute_hash(acb, i, &hash);
/* if ever the hash computation failed */
if (ret < 0) {
acb->vote_ret = ret;
goto free_exit;
}
quorum_count_vote(&acb->votes, &hash, i);
}
/* vote to select the most represented version */
winner = quorum_get_vote_winner(&acb->votes);
/* if the winner count is smaller than threshold the read fails */
if (winner->vote_count < s->threshold) {
quorum_report_failure(acb);
acb->vote_ret = -EIO;
goto free_exit;
}
/* we have a winner: copy it */
quorum_copy_qiov(acb->qiov, &acb->qcrs[winner->index].qiov);
/* some versions are bad print them */
quorum_report_bad_versions(s, acb, &winner->value);
free_exit:
/* free lists */
quorum_free_vote_list(&acb->votes);
}
static BlockDriverAIOCB *quorum_aio_readv(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
BDRVQuorumState *s = bs->opaque;
QuorumAIOCB *acb = quorum_aio_get(s, bs, qiov, sector_num,
nb_sectors, cb, opaque);
int i;
acb->is_read = true;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].buf = qemu_blockalign(s->bs[i], qiov->size);
qemu_iovec_init(&acb->qcrs[i].qiov, qiov->niov);
qemu_iovec_clone(&acb->qcrs[i].qiov, qiov, acb->qcrs[i].buf);
}
for (i = 0; i < s->num_children; i++) {
bdrv_aio_readv(s->bs[i], sector_num, &acb->qcrs[i].qiov, nb_sectors,
quorum_aio_cb, &acb->qcrs[i]);
}
return &acb->common;
}
static BlockDriverAIOCB *quorum_aio_writev(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
BDRVQuorumState *s = bs->opaque;
QuorumAIOCB *acb = quorum_aio_get(s, bs, qiov, sector_num, nb_sectors,
cb, opaque);
int i;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].aiocb = bdrv_aio_writev(s->bs[i], sector_num, qiov,
nb_sectors, &quorum_aio_cb,
&acb->qcrs[i]);
}
return &acb->common;
}
static int64_t quorum_getlength(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
int64_t result;
int i;
/* check that all file have the same length */
result = bdrv_getlength(s->bs[0]);
if (result < 0) {
return result;
}
for (i = 1; i < s->num_children; i++) {
int64_t value = bdrv_getlength(s->bs[i]);
if (value < 0) {
return value;
}
if (value != result) {
return -EIO;
}
}
return result;
}
static void quorum_invalidate_cache(BlockDriverState *bs, Error **errp)
{
BDRVQuorumState *s = bs->opaque;
Error *local_err = NULL;
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_invalidate_cache(s->bs[i], &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
}
static coroutine_fn int quorum_co_flush(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
QuorumVoteVersion *winner = NULL;
QuorumVotes error_votes;
QuorumVoteValue result_value;
int i;
int result = 0;
QLIST_INIT(&error_votes.vote_list);
error_votes.compare = quorum_64bits_compare;
for (i = 0; i < s->num_children; i++) {
result = bdrv_co_flush(s->bs[i]);
result_value.l = result;
quorum_count_vote(&error_votes, &result_value, i);
}
winner = quorum_get_vote_winner(&error_votes);
result = winner->value.l;
quorum_free_vote_list(&error_votes);
return result;
}
static bool quorum_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
bool perm = bdrv_recurse_is_first_non_filter(s->bs[i],
candidate);
if (perm) {
return true;
}
}
return false;
}
static int quorum_valid_threshold(int threshold, int num_children, Error **errp)
{
if (threshold < 1) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"vote-threshold", "value >= 1");
return -ERANGE;
}
if (threshold > num_children) {
error_setg(errp, "threshold may not exceed children count");
return -ERANGE;
}
return 0;
}
static QemuOptsList quorum_runtime_opts = {
.name = "quorum",
.head = QTAILQ_HEAD_INITIALIZER(quorum_runtime_opts.head),
.desc = {
{
.name = QUORUM_OPT_VOTE_THRESHOLD,
.type = QEMU_OPT_NUMBER,
.help = "The number of vote needed for reaching quorum",
},
{
.name = QUORUM_OPT_BLKVERIFY,
.type = QEMU_OPT_BOOL,
.help = "Trigger block verify mode if set",
},
{ /* end of list */ }
},
};
static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVQuorumState *s = bs->opaque;
Error *local_err = NULL;
QemuOpts *opts;
bool *opened;
QDict *sub = NULL;
QList *list = NULL;
const QListEntry *lentry;
int i;
int ret = 0;
qdict_flatten(options);
qdict_extract_subqdict(options, &sub, "children.");
qdict_array_split(sub, &list);
if (qdict_size(sub)) {
error_setg(&local_err, "Invalid option children.%s",
qdict_first(sub)->key);
ret = -EINVAL;
goto exit;
}
/* count how many different children are present */
s->num_children = qlist_size(list);
if (s->num_children < 2) {
error_setg(&local_err,
"Number of provided children must be greater than 1");
ret = -EINVAL;
goto exit;
}
opts = qemu_opts_create(&quorum_runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
ret = -EINVAL;
goto exit;
}
s->threshold = qemu_opt_get_number(opts, QUORUM_OPT_VOTE_THRESHOLD, 0);
/* and validate it against s->num_children */
ret = quorum_valid_threshold(s->threshold, s->num_children, &local_err);
if (ret < 0) {
goto exit;
}
/* is the driver in blkverify mode */
if (qemu_opt_get_bool(opts, QUORUM_OPT_BLKVERIFY, false) &&
s->num_children == 2 && s->threshold == 2) {
s->is_blkverify = true;
} else if (qemu_opt_get_bool(opts, QUORUM_OPT_BLKVERIFY, false)) {
fprintf(stderr, "blkverify mode is set by setting blkverify=on "
"and using two files with vote_threshold=2\n");
}
/* allocate the children BlockDriverState array */
s->bs = g_new0(BlockDriverState *, s->num_children);
opened = g_new0(bool, s->num_children);
for (i = 0, lentry = qlist_first(list); lentry;
lentry = qlist_next(lentry), i++) {
QDict *d;
QString *string;
switch (qobject_type(lentry->value))
{
/* List of options */
case QTYPE_QDICT:
d = qobject_to_qdict(lentry->value);
QINCREF(d);
ret = bdrv_open(&s->bs[i], NULL, NULL, d, flags, NULL,
&local_err);
break;
/* QMP reference */
case QTYPE_QSTRING:
string = qobject_to_qstring(lentry->value);
ret = bdrv_open(&s->bs[i], NULL, qstring_get_str(string), NULL,
flags, NULL, &local_err);
break;
default:
error_setg(&local_err, "Specification of child block device %i "
"is invalid", i);
ret = -EINVAL;
}
if (ret < 0) {
goto close_exit;
}
opened[i] = true;
}
g_free(opened);
goto exit;
close_exit:
/* cleanup on error */
for (i = 0; i < s->num_children; i++) {
if (!opened[i]) {
continue;
}
bdrv_unref(s->bs[i]);
}
g_free(s->bs);
g_free(opened);
exit:
/* propagate error */
if (local_err) {
error_propagate(errp, local_err);
}
QDECREF(list);
QDECREF(sub);
return ret;
}
static void quorum_close(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_unref(s->bs[i]);
}
g_free(s->bs);
}
static BlockDriver bdrv_quorum = {
.format_name = "quorum",
.protocol_name = "quorum",
.instance_size = sizeof(BDRVQuorumState),
.bdrv_file_open = quorum_open,
.bdrv_close = quorum_close,
.bdrv_co_flush_to_disk = quorum_co_flush,
.bdrv_getlength = quorum_getlength,
.bdrv_aio_readv = quorum_aio_readv,
.bdrv_aio_writev = quorum_aio_writev,
.bdrv_invalidate_cache = quorum_invalidate_cache,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = quorum_recurse_is_first_non_filter,
};
static void bdrv_quorum_init(void)
{
bdrv_register(&bdrv_quorum);
}
block_init(bdrv_quorum_init);
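
The quorum driver listed above reads every child, and when the replies disagree it hashes each successful reply and keeps the most represented value, provided that value collects at least vote-threshold votes. Below is a minimal, self-contained sketch of that vote-counting idea only; it is not the QEMU code — the plain arrays, string comparison and hard-coded replies are stand-ins for QuorumVotes, quorum_count_vote() and the per-child QEMUIOVectors.

/* Toy majority vote over child replies, mirroring quorum_count_vote() /
 * quorum_get_vote_winner() in spirit only. */
#include <stdio.h>
#include <string.h>

#define MAX_CHILDREN 8

int main(void)
{
    const char *reply[MAX_CHILDREN] = { "AAAA", "AAAA", "BBBB" };
    int num_children = 3, threshold = 2;
    int votes[MAX_CHILDREN] = { 0 };
    int i, j, winner = 0, best = 0;

    /* one vote per child, credited to the first child holding the same value */
    for (i = 0; i < num_children; i++) {
        for (j = 0; j <= i; j++) {
            if (strcmp(reply[i], reply[j]) == 0) {
                votes[j]++;
                break;
            }
        }
    }
    /* pick the most represented value */
    for (i = 0; i < num_children; i++) {
        if (votes[i] > best) {
            best = votes[i];
            winner = i;
        }
    }
    if (best >= threshold) {
        printf("quorum reached: '%s' with %d vote(s)\n", reply[winner], best);
    } else {
        printf("quorum failed: best candidate has only %d vote(s)\n", best);
    }
    return 0;
}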


@@ -21,10 +21,9 @@
#define QEMU_AIO_IOCTL 0x0004
#define QEMU_AIO_FLUSH 0x0008
#define QEMU_AIO_DISCARD 0x0010
#define QEMU_AIO_WRITE_ZEROES 0x0020
#define QEMU_AIO_TYPE_MASK \
(QEMU_AIO_READ|QEMU_AIO_WRITE|QEMU_AIO_IOCTL|QEMU_AIO_FLUSH| \
QEMU_AIO_DISCARD|QEMU_AIO_WRITE_ZEROES)
QEMU_AIO_DISCARD)
/* AIO flags */
#define QEMU_AIO_MISALIGNED 0x1000

File diff suppressed because it is too large.


@@ -85,7 +85,6 @@ static size_t handle_aiocb_rw(RawWin32AIOData *aiocb)
ret_count = 0;
}
if (ret_count != len) {
offset += ret_count;
break;
}
offset += len;
@@ -202,35 +201,6 @@ static int set_sparse(int fd)
NULL, 0, NULL, 0, &returned, NULL);
}
static void raw_probe_alignment(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
DWORD sectorsPerCluster, freeClusters, totalClusters, count;
DISK_GEOMETRY_EX dg;
BOOL status;
if (s->type == FTYPE_CD) {
bs->request_alignment = 2048;
return;
}
if (s->type == FTYPE_HARDDISK) {
status = DeviceIoControl(s->hfile, IOCTL_DISK_GET_DRIVE_GEOMETRY_EX,
NULL, 0, &dg, sizeof(dg), &count, NULL);
if (status != 0) {
bs->request_alignment = dg.Geometry.BytesPerSector;
return;
}
/* try GetDiskFreeSpace too */
}
if (s->drive_path[0]) {
GetDiskFreeSpace(s->drive_path, &sectorsPerCluster,
&dg.Geometry.BytesPerSector,
&freeClusters, &totalClusters);
bs->request_alignment = dg.Geometry.BytesPerSector;
}
}
static void raw_parse_flags(int flags, int *access_flags, DWORD *overlapped)
{
assert(access_flags != NULL);
@@ -251,17 +221,6 @@ static void raw_parse_flags(int flags, int *access_flags, DWORD *overlapped)
}
}
static void raw_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The filename does not have to be prefixed by the protocol name, since
* "file" is the default protocol; therefore, the return value of this
* function call can be ignored. */
strstart(filename, "file:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static QemuOptsList raw_runtime_opts = {
.name = "raw",
.head = QTAILQ_HEAD_INITIALIZER(raw_runtime_opts.head),
@@ -275,8 +234,7 @@ static QemuOptsList raw_runtime_opts = {
},
};
static int raw_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int raw_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVRawState *s = bs->opaque;
int access_flags;
@@ -288,10 +246,11 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
s->type = FTYPE_FILE;
opts = qemu_opts_create(&raw_runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&raw_runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto fail;
}
@@ -303,23 +262,11 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
if ((flags & BDRV_O_NATIVE_AIO) && aio == NULL) {
aio = win32_aio_init();
if (aio == NULL) {
error_setg(errp, "Could not initialize AIO");
ret = -EINVAL;
goto fail;
}
}
if (filename[0] && filename[1] == ':') {
snprintf(s->drive_path, sizeof(s->drive_path), "%c:\\", filename[0]);
} else if (filename[0] == '\\' && filename[1] == '\\') {
s->drive_path[0] = 0;
} else {
/* Relative path. */
char buf[MAX_PATH];
GetCurrentDirectory(MAX_PATH, buf);
snprintf(s->drive_path, sizeof(s->drive_path), "%c:\\", buf[0]);
}
s->hfile = CreateFile(filename, access_flags,
FILE_SHARE_READ, NULL,
OPEN_EXISTING, overlapped, NULL);
@@ -338,13 +285,11 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
ret = win32_aio_attach(aio, s->hfile);
if (ret < 0) {
CloseHandle(s->hfile);
error_setg_errno(errp, -ret, "Could not enable AIO");
goto fail;
}
s->aio = aio;
}
raw_probe_alignment(bs);
ret = 0;
fail:
qemu_opts_del(opts);
@@ -390,9 +335,6 @@ static void raw_close(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
CloseHandle(s->hfile);
if (bs->open_flags & BDRV_O_TEMPORARY) {
unlink(bs->filename);
}
}
static int raw_truncate(BlockDriverState *bs, int64_t offset)
@@ -478,14 +420,11 @@ static int64_t raw_get_allocated_file_size(BlockDriverState *bs)
return st.st_size;
}
static int raw_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int raw_create(const char *filename, QEMUOptionParameter *options)
{
int fd;
int64_t total_size = 0;
strstart(filename, "file:", &filename);
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
@@ -496,10 +435,8 @@ static int raw_create(const char *filename, QEMUOptionParameter *options,
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (fd < 0) {
error_setg_errno(errp, errno, "Could not create file");
if (fd < 0)
return -EIO;
}
set_sparse(fd);
ftruncate(fd, total_size * 512);
qemu_close(fd);
@@ -519,12 +456,9 @@ static BlockDriver bdrv_file = {
.format_name = "file",
.protocol_name = "file",
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_parse_filename = raw_parse_filename,
.bdrv_file_open = raw_open,
.bdrv_close = raw_close,
.bdrv_create = raw_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
@@ -596,44 +530,17 @@ static int hdev_probe_device(const char *filename)
return 0;
}
static void hdev_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The prefix is optional, just as for "file". */
strstart(filename, "host_device:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int hdev_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVRawState *s = bs->opaque;
int access_flags, create_flags;
int ret = 0;
DWORD overlapped;
char device_name[64];
Error *local_err = NULL;
const char *filename;
QemuOpts *opts = qemu_opts_create(&raw_runtime_opts, NULL, 0,
&error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto done;
}
filename = qemu_opt_get(opts, "filename");
const char *filename = qdict_get_str(options, "filename");
if (strstart(filename, "/dev/cdrom", NULL)) {
if (find_cdrom(device_name, sizeof(device_name)) < 0) {
error_setg(errp, "Could not open CD-ROM drive");
ret = -ENOENT;
goto done;
}
if (find_cdrom(device_name, sizeof(device_name)) < 0)
return -ENOENT;
filename = device_name;
} else {
/* transform drive letters into device name */
@@ -656,37 +563,32 @@ static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
if (s->hfile == INVALID_HANDLE_VALUE) {
int err = GetLastError();
if (err == ERROR_ACCESS_DENIED) {
ret = -EACCES;
} else {
ret = -EINVAL;
}
error_setg_errno(errp, -ret, "Could not open device");
goto done;
if (err == ERROR_ACCESS_DENIED)
return -EACCES;
return -1;
}
return 0;
}
done:
qemu_opts_del(opts);
return ret;
static int hdev_has_zero_init(BlockDriverState *bs)
{
return 0;
}
static BlockDriver bdrv_host_device = {
.format_name = "host_device",
.protocol_name = "host_device",
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_parse_filename = hdev_parse_filename,
.bdrv_probe_device = hdev_probe_device,
.bdrv_file_open = hdev_open,
.bdrv_close = raw_close,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_getlength = raw_getlength,
.has_variable_length = true,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
};

block/raw.c (new file, +155 lines)

@@ -0,0 +1,155 @@
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
static int raw_open(BlockDriverState *bs, QDict *options, int flags)
{
bs->sg = bs->file->sg;
return 0;
}
/* We have nothing to do for raw reopen, stubs just return
* success */
static int raw_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
}
static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
}
static int coroutine_fn raw_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
}
static void raw_close(BlockDriverState *bs)
{
}
static int coroutine_fn raw_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
{
return bdrv_co_is_allocated(bs->file, sector_num, nb_sectors, pnum);
}
static int64_t raw_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
}
static int raw_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file, offset);
}
static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
{
return 1; /* everything can be opened as raw image */
}
static int coroutine_fn raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
return bdrv_co_discard(bs->file, sector_num, nb_sectors);
}
static int raw_is_inserted(BlockDriverState *bs)
{
return bdrv_is_inserted(bs->file);
}
static int raw_media_changed(BlockDriverState *bs)
{
return bdrv_media_changed(bs->file);
}
static void raw_eject(BlockDriverState *bs, bool eject_flag)
{
bdrv_eject(bs->file, eject_flag);
}
static void raw_lock_medium(BlockDriverState *bs, bool locked)
{
bdrv_lock_medium(bs->file, locked);
}
static int raw_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
{
return bdrv_ioctl(bs->file, req, buf);
}
static BlockDriverAIOCB *raw_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_ioctl(bs->file, req, buf, cb, opaque);
}
static int raw_create(const char *filename, QEMUOptionParameter *options)
{
return bdrv_create_file(filename, options);
}
static QEMUOptionParameter raw_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{ NULL }
};
static int raw_has_zero_init(BlockDriverState *bs)
{
return bdrv_has_zero_init(bs->file);
}
static BlockDriver bdrv_raw = {
.format_name = "raw",
/* It's really 0, but we need to make g_malloc() happy */
.instance_size = 1,
.bdrv_open = raw_open,
.bdrv_close = raw_close,
.bdrv_reopen_prepare = raw_reopen_prepare,
.bdrv_co_readv = raw_co_readv,
.bdrv_co_writev = raw_co_writev,
.bdrv_co_is_allocated = raw_co_is_allocated,
.bdrv_co_discard = raw_co_discard,
.bdrv_probe = raw_probe,
.bdrv_getlength = raw_getlength,
.bdrv_truncate = raw_truncate,
.bdrv_is_inserted = raw_is_inserted,
.bdrv_media_changed = raw_media_changed,
.bdrv_eject = raw_eject,
.bdrv_lock_medium = raw_lock_medium,
.bdrv_ioctl = raw_ioctl,
.bdrv_aio_ioctl = raw_aio_ioctl,
.bdrv_create = raw_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = raw_has_zero_init,
};
static void bdrv_raw_init(void)
{
bdrv_register(&bdrv_raw);
}
block_init(bdrv_raw_init);
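
The new block/raw.c above is a pure pass-through format driver: every callback simply delegates to bs->file. A minimal sketch of that delegation pattern outside QEMU follows; the Backend and Filter types and their functions are invented for illustration and are not QEMU APIs.

/* A "filter" that adds no behaviour and forwards every call to the object
 * it wraps, analogous to the raw driver delegating to bs->file. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct Backend {
    int64_t length;
} Backend;

typedef struct Filter {
    Backend *file;                      /* the wrapped backend, like bs->file */
} Filter;

static int64_t backend_getlength(Backend *b)
{
    return b->length;
}

static int backend_read(Backend *b, int64_t offset, void *buf, size_t len)
{
    (void)b; (void)offset; (void)buf; (void)len;
    return 0;                           /* pretend the read succeeded */
}

static int64_t filter_getlength(Filter *f)
{
    return backend_getlength(f->file);  /* forward, nothing else */
}

static int filter_read(Filter *f, int64_t offset, void *buf, size_t len)
{
    return backend_read(f->file, offset, buf, len);
}

int main(void)
{
    Backend disk = { .length = 1 << 20 };
    Filter raw = { .file = &disk };
    char buf[512];

    printf("length via filter: %lld\n", (long long)filter_getlength(&raw));
    printf("read rc: %d\n", filter_read(&raw, 0, buf, sizeof(buf)));
    return 0;
}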


@@ -1,206 +0,0 @@
/* BlockDriver implementation for "raw"
*
* Copyright (C) 2010, 2013, Red Hat, Inc.
* Copyright (C) 2010, Blue Swirl <blauwirbel@gmail.com>
* Copyright (C) 2009, Anthony Liguori <aliguori@us.ibm.com>
*
* Author:
* Laszlo Ersek <lersek@redhat.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include "block/block_int.h"
#include "qemu/option.h"
static QEMUOptionParameter raw_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{ 0 }
};
static int raw_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
}
static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
}
static int coroutine_fn raw_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
}
static int64_t coroutine_fn raw_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
{
*pnum = nb_sectors;
return BDRV_BLOCK_RAW | BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_DATA |
(sector_num << BDRV_SECTOR_BITS);
}
static int coroutine_fn raw_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BdrvRequestFlags flags)
{
return bdrv_co_write_zeroes(bs->file, sector_num, nb_sectors, flags);
}
static int coroutine_fn raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
return bdrv_co_discard(bs->file, sector_num, nb_sectors);
}
static int64_t raw_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
}
static int raw_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
return bdrv_get_info(bs->file, bdi);
}
static int raw_refresh_limits(BlockDriverState *bs)
{
bs->bl = bs->file->bl;
return 0;
}
static int raw_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file, offset);
}
static int raw_is_inserted(BlockDriverState *bs)
{
return bdrv_is_inserted(bs->file);
}
static int raw_media_changed(BlockDriverState *bs)
{
return bdrv_media_changed(bs->file);
}
static void raw_eject(BlockDriverState *bs, bool eject_flag)
{
bdrv_eject(bs->file, eject_flag);
}
static void raw_lock_medium(BlockDriverState *bs, bool locked)
{
bdrv_lock_medium(bs->file, locked);
}
static int raw_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
{
return bdrv_ioctl(bs->file, req, buf);
}
static BlockDriverAIOCB *raw_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return bdrv_aio_ioctl(bs->file, req, buf, cb, opaque);
}
static int raw_has_zero_init(BlockDriverState *bs)
{
return bdrv_has_zero_init(bs->file);
}
static int raw_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
Error *local_err = NULL;
int ret;
ret = bdrv_create_file(filename, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
}
return ret;
}
static int raw_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
bs->sg = bs->file->sg;
return 0;
}
static void raw_close(BlockDriverState *bs)
{
}
static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
{
/* smallest possible positive score so that raw is used if and only if no
* other block driver works
*/
return 1;
}
static BlockDriver bdrv_raw = {
.format_name = "raw",
.bdrv_probe = &raw_probe,
.bdrv_reopen_prepare = &raw_reopen_prepare,
.bdrv_open = &raw_open,
.bdrv_close = &raw_close,
.bdrv_create = &raw_create,
.bdrv_co_readv = &raw_co_readv,
.bdrv_co_writev = &raw_co_writev,
.bdrv_co_write_zeroes = &raw_co_write_zeroes,
.bdrv_co_discard = &raw_co_discard,
.bdrv_co_get_block_status = &raw_co_get_block_status,
.bdrv_truncate = &raw_truncate,
.bdrv_getlength = &raw_getlength,
.has_variable_length = true,
.bdrv_get_info = &raw_get_info,
.bdrv_refresh_limits = &raw_refresh_limits,
.bdrv_is_inserted = &raw_is_inserted,
.bdrv_media_changed = &raw_media_changed,
.bdrv_eject = &raw_eject,
.bdrv_lock_medium = &raw_lock_medium,
.bdrv_ioctl = &raw_ioctl,
.bdrv_aio_ioctl = &raw_aio_ioctl,
.create_options = &raw_create_options[0],
.bdrv_has_zero_init = &raw_has_zero_init
};
static void bdrv_raw_init(void)
{
bdrv_register(&bdrv_raw);
}
block_init(bdrv_raw_init);


@@ -95,17 +95,23 @@ typedef struct RADOSCB {
#define RBD_FD_WRITE 1
typedef struct BDRVRBDState {
int fds[2];
rados_t cluster;
rados_ioctx_t io_ctx;
rbd_image_t image;
char name[RBD_MAX_IMAGE_NAME_SIZE];
int qemu_aio_count;
char *snap;
int event_reader_pos;
RADOSCB *event_rcb;
} BDRVRBDState;
static void rbd_aio_bh_cb(void *opaque);
static int qemu_rbd_next_tok(char *dst, int dst_len,
char *src, char delim,
const char *name,
char **p, Error **errp)
char **p)
{
int l;
char *end;
@@ -128,10 +134,10 @@ static int qemu_rbd_next_tok(char *dst, int dst_len,
}
l = strlen(src);
if (l >= dst_len) {
error_setg(errp, "%s too long", name);
error_report("%s too long", name);
return -EINVAL;
} else if (l == 0) {
error_setg(errp, "%s too short", name);
error_report("%s too short", name);
return -EINVAL;
}
@@ -157,15 +163,13 @@ static int qemu_rbd_parsename(const char *filename,
char *pool, int pool_len,
char *snap, int snap_len,
char *name, int name_len,
char *conf, int conf_len,
Error **errp)
char *conf, int conf_len)
{
const char *start;
char *p, *buf;
int ret;
if (!strstart(filename, "rbd:", &start)) {
error_setg(errp, "File name must start with 'rbd:'");
return -EINVAL;
}
@@ -174,8 +178,7 @@ static int qemu_rbd_parsename(const char *filename,
*snap = '\0';
*conf = '\0';
ret = qemu_rbd_next_tok(pool, pool_len, p,
'/', "pool name", &p, errp);
ret = qemu_rbd_next_tok(pool, pool_len, p, '/', "pool name", &p);
if (ret < 0 || !p) {
ret = -EINVAL;
goto done;
@@ -183,25 +186,21 @@ static int qemu_rbd_parsename(const char *filename,
qemu_rbd_unescape(pool);
if (strchr(p, '@')) {
ret = qemu_rbd_next_tok(name, name_len, p,
'@', "object name", &p, errp);
ret = qemu_rbd_next_tok(name, name_len, p, '@', "object name", &p);
if (ret < 0) {
goto done;
}
ret = qemu_rbd_next_tok(snap, snap_len, p,
':', "snap name", &p, errp);
ret = qemu_rbd_next_tok(snap, snap_len, p, ':', "snap name", &p);
qemu_rbd_unescape(snap);
} else {
ret = qemu_rbd_next_tok(name, name_len, p,
':', "object name", &p, errp);
ret = qemu_rbd_next_tok(name, name_len, p, ':', "object name", &p);
}
qemu_rbd_unescape(name);
if (ret < 0 || !p) {
goto done;
}
ret = qemu_rbd_next_tok(conf, conf_len, p,
'\0', "configuration", &p, errp);
ret = qemu_rbd_next_tok(conf, conf_len, p, '\0', "configuration", &p);
done:
g_free(buf);
@@ -236,7 +235,7 @@ static char *qemu_rbd_parse_clientname(const char *conf, char *clientname)
return NULL;
}
static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
{
char *p, *buf;
char name[RBD_MAX_CONF_NAME_SIZE];
@@ -248,20 +247,20 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
while (p) {
ret = qemu_rbd_next_tok(name, sizeof(name), p,
'=', "conf option name", &p, errp);
'=', "conf option name", &p);
if (ret < 0) {
break;
}
qemu_rbd_unescape(name);
if (!p) {
error_setg(errp, "conf option %s has no value", name);
error_report("conf option %s has no value", name);
ret = -EINVAL;
break;
}
ret = qemu_rbd_next_tok(value, sizeof(value), p,
':', "conf option value", &p, errp);
':', "conf option value", &p);
if (ret < 0) {
break;
}
@@ -270,7 +269,7 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
if (strcmp(name, "conf") == 0) {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_setg(errp, "error reading conf file %s", value);
error_report("error reading conf file %s", value);
break;
}
} else if (strcmp(name, "id") == 0) {
@@ -278,7 +277,7 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
} else {
ret = rados_conf_set(cluster, name, value);
if (ret < 0) {
error_setg(errp, "invalid conf option %s", name);
error_report("invalid conf option %s", name);
ret = -EINVAL;
break;
}
@@ -289,10 +288,8 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
return ret;
}
static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options)
{
Error *local_err = NULL;
int64_t bytes = 0;
int64_t objsize;
int obj_order = 0;
@@ -309,8 +306,7 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
if (qemu_rbd_parsename(filename, pool, sizeof(pool),
snap_buf, sizeof(snap_buf),
name, sizeof(name),
conf, sizeof(conf), &local_err) < 0) {
error_propagate(errp, local_err);
conf, sizeof(conf)) < 0) {
return -EINVAL;
}
@@ -322,11 +318,11 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
if (options->value.n) {
objsize = options->value.n;
if ((objsize - 1) & objsize) { /* not a power of 2? */
error_setg(errp, "obj size needs to be power of 2");
error_report("obj size needs to be power of 2");
return -EINVAL;
}
if (objsize < 4096) {
error_setg(errp, "obj size too small");
error_report("obj size too small");
return -EINVAL;
}
obj_order = ffs(objsize) - 1;
@@ -337,7 +333,7 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
if (rados_create(&cluster, clientname) < 0) {
error_setg(errp, "error initializing");
error_report("error initializing");
return -EIO;
}
@@ -347,20 +343,20 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
}
if (conf[0] != '\0' &&
qemu_rbd_set_conf(cluster, conf, &local_err) < 0) {
qemu_rbd_set_conf(cluster, conf) < 0) {
error_report("error setting config options");
rados_shutdown(cluster);
error_propagate(errp, local_err);
return -EIO;
}
if (rados_connect(cluster) < 0) {
error_setg(errp, "error connecting");
error_report("error connecting");
rados_shutdown(cluster);
return -EIO;
}
if (rados_ioctx_create(cluster, pool, &io_ctx) < 0) {
error_setg(errp, "error opening pool %s", pool);
error_report("error opening pool %s", pool);
rados_shutdown(cluster);
return -EIO;
}
@@ -373,8 +369,9 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
}
/*
* This aio completion is being called from rbd_finish_bh() and runs in qemu
* BH context.
* This aio completion is being called from qemu_rbd_aio_event_reader()
* and runs in qemu context. It schedules a bh, but just in case the aio
* was not cancelled before.
*/
static void qemu_rbd_complete_aio(RADOSCB *rcb)
{
@@ -404,19 +401,44 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->ret = r;
}
}
/* Note that acb->bh can be NULL in case where the aio was cancelled */
acb->bh = qemu_bh_new(rbd_aio_bh_cb, acb);
qemu_bh_schedule(acb->bh);
g_free(rcb);
}
if (acb->cmd == RBD_AIO_READ) {
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
acb->status = 0;
/*
* aio fd read handler. It runs in the qemu context and calls the
* completion handling of completed rados aio operations.
*/
static void qemu_rbd_aio_event_reader(void *opaque)
{
BDRVRBDState *s = opaque;
if (!acb->cancelled) {
qemu_aio_release(acb);
}
ssize_t ret;
do {
char *p = (char *)&s->event_rcb;
/* now read the rcb pointer that was sent from a non qemu thread */
ret = read(s->fds[RBD_FD_READ], p + s->event_reader_pos,
sizeof(s->event_rcb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_rcb)) {
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
s->qemu_aio_count--;
}
}
} while (ret < 0 && errno == EINTR);
}
static int qemu_rbd_aio_flush_cb(void *opaque)
{
BDRVRBDState *s = opaque;
return (s->qemu_aio_count > 0);
}
/* TODO Convert to fine grained options */
@@ -433,8 +455,7 @@ static QemuOptsList runtime_opts = {
},
};
static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVRBDState *s = bs->opaque;
char pool[RBD_MAX_POOL_NAME_SIZE];
@@ -447,10 +468,11 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
const char *filename;
int r;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
qemu_opts_del(opts);
return -EINVAL;
}
@@ -460,7 +482,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
if (qemu_rbd_parsename(filename, pool, sizeof(pool),
snap_buf, sizeof(snap_buf),
s->name, sizeof(s->name),
conf, sizeof(conf), errp) < 0) {
conf, sizeof(conf)) < 0) {
r = -EINVAL;
goto failed_opts;
}
@@ -468,7 +490,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
r = rados_create(&s->cluster, clientname);
if (r < 0) {
error_setg(&local_err, "error initializing");
error_report("error initializing");
goto failed_opts;
}
@@ -496,35 +518,50 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, errp);
r = qemu_rbd_set_conf(s->cluster, conf);
if (r < 0) {
error_report("error setting config options");
goto failed_shutdown;
}
}
r = rados_connect(s->cluster);
if (r < 0) {
error_setg(&local_err, "error connecting");
error_report("error connecting");
goto failed_shutdown;
}
r = rados_ioctx_create(s->cluster, pool, &s->io_ctx);
if (r < 0) {
error_setg(&local_err, "error opening pool %s", pool);
error_report("error opening pool %s", pool);
goto failed_shutdown;
}
r = rbd_open(s->io_ctx, s->name, &s->image, s->snap);
if (r < 0) {
error_setg(&local_err, "error reading header from %s", s->name);
error_report("error reading header from %s", s->name);
goto failed_open;
}
bs->read_only = (s->snap != NULL);
s->event_reader_pos = 0;
r = qemu_pipe(s->fds);
if (r < 0) {
error_report("error opening eventfd");
goto failed;
}
fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
NULL, qemu_rbd_aio_flush_cb, s);
qemu_opts_del(opts);
return 0;
failed:
rbd_close(s->image);
failed_open:
rados_ioctx_destroy(s->io_ctx);
failed_shutdown:
@@ -539,6 +576,10 @@ static void qemu_rbd_close(BlockDriverState *bs)
{
BDRVRBDState *s = bs->opaque;
close(s->fds[0]);
close(s->fds[1]);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
rbd_close(s->image);
rados_ioctx_destroy(s->io_ctx);
g_free(s->snap);
@@ -566,11 +607,34 @@ static const AIOCBInfo rbd_aiocb_info = {
.cancel = qemu_rbd_aio_cancel,
};
static void rbd_finish_bh(void *opaque)
static int qemu_rbd_send_pipe(BDRVRBDState *s, RADOSCB *rcb)
{
RADOSCB *rcb = opaque;
qemu_bh_delete(rcb->acb->bh);
qemu_rbd_complete_aio(rcb);
int ret = 0;
while (1) {
fd_set wfd;
int fd = s->fds[RBD_FD_WRITE];
/* send the op pointer to the qemu thread that is responsible
for the aio/op completion. Must do it in a qemu thread context */
ret = write(fd, (void *)&rcb, sizeof(rcb));
if (ret >= 0) {
break;
}
if (errno == EINTR) {
continue;
}
if (errno != EAGAIN) {
break;
}
FD_ZERO(&wfd);
FD_SET(fd, &wfd);
do {
ret = select(fd + 1, NULL, &wfd, NULL, NULL);
} while (ret < 0 && errno == EINTR);
}
return ret;
}
/*
@@ -578,18 +642,40 @@ static void rbd_finish_bh(void *opaque)
*
* Note: this function is being called from a non qemu thread so
* we need to be careful about what we do here. Generally we only
* schedule a BH, and do the rest of the io completion handling
* from rbd_finish_bh() which runs in a qemu context.
* write to the block notification pipe, and do the rest of the
* io completion handling from qemu_rbd_aio_event_reader() which
* runs in a qemu context.
*/
static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
{
RBDAIOCB *acb = rcb->acb;
int ret;
rcb->ret = rbd_aio_get_return_value(c);
rbd_aio_release(c);
ret = qemu_rbd_send_pipe(rcb->s, rcb);
if (ret < 0) {
error_report("failed writing to acb->s->fds");
g_free(rcb);
}
}
acb->bh = qemu_bh_new(rbd_finish_bh, rcb);
qemu_bh_schedule(acb->bh);
/* Callback when all queued rbd_aio requests are complete */
static void rbd_aio_bh_cb(void *opaque)
{
RBDAIOCB *acb = opaque;
if (acb->cmd == RBD_AIO_READ) {
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
qemu_bh_delete(acb->bh);
acb->bh = NULL;
acb->status = 0;
if (!acb->cancelled) {
qemu_aio_release(acb);
}
}
static int rbd_aio_discard_wrapper(rbd_image_t image,
@@ -655,6 +741,8 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
off = sector_num * BDRV_SECTOR_SIZE;
size = nb_sectors * BDRV_SECTOR_SIZE;
s->qemu_aio_count++; /* All the RADOSCB */
rcb = g_malloc(sizeof(RADOSCB));
rcb->done = 0;
rcb->acb = acb;
@@ -691,6 +779,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
failed:
g_free(rcb);
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
}
@@ -814,31 +903,12 @@ static int qemu_rbd_snap_create(BlockDriverState *bs,
}
static int qemu_rbd_snap_remove(BlockDriverState *bs,
const char *snapshot_id,
const char *snapshot_name,
Error **errp)
const char *snapshot_name)
{
BDRVRBDState *s = bs->opaque;
int r;
if (!snapshot_name) {
error_setg(errp, "rbd need a valid snapshot name");
return -EINVAL;
}
/* If snapshot_id is specified, it must be equal to name, see
qemu_rbd_snap_list() */
if (snapshot_id && strcmp(snapshot_id, snapshot_name)) {
error_setg(errp,
"rbd do not support snapshot id, it should be NULL or "
"equal to snapshot name");
return -EINVAL;
}
r = rbd_snap_remove(s->image, snapshot_name);
if (r < 0) {
error_setg_errno(errp, -r, "Failed to remove the snapshot");
}
return r;
}
@@ -864,7 +934,7 @@ static int qemu_rbd_snap_list(BlockDriverState *bs,
do {
snaps = g_malloc(sizeof(*snaps) * max_snaps);
snap_count = rbd_snap_list(s->image, snaps, &max_snaps);
if (snap_count <= 0) {
if (snap_count < 0) {
g_free(snaps);
}
} while (snap_count == -ERANGE);
@@ -888,7 +958,6 @@ static int qemu_rbd_snap_list(BlockDriverState *bs,
sn_info->vm_clock_nsec = 0;
}
rbd_snap_list_end(snaps);
g_free(snaps);
done:
*psn_tab = sn_tab;
@@ -924,11 +993,9 @@ static QEMUOptionParameter qemu_rbd_create_options[] = {
static BlockDriver bdrv_rbd = {
.format_name = "rbd",
.instance_size = sizeof(BDRVRBDState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_rbd_open,
.bdrv_close = qemu_rbd_close,
.bdrv_create = qemu_rbd_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_get_info = qemu_rbd_getinfo,
.create_options = qemu_rbd_create_options,
.bdrv_getlength = qemu_rbd_getlength,
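
The rbd hunks above replace the BH-based completion path with a notification pipe: rbd_finish_aiocb() runs in a librados thread and writes the RADOSCB pointer to s->fds[RBD_FD_WRITE], while qemu_rbd_aio_event_reader() reads the pointer back and completes the request in the QEMU thread. The following is a rough standalone sketch of that hand-off under the assumption of a single blocking reader; the Request type and worker() are invented, and the real driver additionally copes with short reads (event_reader_pos) and non-blocking descriptors.

/* Cross-thread completion hand-off: the worker writes a request pointer to
 * a pipe, the "main loop" reads it back and finishes the request. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct Request {
    int id;
    int ret;
} Request;

static int fds[2];                      /* fds[0] read end, fds[1] write end */

static void *worker(void *opaque)
{
    Request *req = opaque;

    req->ret = 0;                       /* pretend the I/O completed fine */
    /* hand the finished request back to the main thread */
    if (write(fds[1], &req, sizeof(req)) != (ssize_t)sizeof(req)) {
        perror("write");
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    Request *req = malloc(sizeof(*req));
    Request *done = NULL;

    req->id = 42;
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    pthread_create(&tid, NULL, worker, req);

    /* main-loop side: read the pointer and complete in this context */
    if (read(fds[0], &done, sizeof(done)) == (ssize_t)sizeof(done)) {
        printf("request %d completed, ret=%d\n", done->id, done->ret);
        free(done);
    }
    pthread_join(tid, NULL);
    close(fds[0]);
    close(fds[1]);
    return 0;
}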

File diff suppressed because it is too large.


@@ -1,353 +0,0 @@
/*
* Block layer snapshot related functions
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "block/snapshot.h"
#include "block/block_int.h"
QemuOptsList internal_snapshot_opts = {
.name = "snapshot",
.head = QTAILQ_HEAD_INITIALIZER(internal_snapshot_opts.head),
.desc = {
{
.name = SNAPSHOT_OPT_ID,
.type = QEMU_OPT_STRING,
.help = "snapshot id"
},{
.name = SNAPSHOT_OPT_NAME,
.type = QEMU_OPT_STRING,
.help = "snapshot name"
},{
/* end of list */
}
},
};
int bdrv_snapshot_find(BlockDriverState *bs, QEMUSnapshotInfo *sn_info,
const char *name)
{
QEMUSnapshotInfo *sn_tab, *sn;
int nb_sns, i, ret;
ret = -ENOENT;
nb_sns = bdrv_snapshot_list(bs, &sn_tab);
if (nb_sns < 0) {
return ret;
}
for (i = 0; i < nb_sns; i++) {
sn = &sn_tab[i];
if (!strcmp(sn->id_str, name) || !strcmp(sn->name, name)) {
*sn_info = *sn;
ret = 0;
break;
}
}
g_free(sn_tab);
return ret;
}
/**
* Look up an internal snapshot by @id and @name.
* @bs: block device to search
* @id: unique snapshot ID, or NULL
* @name: snapshot name, or NULL
* @sn_info: location to store information on the snapshot found
* @errp: location to store error, will be set only for exception
*
* This function will traverse snapshot list in @bs to search the matching
* one, @id and @name are the matching condition:
* If both @id and @name are specified, find the first one with id @id and
* name @name.
* If only @id is specified, find the first one with id @id.
* If only @name is specified, find the first one with name @name.
* if none is specified, abort().
*
* Returns: true when a snapshot is found and @sn_info will be filled, false
* when error or not found. If all operation succeed but no matching one is
* found, @errp will NOT be set.
*/
bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
const char *id,
const char *name,
QEMUSnapshotInfo *sn_info,
Error **errp)
{
QEMUSnapshotInfo *sn_tab, *sn;
int nb_sns, i;
bool ret = false;
assert(id || name);
nb_sns = bdrv_snapshot_list(bs, &sn_tab);
if (nb_sns < 0) {
error_setg_errno(errp, -nb_sns, "Failed to get a snapshot list");
return false;
} else if (nb_sns == 0) {
return false;
}
if (id && name) {
for (i = 0; i < nb_sns; i++) {
sn = &sn_tab[i];
if (!strcmp(sn->id_str, id) && !strcmp(sn->name, name)) {
*sn_info = *sn;
ret = true;
break;
}
}
} else if (id) {
for (i = 0; i < nb_sns; i++) {
sn = &sn_tab[i];
if (!strcmp(sn->id_str, id)) {
*sn_info = *sn;
ret = true;
break;
}
}
} else if (name) {
for (i = 0; i < nb_sns; i++) {
sn = &sn_tab[i];
if (!strcmp(sn->name, name)) {
*sn_info = *sn;
ret = true;
break;
}
}
}
g_free(sn_tab);
return ret;
}
int bdrv_can_snapshot(BlockDriverState *bs)
{
BlockDriver *drv = bs->drv;
if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
return 0;
}
if (!drv->bdrv_snapshot_create) {
if (bs->file != NULL) {
return bdrv_can_snapshot(bs->file);
}
return 0;
}
return 1;
}
int bdrv_snapshot_create(BlockDriverState *bs,
QEMUSnapshotInfo *sn_info)
{
BlockDriver *drv = bs->drv;
if (!drv) {
return -ENOMEDIUM;
}
if (drv->bdrv_snapshot_create) {
return drv->bdrv_snapshot_create(bs, sn_info);
}
if (bs->file) {
return bdrv_snapshot_create(bs->file, sn_info);
}
return -ENOTSUP;
}
int bdrv_snapshot_goto(BlockDriverState *bs,
const char *snapshot_id)
{
BlockDriver *drv = bs->drv;
int ret, open_ret;
if (!drv) {
return -ENOMEDIUM;
}
if (drv->bdrv_snapshot_goto) {
return drv->bdrv_snapshot_goto(bs, snapshot_id);
}
if (bs->file) {
drv->bdrv_close(bs);
ret = bdrv_snapshot_goto(bs->file, snapshot_id);
open_ret = drv->bdrv_open(bs, NULL, bs->open_flags, NULL);
if (open_ret < 0) {
bdrv_unref(bs->file);
bs->drv = NULL;
return open_ret;
}
return ret;
}
return -ENOTSUP;
}
/**
* Delete an internal snapshot by @snapshot_id and @name.
* @bs: block device used in the operation
* @snapshot_id: unique snapshot ID, or NULL
* @name: snapshot name, or NULL
* @errp: location to store error
*
* If both @snapshot_id and @name are specified, delete the first one with
* id @snapshot_id and name @name.
* If only @snapshot_id is specified, delete the first one with id
* @snapshot_id.
* If only @name is specified, delete the first one with name @name.
* if none is specified, return -EINVAL.
*
* Returns: 0 on success, -errno on failure. If @bs is not inserted, return
* -ENOMEDIUM. If @snapshot_id and @name are both NULL, return -EINVAL. If @bs
* does not support internal snapshot deletion, return -ENOTSUP. If @bs does
* not support parameter @snapshot_id or @name, or one of them is not correctly
* specified, return -EINVAL. If @bs can't find one matching @id and @name,
* return -ENOENT. If @errp != NULL, it will always be filled with error
* message on failure.
*/
int bdrv_snapshot_delete(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
{
BlockDriver *drv = bs->drv;
if (!drv) {
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
return -ENOMEDIUM;
}
if (!snapshot_id && !name) {
error_setg(errp, "snapshot_id and name are both NULL");
return -EINVAL;
}
if (drv->bdrv_snapshot_delete) {
return drv->bdrv_snapshot_delete(bs, snapshot_id, name, errp);
}
if (bs->file) {
return bdrv_snapshot_delete(bs->file, snapshot_id, name, errp);
}
error_set(errp, QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
drv->format_name, bdrv_get_device_name(bs),
"internal snapshot deletion");
return -ENOTSUP;
}
void bdrv_snapshot_delete_by_id_or_name(BlockDriverState *bs,
const char *id_or_name,
Error **errp)
{
int ret;
Error *local_err = NULL;
ret = bdrv_snapshot_delete(bs, id_or_name, NULL, &local_err);
if (ret == -ENOENT || ret == -EINVAL) {
error_free(local_err);
local_err = NULL;
ret = bdrv_snapshot_delete(bs, NULL, id_or_name, &local_err);
}
if (ret < 0) {
error_propagate(errp, local_err);
}
}
int bdrv_snapshot_list(BlockDriverState *bs,
QEMUSnapshotInfo **psn_info)
{
BlockDriver *drv = bs->drv;
if (!drv) {
return -ENOMEDIUM;
}
if (drv->bdrv_snapshot_list) {
return drv->bdrv_snapshot_list(bs, psn_info);
}
if (bs->file) {
return bdrv_snapshot_list(bs->file, psn_info);
}
return -ENOTSUP;
}
/**
* Temporarily load an internal snapshot by @snapshot_id and @name.
* @bs: block device used in the operation
* @snapshot_id: unique snapshot ID, or NULL
* @name: snapshot name, or NULL
* @errp: location to store error
*
* If both @snapshot_id and @name are specified, load the first one with
* id @snapshot_id and name @name.
* If only @snapshot_id is specified, load the first one with id
* @snapshot_id.
* If only @name is specified, load the first one with name @name.
* if none is specified, return -EINVAL.
*
* Returns: 0 on success, -errno on fail. If @bs is not inserted, return
* -ENOMEDIUM. If @bs is not readonly, return -EINVAL. If @bs did not support
* internal snapshot, return -ENOTSUP. If qemu can't find a matching @id and
* @name, return -ENOENT. If @errp != NULL, it will always be filled on
* failure.
*/
int bdrv_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
{
BlockDriver *drv = bs->drv;
if (!drv) {
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
return -ENOMEDIUM;
}
if (!snapshot_id && !name) {
error_setg(errp, "snapshot_id and name are both NULL");
return -EINVAL;
}
if (!bs->read_only) {
error_setg(errp, "Device is not readonly");
return -EINVAL;
}
if (drv->bdrv_snapshot_load_tmp) {
return drv->bdrv_snapshot_load_tmp(bs, snapshot_id, name, errp);
}
error_set(errp, QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
drv->format_name, bdrv_get_device_name(bs),
"temporarily load internal snapshot");
return -ENOTSUP;
}
int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
const char *id_or_name,
Error **errp)
{
int ret;
Error *local_err = NULL;
ret = bdrv_snapshot_load_tmp(bs, id_or_name, NULL, &local_err);
if (ret == -ENOENT || ret == -EINVAL) {
error_free(local_err);
local_err = NULL;
ret = bdrv_snapshot_load_tmp(bs, NULL, id_or_name, &local_err);
}
if (local_err) {
error_propagate(errp, local_err);
}
return ret;
}
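
bdrv_snapshot_delete_by_id_or_name() and bdrv_snapshot_load_tmp_by_id_or_name() above share one fallback strategy: try the user-supplied string as a snapshot ID first, and only if that fails with -ENOENT or -EINVAL retry it as a snapshot name. A small self-contained sketch of that pattern follows; the snapshot table and the delete_by() helper are invented for the example.

/* Try-as-ID, then fall back to try-as-name, mirroring the *_by_id_or_name()
 * helpers above in structure only. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

typedef struct Snap {
    const char *id;
    const char *name;
} Snap;

static Snap table[] = {
    { "1", "before-upgrade" },
    { "2", "nightly" },
};

static int delete_by(const char *id, const char *name)
{
    size_t i;

    for (i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
        if ((id && strcmp(table[i].id, id) == 0) ||
            (name && strcmp(table[i].name, name) == 0)) {
            printf("deleting snapshot id=%s name=%s\n",
                   table[i].id, table[i].name);
            return 0;
        }
    }
    return -ENOENT;
}

static int delete_by_id_or_name(const char *id_or_name)
{
    int ret = delete_by(id_or_name, NULL);      /* first try it as an ID */

    if (ret == -ENOENT || ret == -EINVAL) {
        ret = delete_by(NULL, id_or_name);      /* then as a name */
    }
    return ret;
}

int main(void)
{
    printf("rc=%d\n", delete_by_id_or_name("nightly"));
    printf("rc=%d\n", delete_by_id_or_name("42"));
    return 0;
}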


@@ -106,59 +106,30 @@ static void ssh_state_free(BDRVSSHState *s)
}
}
static void GCC_FMT_ATTR(3, 4)
session_error_setg(Error **errp, BDRVSSHState *s, const char *fs, ...)
/* Wrappers around error_report which make sure to dump as much
* information from libssh2 as possible.
*/
static void GCC_FMT_ATTR(2, 3)
session_error_report(BDRVSSHState *s, const char *fs, ...)
{
va_list args;
char *msg;
va_start(args, fs);
msg = g_strdup_vprintf(fs, args);
va_end(args);
error_vprintf(fs, args);
if (s->session) {
if ((s)->session) {
char *ssh_err;
int ssh_err_code;
libssh2_session_last_error((s)->session, &ssh_err, NULL, 0);
/* This is not an errno. See <libssh2.h>. */
ssh_err_code = libssh2_session_last_error(s->session,
&ssh_err, NULL, 0);
error_setg(errp, "%s: %s (libssh2 error code: %d)",
msg, ssh_err, ssh_err_code);
} else {
error_setg(errp, "%s", msg);
ssh_err_code = libssh2_session_last_errno((s)->session);
error_printf(": %s (libssh2 error code: %d)", ssh_err, ssh_err_code);
}
g_free(msg);
}
static void GCC_FMT_ATTR(3, 4)
sftp_error_setg(Error **errp, BDRVSSHState *s, const char *fs, ...)
{
va_list args;
char *msg;
va_start(args, fs);
msg = g_strdup_vprintf(fs, args);
va_end(args);
if (s->sftp) {
char *ssh_err;
int ssh_err_code;
unsigned long sftp_err_code;
/* This is not an errno. See <libssh2.h>. */
ssh_err_code = libssh2_session_last_error(s->session,
&ssh_err, NULL, 0);
/* See <libssh2_sftp.h>. */
sftp_err_code = libssh2_sftp_last_error((s)->sftp);
error_setg(errp,
"%s: %s (libssh2 error code: %d, sftp error code: %lu)",
msg, ssh_err, ssh_err_code, sftp_err_code);
} else {
error_setg(errp, "%s", msg);
}
g_free(msg);
error_printf("\n");
}
static void GCC_FMT_ATTR(2, 3)
@@ -174,9 +145,9 @@ sftp_error_report(BDRVSSHState *s, const char *fs, ...)
int ssh_err_code;
unsigned long sftp_err_code;
libssh2_session_last_error((s)->session, &ssh_err, NULL, 0);
/* This is not an errno. See <libssh2.h>. */
ssh_err_code = libssh2_session_last_error(s->session,
&ssh_err, NULL, 0);
ssh_err_code = libssh2_session_last_errno((s)->session);
/* See <libssh2_sftp.h>. */
sftp_err_code = libssh2_sftp_last_error((s)->sftp);
@@ -272,7 +243,7 @@ static void ssh_parse_filename(const char *filename, QDict *options,
}
static int check_host_key_knownhosts(BDRVSSHState *s,
const char *host, int port, Error **errp)
const char *host, int port)
{
const char *home;
char *knh_file = NULL;
@@ -286,15 +257,14 @@ static int check_host_key_knownhosts(BDRVSSHState *s,
hostkey = libssh2_session_hostkey(s->session, &len, &type);
if (!hostkey) {
ret = -EINVAL;
session_error_setg(errp, s, "failed to read remote host key");
session_error_report(s, "failed to read remote host key");
goto out;
}
knh = libssh2_knownhost_init(s->session);
if (!knh) {
ret = -EINVAL;
session_error_setg(errp, s,
"failed to initialize known hosts support");
session_error_report(s, "failed to initialize known hosts support");
goto out;
}
@@ -319,23 +289,21 @@ static int check_host_key_knownhosts(BDRVSSHState *s,
break;
case LIBSSH2_KNOWNHOST_CHECK_MISMATCH:
ret = -EINVAL;
session_error_setg(errp, s,
"host key does not match the one in known_hosts"
" (found key %s)", found->key);
session_error_report(s, "host key does not match the one in known_hosts (found key %s)",
found->key);
goto out;
case LIBSSH2_KNOWNHOST_CHECK_NOTFOUND:
ret = -EINVAL;
session_error_setg(errp, s, "no host key was found in known_hosts");
session_error_report(s, "no host key was found in known_hosts");
goto out;
case LIBSSH2_KNOWNHOST_CHECK_FAILURE:
ret = -EINVAL;
session_error_setg(errp, s,
"failure matching the host key with known_hosts");
session_error_report(s, "failure matching the host key with known_hosts");
goto out;
default:
ret = -EINVAL;
session_error_setg(errp, s, "unknown error matching the host key"
" with known_hosts (%d)", r);
session_error_report(s, "unknown error matching the host key with known_hosts (%d)",
r);
goto out;
}
@@ -390,20 +358,20 @@ static int compare_fingerprint(const unsigned char *fingerprint, size_t len,
static int
check_host_key_hash(BDRVSSHState *s, const char *hash,
int hash_type, size_t fingerprint_len, Error **errp)
int hash_type, size_t fingerprint_len)
{
const char *fingerprint;
fingerprint = libssh2_hostkey_hash(s->session, hash_type);
if (!fingerprint) {
session_error_setg(errp, s, "failed to read remote host key");
session_error_report(s, "failed to read remote host key");
return -EINVAL;
}
if(compare_fingerprint((unsigned char *) fingerprint, fingerprint_len,
hash) != 0) {
error_setg(errp, "remote host key does not match host_key_check '%s'",
hash);
error_report("remote host key does not match host_key_check '%s'",
hash);
return -EPERM;
}
@@ -411,7 +379,7 @@ check_host_key_hash(BDRVSSHState *s, const char *hash,
}
static int check_host_key(BDRVSSHState *s, const char *host, int port,
const char *host_key_check, Error **errp)
const char *host_key_check)
{
/* host_key_check=no */
if (strcmp(host_key_check, "no") == 0) {
@@ -421,25 +389,25 @@ static int check_host_key(BDRVSSHState *s, const char *host, int port,
/* host_key_check=md5:xx:yy:zz:... */
if (strncmp(host_key_check, "md5:", 4) == 0) {
return check_host_key_hash(s, &host_key_check[4],
LIBSSH2_HOSTKEY_HASH_MD5, 16, errp);
LIBSSH2_HOSTKEY_HASH_MD5, 16);
}
/* host_key_check=sha1:xx:yy:zz:... */
if (strncmp(host_key_check, "sha1:", 5) == 0) {
return check_host_key_hash(s, &host_key_check[5],
LIBSSH2_HOSTKEY_HASH_SHA1, 20, errp);
LIBSSH2_HOSTKEY_HASH_SHA1, 20);
}
/* host_key_check=yes */
if (strcmp(host_key_check, "yes") == 0) {
return check_host_key_knownhosts(s, host, port, errp);
return check_host_key_knownhosts(s, host, port);
}
error_setg(errp, "unknown host_key_check setting (%s)", host_key_check);
error_report("unknown host_key_check setting (%s)", host_key_check);
return -EINVAL;
}
static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
static int authenticate(BDRVSSHState *s, const char *user)
{
int r, ret;
const char *userauthlist;
@@ -450,8 +418,7 @@ static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
userauthlist = libssh2_userauth_list(s->session, user, strlen(user));
if (strstr(userauthlist, "publickey") == NULL) {
ret = -EPERM;
error_setg(errp,
"remote server does not support \"publickey\" authentication");
error_report("remote server does not support \"publickey\" authentication");
goto out;
}
@@ -459,18 +426,17 @@ static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
agent = libssh2_agent_init(s->session);
if (!agent) {
ret = -EINVAL;
session_error_setg(errp, s, "failed to initialize ssh-agent support");
session_error_report(s, "failed to initialize ssh-agent support");
goto out;
}
if (libssh2_agent_connect(agent)) {
ret = -ECONNREFUSED;
session_error_setg(errp, s, "failed to connect to ssh-agent");
session_error_report(s, "failed to connect to ssh-agent");
goto out;
}
if (libssh2_agent_list_identities(agent)) {
ret = -EINVAL;
session_error_setg(errp, s,
"failed requesting identities from ssh-agent");
session_error_report(s, "failed requesting identities from ssh-agent");
goto out;
}
@@ -481,8 +447,7 @@ static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
}
if (r < 0) {
ret = -EINVAL;
session_error_setg(errp, s,
"failed to obtain identity from ssh-agent");
session_error_report(s, "failed to obtain identity from ssh-agent");
goto out;
}
r = libssh2_agent_userauth(agent, user, identity);
@@ -496,8 +461,8 @@ static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
}
ret = -EPERM;
error_setg(errp, "failed to authenticate using publickey authentication "
"and the identities held by your ssh-agent");
error_report("failed to authenticate using publickey authentication "
"and the identities held by your ssh-agent");
out:
if (agent != NULL) {
@@ -511,9 +476,10 @@ static int authenticate(BDRVSSHState *s, const char *user, Error **errp)
}
static int connect_to_ssh(BDRVSSHState *s, QDict *options,
int ssh_flags, int creat_mode, Error **errp)
int ssh_flags, int creat_mode)
{
int r, ret;
Error *err = NULL;
const char *host, *user, *path, *host_key_check;
int port;
@@ -532,7 +498,6 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
} else {
user = g_get_user_name();
if (!user) {
error_setg_errno(errp, errno, "Can't get user name");
ret = -errno;
goto err;
}
@@ -549,9 +514,11 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
s->hostport = g_strdup_printf("%s:%d", host, port);
/* Open the socket and connect. */
s->sock = inet_connect(s->hostport, errp);
if (s->sock < 0) {
s->sock = inet_connect(s->hostport, &err);
if (err != NULL) {
ret = -errno;
qerror_report_err(err);
error_free(err);
goto err;
}
@@ -559,7 +526,7 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
s->session = libssh2_session_init();
if (!s->session) {
ret = -EINVAL;
session_error_setg(errp, s, "failed to initialize libssh2 session");
session_error_report(s, "failed to initialize libssh2 session");
goto err;
}
@@ -570,18 +537,18 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
r = libssh2_session_handshake(s->session, s->sock);
if (r != 0) {
ret = -EINVAL;
session_error_setg(errp, s, "failed to establish SSH session");
session_error_report(s, "failed to establish SSH session");
goto err;
}
/* Check the remote host's key against known_hosts. */
ret = check_host_key(s, host, port, host_key_check, errp);
ret = check_host_key(s, host, port, host_key_check);
if (ret < 0) {
goto err;
}
/* Authenticate. */
ret = authenticate(s, user, errp);
ret = authenticate(s, user);
if (ret < 0) {
goto err;
}
@@ -589,7 +556,7 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
/* Start SFTP. */
s->sftp = libssh2_sftp_init(s->session);
if (!s->sftp) {
session_error_setg(errp, s, "failed to initialize sftp handle");
session_error_report(s, "failed to initialize sftp handle");
ret = -EINVAL;
goto err;
}
@@ -599,14 +566,14 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
path, ssh_flags, creat_mode);
s->sftp_handle = libssh2_sftp_open(s->sftp, path, ssh_flags, creat_mode);
if (!s->sftp_handle) {
session_error_setg(errp, s, "failed to open remote file '%s'", path);
session_error_report(s, "failed to open remote file '%s'", path);
ret = -EINVAL;
goto err;
}
r = libssh2_sftp_fstat(s->sftp_handle, &s->attrs);
if (r < 0) {
sftp_error_setg(errp, s, "failed to read file attributes");
sftp_error_report(s, "failed to read file attributes");
return -EINVAL;
}
@@ -641,8 +608,7 @@ static int connect_to_ssh(BDRVSSHState *s, QDict *options,
return ret;
}
static int ssh_file_open(BlockDriverState *bs, QDict *options, int bdrv_flags,
Error **errp)
static int ssh_file_open(BlockDriverState *bs, QDict *options, int bdrv_flags)
{
BDRVSSHState *s = bs->opaque;
int ret;
@@ -656,7 +622,7 @@ static int ssh_file_open(BlockDriverState *bs, QDict *options, int bdrv_flags,
}
/* Start up SSH. */
ret = connect_to_ssh(s, options, ssh_flags, 0, errp);
ret = connect_to_ssh(s, options, ssh_flags, 0);
if (ret < 0) {
goto err;
}
@@ -684,10 +650,10 @@ static QEMUOptionParameter ssh_create_options[] = {
{ NULL }
};
static int ssh_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int ssh_create(const char *filename, QEMUOptionParameter *options)
{
int r, ret;
Error *local_err = NULL;
int64_t total_size = 0;
QDict *uri_options = NULL;
BDRVSSHState s;
@@ -706,16 +672,17 @@ static int ssh_create(const char *filename, QEMUOptionParameter *options,
DPRINTF("total_size=%" PRIi64, total_size);
uri_options = qdict_new();
r = parse_uri(filename, uri_options, errp);
r = parse_uri(filename, uri_options, &local_err);
if (r < 0) {
qerror_report_err(local_err);
error_free(local_err);
ret = r;
goto out;
}
r = connect_to_ssh(&s, uri_options,
LIBSSH2_FXF_READ|LIBSSH2_FXF_WRITE|
LIBSSH2_FXF_CREAT|LIBSSH2_FXF_TRUNC,
0644, errp);
LIBSSH2_FXF_CREAT|LIBSSH2_FXF_TRUNC, 0644);
if (r < 0) {
ret = r;
goto out;
@@ -725,7 +692,7 @@ static int ssh_create(const char *filename, QEMUOptionParameter *options,
libssh2_sftp_seek64(s.sftp_handle, total_size-1);
r2 = libssh2_sftp_write(s.sftp_handle, c, 1);
if (r2 < 0) {
sftp_error_setg(errp, &s, "truncate failed");
sftp_error_report(&s, "truncate failed");
ret = -EINVAL;
goto out;
}
@@ -773,6 +740,14 @@ static void restart_coroutine(void *opaque)
qemu_coroutine_enter(co, NULL);
}
/* Always true because when we have called set_fd_handler there is
* always a request being processed.
*/
static int return_true(void *opaque)
{
return 1;
}
static coroutine_fn void set_fd_handler(BDRVSSHState *s)
{
int r;
@@ -791,13 +766,13 @@ static coroutine_fn void set_fd_handler(BDRVSSHState *s)
DPRINTF("s->sock=%d rd_handler=%p wr_handler=%p", s->sock,
rd_handler, wr_handler);
qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, co);
qemu_aio_set_fd_handler(s->sock, rd_handler, wr_handler, return_true, co);
}
static coroutine_fn void clear_fd_handler(BDRVSSHState *s)
{
DPRINTF("s->sock=%d", s->sock);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
}
/* A non-blocking call returned EAGAIN, so yield, ensuring the
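The ssh.c hunks above swap call-site reporting (session_error_report, qerror_report_err + error_free) for propagation through an Error **errp out-parameter. Below is a minimal sketch of that out-parameter pattern, using a hypothetical Err type and err_setg helper rather than QEMU's real Error API:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for an error object; not QEMU's Error type. */
    typedef struct Err {
        char msg[128];
    } Err;

    /* Allocate an error and hand it to the caller through the out-parameter. */
    static void err_setg(Err **errp, const char *msg)
    {
        if (errp && !*errp) {
            *errp = malloc(sizeof(Err));
            snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
        }
    }

    /* A low-level helper fails by filling *errp instead of printing itself. */
    static int open_remote_file(const char *path, Err **errp)
    {
        if (path[0] != '/') {
            err_setg(errp, "failed to open remote file: path must be absolute");
            return -1;
        }
        return 0;   /* pretend success */
    }

    int main(void)
    {
        Err *err = NULL;

        /* Only the outermost caller decides whether to print, wrap or ignore. */
        if (open_remote_file("relative/path", &err) < 0) {
            fprintf(stderr, "%s\n", err->msg);
            free(err);
            return 1;
        }
        return 0;
    }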


@@ -57,11 +57,6 @@ static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
BlockDriverState *intermediate;
intermediate = top->backing_hd;
/* Must assign before bdrv_delete() to prevent traversing dangling pointer
* while we delete backing image instances.
*/
bdrv_set_backing_hd(top, base);
while (intermediate) {
BlockDriverState *unused;
@@ -72,11 +67,10 @@ static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
unused = intermediate;
intermediate = intermediate->backing_hd;
bdrv_set_backing_hd(unused, NULL);
bdrv_unref(unused);
unused->backing_hd = NULL;
bdrv_delete(unused);
}
bdrv_refresh_limits(top);
top->backing_hd = base;
}
static void coroutine_fn stream_run(void *opaque)
@@ -90,11 +84,6 @@ static void coroutine_fn stream_run(void *opaque)
int n = 0;
void *buf;
if (!bs->backing_hd) {
block_job_completed(&s->common, 0);
return;
}
s->common.len = bdrv_getlength(bs);
if (s->common.len < 0) {
block_job_completed(&s->common, s->common.len);
@@ -121,22 +110,21 @@ wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that bdrv_drain_all() returns.
*/
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
copy = false;
ret = bdrv_is_allocated(bs, sector_num,
STREAM_BUFFER_SIZE / BDRV_SECTOR_SIZE, &n);
ret = bdrv_co_is_allocated(bs, sector_num,
STREAM_BUFFER_SIZE / BDRV_SECTOR_SIZE, &n);
if (ret == 1) {
/* Allocated in the top, no need to copy. */
} else if (ret >= 0) {
copy = false;
} else {
/* Copy if allocated in the intermediate images. Limit to the
* known-unallocated area [sector_num, sector_num+n). */
ret = bdrv_is_allocated_above(bs->backing_hd, base,
sector_num, n, &n);
ret = bdrv_co_is_allocated_above(bs->backing_hd, base,
sector_num, n, &n);
/* Finish early if end of backing file has been reached */
if (ret == 0 && n == 0) {
@@ -146,7 +134,7 @@ wait:
copy = (ret == 1);
}
trace_stream_one_iteration(s, sector_num, n, ret);
if (copy) {
if (ret >= 0 && copy) {
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
if (delay_ns > 0) {
@@ -210,9 +198,9 @@ static void stream_set_speed(BlockJob *job, int64_t speed, Error **errp)
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static const BlockJobDriver stream_job_driver = {
static BlockJobType stream_job_type = {
.instance_size = sizeof(StreamBlockJob),
.job_type = BLOCK_JOB_TYPE_STREAM,
.job_type = "stream",
.set_speed = stream_set_speed,
};
@@ -231,7 +219,7 @@ void stream_start(BlockDriverState *bs, BlockDriverState *base,
return;
}
s = block_job_create(&stream_job_driver, bs, speed, cb, opaque, errp);
s = block_job_create(&stream_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
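The streaming loop above throttles itself with ratelimit_calculate_delay() and block_job_sleep_ns(). The sketch below models that with a deliberately simplified bytes-per-second budget; RateLimit, ratelimit_delay_ns and the one-second slice are inventions for illustration, not QEMU's ratelimit implementation:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical, simplified limiter. */
    typedef struct RateLimit {
        uint64_t bytes_per_sec;   /* configured speed; 0 means unlimited */
        uint64_t dispatched;      /* bytes handed out in the current slice */
    } RateLimit;

    /* Return how long (in ns) the caller should sleep before sending 'bytes'. */
    static uint64_t ratelimit_delay_ns(RateLimit *rl, uint64_t bytes)
    {
        if (rl->bytes_per_sec == 0) {
            return 0;
        }
        rl->dispatched += bytes;
        if (rl->dispatched <= rl->bytes_per_sec) {
            return 0;                       /* still within this slice's budget */
        }
        /* Sleep long enough for the excess to drain at the configured speed. */
        uint64_t excess = rl->dispatched - rl->bytes_per_sec;
        rl->dispatched = 0;                 /* start a fresh slice after sleeping */
        return excess * 1000000000ull / rl->bytes_per_sec;
    }

    int main(void)
    {
        RateLimit rl = { .bytes_per_sec = 10 * 1024 * 1024 };   /* 10 MiB/s */
        for (int i = 0; i < 32; i++) {
            uint64_t delay = ratelimit_delay_ns(&rl, 1024 * 1024);  /* 1 MiB chunks */
            if (delay) {
                printf("chunk %d: sleep %llu ns\n", i, (unsigned long long)delay);
            }
        }
        return 0;
    }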


@@ -31,7 +31,7 @@
* Allocation of blocks could be optimized (less writes to block map and
* header).
*
* Read and write of adjacent blocks could be done in one operation
* Read and write of adjacents blocks could be done in one operation
* (current code uses one operation per block (1 MiB).
*
* The code is not thread safe (missing locks for changes in header and
@@ -120,11 +120,6 @@ typedef unsigned char uuid_t[16];
#define VDI_IS_ALLOCATED(X) ((X) < VDI_DISCARDED)
/* max blocks in image is (0xffffffff / 4) */
#define VDI_BLOCKS_IN_IMAGE_MAX 0x3fffffff
#define VDI_DISK_SIZE_MAX ((uint64_t)VDI_BLOCKS_IN_IMAGE_MAX * \
(uint64_t)DEFAULT_CLUSTER_SIZE)
#if !defined(CONFIG_UUID)
static inline void uuid_generate(uuid_t out)
{
@@ -170,7 +165,7 @@ typedef struct {
uuid_t uuid_link;
uuid_t uuid_parent;
uint64_t unused2[7];
} QEMU_PACKED VdiHeader;
} VdiHeader;
typedef struct {
/* The block map entries are little endian (even in memory). */
@@ -336,7 +331,6 @@ static int vdi_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
logout("\n");
bdi->cluster_size = s->block_size;
bdi->vm_state_offset = 0;
bdi->unallocated_blocks_are_zero = true;
return 0;
}
@@ -370,8 +364,7 @@ static int vdi_probe(const uint8_t *buf, int buf_size, const char *filename)
return result;
}
static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int vdi_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVVdiState *s = bs->opaque;
VdiHeader header;
@@ -390,14 +383,6 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
vdi_header_print(&header);
#endif
if (header.disk_size > VDI_DISK_SIZE_MAX) {
error_setg(errp, "Unsupported VDI image size (size is 0x%" PRIx64
", max supported is 0x%" PRIx64 ")",
header.disk_size, VDI_DISK_SIZE_MAX);
ret = -ENOTSUP;
goto fail;
}
if (header.disk_size % SECTOR_SIZE != 0) {
/* 'VBoxManage convertfromraw' can create images with odd disk sizes.
We accept them but round the disk size to the next multiple of
@@ -408,57 +393,43 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
}
if (header.signature != VDI_SIGNATURE) {
error_setg(errp, "Image not in VDI format (bad signature %08" PRIx32
")", header.signature);
ret = -EINVAL;
logout("bad vdi signature %08x\n", header.signature);
ret = -EMEDIUMTYPE;
goto fail;
} else if (header.version != VDI_VERSION_1_1) {
error_setg(errp, "unsupported VDI image (version %" PRIu32 ".%" PRIu32
")", header.version >> 16, header.version & 0xffff);
logout("unsupported version %u.%u\n",
header.version >> 16, header.version & 0xffff);
ret = -ENOTSUP;
goto fail;
} else if (header.offset_bmap % SECTOR_SIZE != 0) {
/* We only support block maps which start on a sector boundary. */
error_setg(errp, "unsupported VDI image (unaligned block map offset "
"0x%" PRIx32 ")", header.offset_bmap);
logout("unsupported block map offset 0x%x B\n", header.offset_bmap);
ret = -ENOTSUP;
goto fail;
} else if (header.offset_data % SECTOR_SIZE != 0) {
/* We only support data blocks which start on a sector boundary. */
error_setg(errp, "unsupported VDI image (unaligned data offset 0x%"
PRIx32 ")", header.offset_data);
logout("unsupported data offset 0x%x B\n", header.offset_data);
ret = -ENOTSUP;
goto fail;
} else if (header.sector_size != SECTOR_SIZE) {
error_setg(errp, "unsupported VDI image (sector size %" PRIu32
" is not %u)", header.sector_size, SECTOR_SIZE);
logout("unsupported sector size %u B\n", header.sector_size);
ret = -ENOTSUP;
goto fail;
} else if (header.block_size != DEFAULT_CLUSTER_SIZE) {
error_setg(errp, "unsupported VDI image (block size %" PRIu32
" is not %u)", header.block_size, DEFAULT_CLUSTER_SIZE);
} else if (header.block_size != 1 * MiB) {
logout("unsupported block size %u B\n", header.block_size);
ret = -ENOTSUP;
goto fail;
} else if (header.disk_size >
(uint64_t)header.blocks_in_image * header.block_size) {
error_setg(errp, "unsupported VDI image (disk size %" PRIu64 ", "
"image bitmap has room for %" PRIu64 ")",
header.disk_size,
(uint64_t)header.blocks_in_image * header.block_size);
logout("unsupported disk size %" PRIu64 " B\n", header.disk_size);
ret = -ENOTSUP;
goto fail;
} else if (!uuid_is_null(header.uuid_link)) {
error_setg(errp, "unsupported VDI image (non-NULL link UUID)");
logout("link uuid != 0, unsupported\n");
ret = -ENOTSUP;
goto fail;
} else if (!uuid_is_null(header.uuid_parent)) {
error_setg(errp, "unsupported VDI image (non-NULL parent UUID)");
ret = -ENOTSUP;
goto fail;
} else if (header.blocks_in_image > VDI_BLOCKS_IN_IMAGE_MAX) {
error_setg(errp, "unsupported VDI image "
"(too many blocks %u, max is %u)",
header.blocks_in_image, VDI_BLOCKS_IN_IMAGE_MAX);
logout("parent uuid != 0, unsupported\n");
ret = -ENOTSUP;
goto fail;
}
@@ -499,7 +470,7 @@ static int vdi_reopen_prepare(BDRVReopenState *state,
return 0;
}
static int64_t coroutine_fn vdi_co_get_block_status(BlockDriverState *bs,
static int coroutine_fn vdi_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
{
/* TODO: Check for too large sector_num (in bdrv_is_allocated or here). */
@@ -508,23 +479,12 @@ static int64_t coroutine_fn vdi_co_get_block_status(BlockDriverState *bs,
size_t sector_in_block = sector_num % s->block_sectors;
int n_sectors = s->block_sectors - sector_in_block;
uint32_t bmap_entry = le32_to_cpu(s->bmap[bmap_index]);
uint64_t offset;
int result;
logout("%p, %" PRId64 ", %d, %p\n", bs, sector_num, nb_sectors, pnum);
if (n_sectors > nb_sectors) {
n_sectors = nb_sectors;
}
*pnum = n_sectors;
result = VDI_IS_ALLOCATED(bmap_entry);
if (!result) {
return 0;
}
offset = s->header.offset_data +
(uint64_t)bmap_entry * s->block_size +
sector_in_block * SECTOR_SIZE;
return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | offset;
return VDI_IS_ALLOCATED(bmap_entry);
}
static int vdi_co_read(BlockDriverState *bs,
@@ -673,8 +633,7 @@ static int vdi_co_write(BlockDriverState *bs,
return ret;
}
static int vdi_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int vdi_create(const char *filename, QEMUOptionParameter *options)
{
int fd;
int result = 0;
@@ -709,20 +668,11 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options,
options++;
}
if (bytes > VDI_DISK_SIZE_MAX) {
result = -ENOTSUP;
error_setg(errp, "Unsupported VDI image size (size is 0x%" PRIx64
", max supported is 0x%" PRIx64 ")",
bytes, VDI_DISK_SIZE_MAX);
goto exit;
}
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
if (fd < 0) {
result = -errno;
goto exit;
return -errno;
}
/* We need enough blocks to store the given disk size,
@@ -756,7 +706,6 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options,
vdi_header_to_le(&header);
if (write(fd, &header, sizeof(header)) < 0) {
result = -errno;
goto close_and_exit;
}
if (bmap_size > 0) {
@@ -770,8 +719,6 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options,
}
if (write(fd, bmap, bmap_size) < 0) {
result = -errno;
g_free(bmap);
goto close_and_exit;
}
g_free(bmap);
}
@@ -779,16 +726,13 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options,
if (image_type == VDI_TYPE_STATIC) {
if (ftruncate(fd, sizeof(header) + bmap_size + blocks * block_size)) {
result = -errno;
goto close_and_exit;
}
}
close_and_exit:
if ((close(fd) < 0) && !result) {
if (close(fd) < 0) {
result = -errno;
}
exit:
return result;
}
@@ -835,8 +779,7 @@ static BlockDriver bdrv_vdi = {
.bdrv_close = vdi_close,
.bdrv_reopen_prepare = vdi_reopen_prepare,
.bdrv_create = vdi_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_get_block_status = vdi_co_get_block_status,
.bdrv_co_is_allocated = vdi_co_is_allocated,
.bdrv_make_empty = vdi_make_empty,
.bdrv_read = vdi_co_read,
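vdi_co_get_block_status() above returns BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID plus the host-file offset of an allocated block. The offset arithmetic can be shown standalone; the constants below mirror the 512-byte sectors and 1 MiB blocks in the diff, while the block-map contents and the simplified unallocated marker are made up for the example:

    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE        512u
    #define BLOCK_SIZE         (1024u * 1024u)          /* 1 MiB data blocks */
    #define BLOCK_SECTORS      (BLOCK_SIZE / SECTOR_SIZE)
    #define VDI_UNALLOCATED    0xffffffffu              /* simplified marker */

    /* Map a guest sector to a byte offset in the image file, or -1 if the
     * containing block is unallocated.  'bmap' holds one entry per block:
     * the index of the data block inside the image file. */
    static int64_t vdi_sector_to_file_offset(const uint32_t *bmap,
                                             uint64_t offset_data,
                                             uint64_t sector_num)
    {
        uint64_t bmap_index = sector_num / BLOCK_SECTORS;
        uint64_t sector_in_block = sector_num % BLOCK_SECTORS;
        uint32_t bmap_entry = bmap[bmap_index];

        if (bmap_entry == VDI_UNALLOCATED) {
            return -1;
        }
        return offset_data
               + (uint64_t)bmap_entry * BLOCK_SIZE
               + sector_in_block * SECTOR_SIZE;
    }

    int main(void)
    {
        uint32_t bmap[4] = { 0, VDI_UNALLOCATED, 1, 2 };   /* toy block map */
        /* Sector 2049 lives in block 1, which is unallocated: prints -1. */
        printf("%lld\n",
               (long long)vdi_sector_to_file_offset(bmap, 2 * BLOCK_SIZE, 2049));
        /* Sector 4096 lives in block 2, stored as data block 1 in the file. */
        printf("%lld\n",
               (long long)vdi_sector_to_file_offset(bmap, 2 * BLOCK_SIZE, 4096));
        return 0;
    }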


@@ -1,216 +0,0 @@
/*
* Block driver for Hyper-V VHDX Images
*
* Copyright (c) 2013 Red Hat, Inc.,
*
* Authors:
* Jeff Cody <jcody@redhat.com>
*
* This is based on the "VHDX Format Specification v1.00", published 8/25/2012
* by Microsoft:
* https://www.microsoft.com/en-us/download/details.aspx?id=34750
*
* This work is licensed under the terms of the GNU LGPL, version 2 or later.
* See the COPYING.LIB file in the top-level directory.
*
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "block/vhdx.h"
#include <uuid/uuid.h>
/*
* All the VHDX formats on disk are little endian - the following
* are helper import/export functions to correctly convert
* endianness from disk read to native cpu format, and back again.
*/
/* VHDX File Header */
void vhdx_header_le_import(VHDXHeader *h)
{
assert(h != NULL);
le32_to_cpus(&h->signature);
le32_to_cpus(&h->checksum);
le64_to_cpus(&h->sequence_number);
leguid_to_cpus(&h->file_write_guid);
leguid_to_cpus(&h->data_write_guid);
leguid_to_cpus(&h->log_guid);
le16_to_cpus(&h->log_version);
le16_to_cpus(&h->version);
le32_to_cpus(&h->log_length);
le64_to_cpus(&h->log_offset);
}
void vhdx_header_le_export(VHDXHeader *orig_h, VHDXHeader *new_h)
{
assert(orig_h != NULL);
assert(new_h != NULL);
new_h->signature = cpu_to_le32(orig_h->signature);
new_h->checksum = cpu_to_le32(orig_h->checksum);
new_h->sequence_number = cpu_to_le64(orig_h->sequence_number);
new_h->file_write_guid = orig_h->file_write_guid;
new_h->data_write_guid = orig_h->data_write_guid;
new_h->log_guid = orig_h->log_guid;
cpu_to_leguids(&new_h->file_write_guid);
cpu_to_leguids(&new_h->data_write_guid);
cpu_to_leguids(&new_h->log_guid);
new_h->log_version = cpu_to_le16(orig_h->log_version);
new_h->version = cpu_to_le16(orig_h->version);
new_h->log_length = cpu_to_le32(orig_h->log_length);
new_h->log_offset = cpu_to_le64(orig_h->log_offset);
}
/* VHDX Log Headers */
void vhdx_log_desc_le_import(VHDXLogDescriptor *d)
{
assert(d != NULL);
le32_to_cpus(&d->signature);
le32_to_cpus(&d->trailing_bytes);
le64_to_cpus(&d->leading_bytes);
le64_to_cpus(&d->file_offset);
le64_to_cpus(&d->sequence_number);
}
void vhdx_log_desc_le_export(VHDXLogDescriptor *d)
{
assert(d != NULL);
cpu_to_le32s(&d->signature);
cpu_to_le32s(&d->trailing_bytes);
cpu_to_le64s(&d->leading_bytes);
cpu_to_le64s(&d->file_offset);
cpu_to_le64s(&d->sequence_number);
}
void vhdx_log_data_le_export(VHDXLogDataSector *d)
{
assert(d != NULL);
cpu_to_le32s(&d->data_signature);
cpu_to_le32s(&d->sequence_high);
cpu_to_le32s(&d->sequence_low);
}
void vhdx_log_entry_hdr_le_import(VHDXLogEntryHeader *hdr)
{
assert(hdr != NULL);
le32_to_cpus(&hdr->signature);
le32_to_cpus(&hdr->checksum);
le32_to_cpus(&hdr->entry_length);
le32_to_cpus(&hdr->tail);
le64_to_cpus(&hdr->sequence_number);
le32_to_cpus(&hdr->descriptor_count);
leguid_to_cpus(&hdr->log_guid);
le64_to_cpus(&hdr->flushed_file_offset);
le64_to_cpus(&hdr->last_file_offset);
}
void vhdx_log_entry_hdr_le_export(VHDXLogEntryHeader *hdr)
{
assert(hdr != NULL);
cpu_to_le32s(&hdr->signature);
cpu_to_le32s(&hdr->checksum);
cpu_to_le32s(&hdr->entry_length);
cpu_to_le32s(&hdr->tail);
cpu_to_le64s(&hdr->sequence_number);
cpu_to_le32s(&hdr->descriptor_count);
cpu_to_leguids(&hdr->log_guid);
cpu_to_le64s(&hdr->flushed_file_offset);
cpu_to_le64s(&hdr->last_file_offset);
}
/* Region table entries */
void vhdx_region_header_le_import(VHDXRegionTableHeader *hdr)
{
assert(hdr != NULL);
le32_to_cpus(&hdr->signature);
le32_to_cpus(&hdr->checksum);
le32_to_cpus(&hdr->entry_count);
}
void vhdx_region_header_le_export(VHDXRegionTableHeader *hdr)
{
assert(hdr != NULL);
cpu_to_le32s(&hdr->signature);
cpu_to_le32s(&hdr->checksum);
cpu_to_le32s(&hdr->entry_count);
}
void vhdx_region_entry_le_import(VHDXRegionTableEntry *e)
{
assert(e != NULL);
leguid_to_cpus(&e->guid);
le64_to_cpus(&e->file_offset);
le32_to_cpus(&e->length);
le32_to_cpus(&e->data_bits);
}
void vhdx_region_entry_le_export(VHDXRegionTableEntry *e)
{
assert(e != NULL);
cpu_to_leguids(&e->guid);
cpu_to_le64s(&e->file_offset);
cpu_to_le32s(&e->length);
cpu_to_le32s(&e->data_bits);
}
/* Metadata headers & table */
void vhdx_metadata_header_le_import(VHDXMetadataTableHeader *hdr)
{
assert(hdr != NULL);
le64_to_cpus(&hdr->signature);
le16_to_cpus(&hdr->entry_count);
}
void vhdx_metadata_header_le_export(VHDXMetadataTableHeader *hdr)
{
assert(hdr != NULL);
cpu_to_le64s(&hdr->signature);
cpu_to_le16s(&hdr->entry_count);
}
void vhdx_metadata_entry_le_import(VHDXMetadataTableEntry *e)
{
assert(e != NULL);
leguid_to_cpus(&e->item_id);
le32_to_cpus(&e->offset);
le32_to_cpus(&e->length);
le32_to_cpus(&e->data_bits);
}
void vhdx_metadata_entry_le_export(VHDXMetadataTableEntry *e)
{
assert(e != NULL);
cpu_to_leguids(&e->item_id);
cpu_to_le32s(&e->offset);
cpu_to_le32s(&e->length);
cpu_to_le32s(&e->data_bits);
}
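The vhdx-endian.c helpers above convert every on-disk field from little-endian to host order, including the split MSGUID fields. Here is a standalone sketch of the same idea, decoding a GUID-like structure from a raw little-endian buffer; the Guid struct mirrors the MSGUID integer fields shown in the diff, while the trailing 8 raw bytes and the le16_load/le32_load helpers are assumptions for the example:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct Guid {
        uint32_t data1;
        uint16_t data2;
        uint16_t data3;
        uint8_t  data4[8];
    } Guid;

    /* Portable little-endian loads: assemble the value byte by byte so the
     * code behaves the same on little- and big-endian hosts. */
    static uint16_t le16_load(const uint8_t *p)
    {
        return (uint16_t)(p[0] | (p[1] << 8));
    }

    static uint32_t le32_load(const uint8_t *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    static void guid_from_le_bytes(Guid *g, const uint8_t buf[16])
    {
        g->data1 = le32_load(buf);
        g->data2 = le16_load(buf + 4);
        g->data3 = le16_load(buf + 6);
        memcpy(g->data4, buf + 8, 8);       /* raw bytes, no swapping */
    }

    int main(void)
    {
        const uint8_t raw[16] = { 0x78, 0x56, 0x34, 0x12, 0xcd, 0xab, 0x01, 0x20,
                                  0, 1, 2, 3, 4, 5, 6, 7 };
        Guid g;
        guid_from_le_bytes(&g, raw);
        printf("%08x-%04x-%04x\n", g.data1, g.data2, g.data3);  /* 12345678-abcd-2001 */
        return 0;
    }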

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -6,9 +6,9 @@
* Authors:
* Jeff Cody <jcody@redhat.com>
*
* This is based on the "VHDX Format Specification v1.00", published 8/25/2012
* This is based on the "VHDX Format Specification v0.95", published 4/12/2012
* by Microsoft:
* https://www.microsoft.com/en-us/download/details.aspx?id=34750
* https://www.microsoft.com/en-us/download/details.aspx?id=29681
*
* This work is licensed under the terms of the GNU LGPL, version 2 or later.
* See the COPYING.LIB file in the top-level directory.
@@ -18,11 +18,6 @@
#ifndef BLOCK_VHDX_H
#define BLOCK_VHDX_H
#define KiB (1 * 1024)
#define MiB (KiB * 1024)
#define GiB (MiB * 1024)
#define TiB ((uint64_t) GiB * 1024)
/* Structures and fields present in the VHDX file */
/* The header section has the following blocks,
@@ -35,15 +30,14 @@
* 0.........64KB...........128KB........192KB..........256KB................1MB
*/
#define VHDX_HEADER_BLOCK_SIZE (64 * 1024)
#define VHDX_HEADER_BLOCK_SIZE (64*1024)
#define VHDX_FILE_ID_OFFSET 0
#define VHDX_HEADER1_OFFSET (VHDX_HEADER_BLOCK_SIZE * 1)
#define VHDX_HEADER2_OFFSET (VHDX_HEADER_BLOCK_SIZE * 2)
#define VHDX_REGION_TABLE_OFFSET (VHDX_HEADER_BLOCK_SIZE * 3)
#define VHDX_REGION_TABLE2_OFFSET (VHDX_HEADER_BLOCK_SIZE * 4)
#define VHDX_HEADER1_OFFSET (VHDX_HEADER_BLOCK_SIZE*1)
#define VHDX_HEADER2_OFFSET (VHDX_HEADER_BLOCK_SIZE*2)
#define VHDX_REGION_TABLE_OFFSET (VHDX_HEADER_BLOCK_SIZE*3)
#define VHDX_HEADER_SECTION_END (1 * MiB)
/*
* A note on the use of MS-GUID fields. For more details on the GUID,
* please see: https://en.wikipedia.org/wiki/Globally_unique_identifier.
@@ -61,11 +55,10 @@
/* These structures are ones that are defined in the VHDX specification
* document */
#define VHDX_FILE_SIGNATURE 0x656C696678646876ULL /* "vhdxfile" in ASCII */
typedef struct VHDXFileIdentifier {
uint64_t signature; /* "vhdxfile" in ASCII */
uint16_t creator[256]; /* optional; utf-16 string to identify
the vhdx file creator. Diagnostic
the vhdx file creator. Diagnotistic
only */
} VHDXFileIdentifier;
@@ -74,7 +67,7 @@ typedef struct VHDXFileIdentifier {
* Microsoft is not just 16 bytes though - it is a structure that is defined,
* so we need to follow it here so that endianness does not trip us up */
typedef struct QEMU_PACKED MSGUID {
typedef struct MSGUID {
uint32_t data1;
uint16_t data2;
uint16_t data3;
@@ -84,15 +77,14 @@ typedef struct QEMU_PACKED MSGUID {
#define guid_eq(a, b) \
(memcmp(&(a), &(b), sizeof(MSGUID)) == 0)
#define VHDX_HEADER_SIZE (4 * 1024) /* although the vhdx_header struct in disk
is only 582 bytes, for purposes of crc
the header is the first 4KB of the 64KB
block */
#define VHDX_HEADER_SIZE (4*1024) /* although the vhdx_header struct in disk
is only 582 bytes, for purposes of crc
the header is the first 4KB of the 64KB
block */
/* The full header is 4KB, although the actual header data is much smaller.
* But for the checksum calculation, it is over the entire 4KB structure,
* not just the defined portion of it */
#define VHDX_HEADER_SIGNATURE 0x64616568
typedef struct QEMU_PACKED VHDXHeader {
uint32_t signature; /* "head" in ASCII */
uint32_t checksum; /* CRC-32C hash of the whole header */
@@ -100,7 +92,7 @@ typedef struct QEMU_PACKED VHDXHeader {
VHDX file has 2 of these headers,
and only the header with the highest
sequence number is valid */
MSGUID file_write_guid; /* 128 bit unique identifier. Must be
MSGUID file_write_guid; /* 128 bit unique identifier. Must be
updated to new, unique value before
the first modification is made to
file */
@@ -122,9 +114,9 @@ typedef struct QEMU_PACKED VHDXHeader {
there is no valid log. If non-zero,
log entries with this guid are
valid. */
uint16_t log_version; /* version of the log format. Must be
set to zero */
uint16_t version; /* version of the vhdx file. Currently,
uint16_t log_version; /* version of the log format. Mustn't be
zero, unless log_guid is also zero */
uint16_t version; /* version of th evhdx file. Currently,
only supported version is "1" */
uint32_t log_length; /* length of the log. Must be multiple
of 1MB */
@@ -133,7 +125,6 @@ typedef struct QEMU_PACKED VHDXHeader {
} VHDXHeader;
/* Header for the region table block */
#define VHDX_REGION_SIGNATURE 0x69676572 /* "regi" in ASCII */
typedef struct QEMU_PACKED VHDXRegionTableHeader {
uint32_t signature; /* "regi" in ASCII */
uint32_t checksum; /* CRC-32C hash of the 64KB table */
@@ -160,10 +151,7 @@ typedef struct QEMU_PACKED VHDXRegionTableEntry {
/* ---- LOG ENTRY STRUCTURES ---- */
#define VHDX_LOG_MIN_SIZE (1024 * 1024)
#define VHDX_LOG_SECTOR_SIZE 4096
#define VHDX_LOG_HDR_SIZE 64
#define VHDX_LOG_SIGNATURE 0x65676f6c
typedef struct QEMU_PACKED VHDXLogEntryHeader {
uint32_t signature; /* "loge" in ASCII */
uint32_t checksum; /* CRC-32C hash of the 64KB table */
@@ -180,14 +168,13 @@ typedef struct QEMU_PACKED VHDXLogEntryHeader {
vhdx_header. If not found in
vhdx_header, it is invalid */
uint64_t flushed_file_offset; /* see spec for full details - this
should be vhdx file size in bytes */
sould be vhdx file size in bytes */
uint64_t last_file_offset; /* size in bytes that all allocated
file structures fit into */
} VHDXLogEntryHeader;
#define VHDX_LOG_DESC_SIZE 32
#define VHDX_LOG_DESC_SIGNATURE 0x63736564
#define VHDX_LOG_ZERO_SIGNATURE 0x6f72657a
typedef struct QEMU_PACKED VHDXLogDescriptor {
uint32_t signature; /* "zero" or "desc" in ASCII */
union {
@@ -207,7 +194,6 @@ typedef struct QEMU_PACKED VHDXLogDescriptor {
vhdx_log_entry_header */
} VHDXLogDescriptor;
#define VHDX_LOG_DATA_SIGNATURE 0x61746164
typedef struct QEMU_PACKED VHDXLogDataSector {
uint32_t data_signature; /* "data" in ASCII */
uint32_t sequence_high; /* 4 MSB of 8 byte sequence_number */
@@ -226,19 +212,19 @@ typedef struct QEMU_PACKED VHDXLogDataSector {
#define PAYLOAD_BLOCK_UNDEFINED 1
#define PAYLOAD_BLOCK_ZERO 2
#define PAYLOAD_BLOCK_UNMAPPED 5
#define PAYLOAD_BLOCK_FULLY_PRESENT 6
#define PAYLOAD_BLOCK_FULL_PRESENT 6
#define PAYLOAD_BLOCK_PARTIALLY_PRESENT 7
#define SB_BLOCK_NOT_PRESENT 0
#define SB_BLOCK_PRESENT 6
/* per the spec */
#define VHDX_MAX_SECTORS_PER_BLOCK (1 << 23)
#define VHDX_MAX_SECTORS_PER_BLOCK (1<<23)
/* upper 44 bits are the file offset in 1MB units lower 3 bits are the state
other bits are reserved */
#define VHDX_BAT_STATE_BIT_MASK 0x07
#define VHDX_BAT_FILE_OFF_MASK 0xFFFFFFFFFFF00000ULL /* upper 44 bits */
#define VHDX_BAT_FILE_OFF_BITS (64-44)
typedef uint64_t VHDXBatEntry;
/* ---- METADATA REGION STRUCTURES ---- */
@@ -247,7 +233,6 @@ typedef uint64_t VHDXBatEntry;
#define VHDX_METADATA_MAX_ENTRIES 2047 /* not including the header */
#define VHDX_METADATA_TABLE_MAX_SIZE \
(VHDX_METADATA_ENTRY_SIZE * (VHDX_METADATA_MAX_ENTRIES+1))
#define VHDX_METADATA_SIGNATURE 0x617461646174656DULL /* "metadata" in ASCII */
typedef struct QEMU_PACKED VHDXMetadataTableHeader {
uint64_t signature; /* "metadata" in ASCII */
uint16_t reserved;
@@ -267,8 +252,8 @@ typedef struct QEMU_PACKED VHDXMetadataTableEntry {
metadata region */
/* note: if length = 0, so is offset */
uint32_t length; /* length of metadata. <= 1MB. */
uint32_t data_bits; /* least-significant 3 bits are flags,
the rest are reserved (see above) */
uint32_t data_bits; /* least-significant 3 bits are flags, the
rest are reserved (see above) */
uint32_t reserved2;
} VHDXMetadataTableEntry;
@@ -277,16 +262,13 @@ typedef struct QEMU_PACKED VHDXMetadataTableEntry {
If set indicates a fixed
size VHDX file */
#define VHDX_PARAMS_HAS_PARENT 0x02 /* has parent / backing file */
#define VHDX_BLOCK_SIZE_MIN (1 * MiB)
#define VHDX_BLOCK_SIZE_MAX (256 * MiB)
typedef struct QEMU_PACKED VHDXFileParameters {
uint32_t block_size; /* size of each payload block, always
power of 2, <= 256MB and >= 1MB. */
uint32_t data_bits; /* least-significant 2 bits are flags,
the rest are reserved (see above) */
uint32_t data_bits; /* least-significant 2 bits are flags, the rest
are reserved (see above) */
} VHDXFileParameters;
#define VHDX_MAX_IMAGE_SIZE ((uint64_t) 64 * TiB)
typedef struct QEMU_PACKED VHDXVirtualDiskSize {
uint64_t virtual_disk_size; /* Size of the virtual disk, in bytes.
Must be multiple of the sector size,
@@ -294,7 +276,7 @@ typedef struct QEMU_PACKED VHDXVirtualDiskSize {
} VHDXVirtualDiskSize;
typedef struct QEMU_PACKED VHDXPage83Data {
MSGUID page_83_data; /* unique id for scsi devices that
MSGUID page_83_data[16]; /* unique id for scsi devices that
support page 0x83 */
} VHDXPage83Data;
@@ -309,7 +291,7 @@ typedef struct QEMU_PACKED VHDXVirtualDiskPhysicalSectorSize {
} VHDXVirtualDiskPhysicalSectorSize;
typedef struct QEMU_PACKED VHDXParentLocatorHeader {
MSGUID locator_type; /* type of the parent virtual disk. */
MSGUID locator_type[16]; /* type of the parent virtual disk. */
uint16_t reserved;
uint16_t key_value_count; /* number of key/value pairs for this
locator */
@@ -326,125 +308,18 @@ typedef struct QEMU_PACKED VHDXParentLocatorEntry {
/* ----- END VHDX SPECIFICATION STRUCTURES ---- */
typedef struct VHDXMetadataEntries {
VHDXMetadataTableEntry file_parameters_entry;
VHDXMetadataTableEntry virtual_disk_size_entry;
VHDXMetadataTableEntry page83_data_entry;
VHDXMetadataTableEntry logical_sector_size_entry;
VHDXMetadataTableEntry phys_sector_size_entry;
VHDXMetadataTableEntry parent_locator_entry;
uint16_t present;
} VHDXMetadataEntries;
typedef struct VHDXLogEntries {
uint64_t offset;
uint64_t length;
uint32_t write;
uint32_t read;
VHDXLogEntryHeader *hdr;
void *desc_buffer;
uint64_t sequence;
uint32_t tail;
} VHDXLogEntries;
typedef struct VHDXRegionEntry {
uint64_t start;
uint64_t end;
QLIST_ENTRY(VHDXRegionEntry) entries;
} VHDXRegionEntry;
typedef struct BDRVVHDXState {
CoMutex lock;
int curr_header;
VHDXHeader *headers[2];
VHDXRegionTableHeader rt;
VHDXRegionTableEntry bat_rt; /* region table for the BAT */
VHDXRegionTableEntry metadata_rt; /* region table for the metadata */
VHDXMetadataTableHeader metadata_hdr;
VHDXMetadataEntries metadata_entries;
VHDXFileParameters params;
uint32_t block_size;
uint32_t block_size_bits;
uint32_t sectors_per_block;
uint32_t sectors_per_block_bits;
uint64_t virtual_disk_size;
uint32_t logical_sector_size;
uint32_t physical_sector_size;
uint64_t chunk_ratio;
uint32_t chunk_ratio_bits;
uint32_t logical_sector_size_bits;
uint32_t bat_entries;
VHDXBatEntry *bat;
uint64_t bat_offset;
bool first_visible_write;
MSGUID session_guid;
VHDXLogEntries log;
VHDXParentLocatorHeader parent_header;
VHDXParentLocatorEntry *parent_entries;
Error *migration_blocker;
bool log_replayed_on_open;
QLIST_HEAD(VHDXRegionHead, VHDXRegionEntry) regions;
} BDRVVHDXState;
void vhdx_guid_generate(MSGUID *guid);
int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
MSGUID *log_guid);
uint32_t vhdx_update_checksum(uint8_t *buf, size_t size, int crc_offset);
uint32_t vhdx_checksum_calc(uint32_t crc, uint8_t *buf, size_t size,
int crc_offset);
bool vhdx_checksum_is_valid(uint8_t *buf, size_t size, int crc_offset);
int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
Error **errp);
int vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset);
static inline void leguid_to_cpus(MSGUID *guid)
static void leguid_to_cpus(MSGUID *guid)
{
le32_to_cpus(&guid->data1);
le16_to_cpus(&guid->data2);
le16_to_cpus(&guid->data3);
}
static inline void cpu_to_leguids(MSGUID *guid)
{
cpu_to_le32s(&guid->data1);
cpu_to_le16s(&guid->data2);
cpu_to_le16s(&guid->data3);
}
void vhdx_header_le_import(VHDXHeader *h);
void vhdx_header_le_export(VHDXHeader *orig_h, VHDXHeader *new_h);
void vhdx_log_desc_le_import(VHDXLogDescriptor *d);
void vhdx_log_desc_le_export(VHDXLogDescriptor *d);
void vhdx_log_data_le_export(VHDXLogDataSector *d);
void vhdx_log_entry_hdr_le_import(VHDXLogEntryHeader *hdr);
void vhdx_log_entry_hdr_le_export(VHDXLogEntryHeader *hdr);
void vhdx_region_header_le_import(VHDXRegionTableHeader *hdr);
void vhdx_region_header_le_export(VHDXRegionTableHeader *hdr);
void vhdx_region_entry_le_import(VHDXRegionTableEntry *e);
void vhdx_region_entry_le_export(VHDXRegionTableEntry *e);
void vhdx_metadata_header_le_import(VHDXMetadataTableHeader *hdr);
void vhdx_metadata_header_le_export(VHDXMetadataTableHeader *hdr);
void vhdx_metadata_entry_le_import(VHDXMetadataTableEntry *e);
void vhdx_metadata_entry_le_export(VHDXMetadataTableEntry *e);
int vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);
#endif
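The BAT entry comment and masks above (VHDX_BAT_STATE_BIT_MASK, VHDX_BAT_FILE_OFF_MASK) say the low 3 bits hold the block state and the upper 44 bits a 1 MiB-aligned file offset. A tiny decoder using exactly those masks; the entry value itself is made up for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define VHDX_BAT_STATE_BIT_MASK     0x07ULL
    #define VHDX_BAT_FILE_OFF_MASK      0xFFFFFFFFFFF00000ULL   /* upper 44 bits */
    #define PAYLOAD_BLOCK_FULLY_PRESENT 6

    int main(void)
    {
        /* Hypothetical entry: payload block fully present at 3 MiB. */
        uint64_t entry = (3ULL << 20) | PAYLOAD_BLOCK_FULLY_PRESENT;

        uint64_t state  = entry & VHDX_BAT_STATE_BIT_MASK;
        uint64_t offset = entry & VHDX_BAT_FILE_OFF_MASK;   /* 1 MiB-aligned byte offset */

        printf("state=%llu offset=%llu\n",
               (unsigned long long)state, (unsigned long long)offset);
        return 0;
    }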

File diff suppressed because it is too large.


@@ -45,10 +45,8 @@ enum vhd_type {
// Seconds since Jan 1, 2000 0:00:00 (UTC)
#define VHD_TIMESTAMP_BASE 946684800
#define VHD_MAX_SECTORS (65535LL * 255 * 255)
// always big-endian
typedef struct vhd_footer {
struct vhd_footer {
char creator[8]; // "conectix"
uint32_t features;
uint32_t version;
@@ -81,9 +79,9 @@ typedef struct vhd_footer {
uint8_t uuid[16];
uint8_t in_saved_state;
} QEMU_PACKED VHDFooter;
};
typedef struct vhd_dyndisk_header {
struct vhd_dyndisk_header {
char magic[8]; // "cxsparse"
// Offset of next header structure, 0xFFFFFFFF if none
@@ -113,7 +111,7 @@ typedef struct vhd_dyndisk_header {
uint32_t reserved;
uint64_t data_offset;
} parent_locator[8];
} QEMU_PACKED VHDDynDiskHeader;
};
typedef struct BDRVVPCState {
CoMutex lock;
@@ -157,16 +155,14 @@ static int vpc_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int vpc_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVVPCState *s = bs->opaque;
int i;
VHDFooter *footer;
VHDDynDiskHeader *dyndisk_header;
struct vhd_footer* footer;
struct vhd_dyndisk_header* dyndisk_header;
uint8_t buf[HEADER_SIZE];
uint32_t checksum;
uint64_t computed_size;
int disk_type = VHD_DYNAMIC;
int ret;
@@ -175,7 +171,7 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
footer = (VHDFooter *) s->footer_buf;
footer = (struct vhd_footer*) s->footer_buf;
if (strncmp(footer->creator, "conectix", 8)) {
int64_t offset = bdrv_getlength(bs->file);
if (offset < 0) {
@@ -193,8 +189,7 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
if (strncmp(footer->creator, "conectix", 8)) {
error_setg(errp, "invalid VPC image");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
disk_type = VHD_FIXED;
@@ -215,17 +210,8 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
bs->total_sectors = (int64_t)
be16_to_cpu(footer->cyls) * footer->heads * footer->secs_per_cyl;
/* images created with disk2vhd report a far higher virtual size
* than expected with the cyls * heads * sectors_per_cyl formula.
* use the footer->size instead if the image was created with
* disk2vhd.
*/
if (!strncmp(footer->creator_app, "d2v", 4)) {
bs->total_sectors = be64_to_cpu(footer->size) / BDRV_SECTOR_SIZE;
}
/* Allow a maximum disk size of approximately 2 TB */
if (bs->total_sectors >= VHD_MAX_SECTORS) {
if (bs->total_sectors >= 65535LL * 255 * 255) {
ret = -EFBIG;
goto fail;
}
@@ -237,7 +223,7 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
dyndisk_header = (VHDDynDiskHeader *) buf;
dyndisk_header = (struct vhd_dyndisk_header *) buf;
if (strncmp(dyndisk_header->magic, "cxsparse", 8)) {
ret = -EINVAL;
@@ -245,31 +231,10 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
}
s->block_size = be32_to_cpu(dyndisk_header->block_size);
if (!is_power_of_2(s->block_size) || s->block_size < BDRV_SECTOR_SIZE) {
error_setg(errp, "Invalid block size %" PRIu32, s->block_size);
ret = -EINVAL;
goto fail;
}
s->bitmap_size = ((s->block_size / (8 * 512)) + 511) & ~511;
s->max_table_entries = be32_to_cpu(dyndisk_header->max_table_entries);
if ((bs->total_sectors * 512) / s->block_size > 0xffffffffU) {
ret = -EINVAL;
goto fail;
}
if (s->max_table_entries > (VHD_MAX_SECTORS * 512) / s->block_size) {
ret = -EINVAL;
goto fail;
}
computed_size = (uint64_t) s->max_table_entries * s->block_size;
if (computed_size < bs->total_sectors * 512) {
ret = -EINVAL;
goto fail;
}
s->pagetable = qemu_blockalign(bs, s->max_table_entries * 4);
s->pagetable = g_malloc(s->max_table_entries * 4);
s->bat_offset = be64_to_cpu(dyndisk_header->table_offset);
@@ -294,13 +259,6 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
}
}
if (s->free_data_block_offset > bdrv_getlength(bs->file)) {
error_setg(errp, "block-vpc: free_data_block_offset points after "
"the end of file. The image has been truncated.");
ret = -EINVAL;
goto fail;
}
s->last_bitmap_offset = (int64_t) -1;
#ifdef CACHE
@@ -322,7 +280,7 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
return 0;
fail:
qemu_vfree(s->pagetable);
g_free(s->pagetable);
#ifdef CACHE
g_free(s->pageentry_u8);
#endif
@@ -480,19 +438,6 @@ fail:
return -1;
}
static int vpc_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
BDRVVPCState *s = (BDRVVPCState *)bs->opaque;
VHDFooter *footer = (VHDFooter *) s->footer_buf;
if (cpu_to_be32(footer->type) != VHD_FIXED) {
bdi->cluster_size = s->block_size;
}
bdi->unallocated_blocks_are_zero = true;
return 0;
}
static int vpc_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
@@ -500,7 +445,7 @@ static int vpc_read(BlockDriverState *bs, int64_t sector_num,
int ret;
int64_t offset;
int64_t sectors, sectors_per_block;
VHDFooter *footer = (VHDFooter *) s->footer_buf;
struct vhd_footer *footer = (struct vhd_footer *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_read(bs->file, sector_num, buf, nb_sectors);
@@ -549,7 +494,7 @@ static int vpc_write(BlockDriverState *bs, int64_t sector_num,
int64_t offset;
int64_t sectors, sectors_per_block;
int ret;
VHDFooter *footer = (VHDFooter *) s->footer_buf;
struct vhd_footer *footer = (struct vhd_footer *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_write(bs->file, sector_num, buf, nb_sectors);
@@ -651,8 +596,8 @@ static int calculate_geometry(int64_t total_sectors, uint16_t* cyls,
static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
{
VHDDynDiskHeader *dyndisk_header =
(VHDDynDiskHeader *) buf;
struct vhd_dyndisk_header* dyndisk_header =
(struct vhd_dyndisk_header*) buf;
size_t block_size, num_bat_entries;
int i;
int ret = -EIO;
@@ -738,11 +683,10 @@ static int create_fixed_disk(int fd, uint8_t *buf, int64_t total_size)
return ret;
}
static int vpc_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int vpc_create(const char *filename, QEMUOptionParameter *options)
{
uint8_t buf[1024];
VHDFooter *footer = (VHDFooter *) buf;
struct vhd_footer *footer = (struct vhd_footer *) buf;
QEMUOptionParameter *disk_type_param;
int fd, i;
uint16_t cyls = 0;
@@ -842,22 +786,10 @@ static int vpc_create(const char *filename, QEMUOptionParameter *options,
return ret;
}
static int vpc_has_zero_init(BlockDriverState *bs)
{
BDRVVPCState *s = bs->opaque;
VHDFooter *footer = (VHDFooter *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_has_zero_init(bs->file);
} else {
return 1;
}
}
static void vpc_close(BlockDriverState *bs)
{
BDRVVPCState *s = bs->opaque;
qemu_vfree(s->pagetable);
g_free(s->pagetable);
#ifdef CACHE
g_free(s->pageentry_u8);
#endif
@@ -886,19 +818,16 @@ static BlockDriver bdrv_vpc = {
.format_name = "vpc",
.instance_size = sizeof(BDRVVPCState),
.bdrv_probe = vpc_probe,
.bdrv_open = vpc_open,
.bdrv_close = vpc_close,
.bdrv_reopen_prepare = vpc_reopen_prepare,
.bdrv_create = vpc_create,
.bdrv_probe = vpc_probe,
.bdrv_open = vpc_open,
.bdrv_close = vpc_close,
.bdrv_reopen_prepare = vpc_reopen_prepare,
.bdrv_create = vpc_create,
.bdrv_read = vpc_co_read,
.bdrv_write = vpc_co_write,
.bdrv_get_info = vpc_get_info,
.create_options = vpc_create_options,
.bdrv_has_zero_init = vpc_has_zero_init,
.create_options = vpc_create_options,
};
static void bdrv_vpc_init(void)
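The vpc.c open path above validates the dynamic-disk geometry: the block size must be a power of two and at least one sector, the BAT index must fit in 32 bits, max_table_entries must not exceed what the largest legal disk needs, and max_table_entries * block_size must cover the virtual disk. Below is a compact standalone version of those checks; vhd_dyndisk_checks and the sample geometries are inventions for illustration, not the driver code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE       512ULL
    #define VHD_MAX_SECTORS   (65535ULL * 255 * 255)

    static bool is_power_of_2(uint64_t x)
    {
        return x != 0 && (x & (x - 1)) == 0;
    }

    /* Return true if the dynamic-disk geometry is self-consistent. */
    static bool vhd_dyndisk_checks(uint64_t total_sectors,
                                   uint32_t block_size,
                                   uint32_t max_table_entries)
    {
        if (!is_power_of_2(block_size) || block_size < SECTOR_SIZE) {
            return false;                   /* bogus block size */
        }
        if (total_sectors * SECTOR_SIZE / block_size > 0xffffffffULL) {
            return false;                   /* BAT index would overflow 32 bits */
        }
        if (max_table_entries > VHD_MAX_SECTORS * SECTOR_SIZE / block_size) {
            return false;                   /* table larger than any legal disk */
        }
        uint64_t computed_size = (uint64_t)max_table_entries * block_size;
        if (computed_size < total_sectors * SECTOR_SIZE) {
            return false;                   /* BAT does not cover the disk */
        }
        return true;
    }

    int main(void)
    {
        /* 2 GiB disk, 2 MiB blocks, 1024 BAT entries: consistent. */
        printf("%d\n", vhd_dyndisk_checks(4 * 1024 * 1024, 2 * 1024 * 1024, 1024));
        /* Same disk, but a BAT that only covers 1 GiB: rejected. */
        printf("%d\n", vhd_dyndisk_checks(4 * 1024 * 1024, 2 * 1024 * 1024, 512));
        return 0;
    }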


@@ -266,7 +266,8 @@ typedef struct mbr_t {
} QEMU_PACKED mbr_t;
typedef struct direntry_t {
uint8_t name[8 + 3];
uint8_t name[8];
uint8_t extension[3];
uint8_t attributes;
uint8_t reserved[2];
uint16_t ctime;
@@ -517,9 +518,11 @@ static inline uint8_t fat_chksum(const direntry_t* entry)
uint8_t chksum=0;
int i;
for (i = 0; i < ARRAY_SIZE(entry->name); i++) {
chksum = (((chksum & 0xfe) >> 1) |
((chksum & 0x01) ? 0x80 : 0)) + entry->name[i];
for(i=0;i<11;i++) {
unsigned char c;
c = (i < 8) ? entry->name[i] : entry->extension[i-8];
chksum=(((chksum&0xfe)>>1)|((chksum&0x01)?0x80:0)) + c;
}
return chksum;
@@ -614,7 +617,7 @@ static inline direntry_t* create_short_and_long_name(BDRVVVFATState* s,
if(is_dot) {
entry=array_get_next(&(s->directory));
memset(entry->name, 0x20, sizeof(entry->name));
memset(entry->name,0x20,11);
memcpy(entry->name,filename,strlen(filename));
return entry;
}
@@ -629,14 +632,12 @@ static inline direntry_t* create_short_and_long_name(BDRVVVFATState* s,
i = 8;
entry=array_get_next(&(s->directory));
memset(entry->name, 0x20, sizeof(entry->name));
memset(entry->name,0x20,11);
memcpy(entry->name, filename, i);
if (j > 0) {
for (i = 0; i < 3 && filename[j + 1 + i]; i++) {
entry->name[8 + i] = filename[j + 1 + i];
}
}
if(j > 0)
for (i = 0; i < 3 && filename[j+1+i]; i++)
entry->extension[i] = filename[j+1+i];
/* upcase & remove unwanted characters */
for(i=10;i>=0;i--) {
@@ -787,9 +788,7 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
s->current_mapping->path=buffer;
s->current_mapping->read_only =
(st.st_mode & (S_IWUSR | S_IWGRP | S_IWOTH)) == 0;
} else {
g_free(buffer);
}
}
}
closedir(dir);
@@ -833,8 +832,7 @@ static inline off_t cluster2sector(BDRVVVFATState* s, uint32_t cluster_num)
}
static int init_directories(BDRVVVFATState* s,
const char *dirname, int heads, int secs,
Error **errp)
const char *dirname, int heads, int secs)
{
bootsector_t* bootsector;
mapping_t* mapping;
@@ -863,7 +861,8 @@ static int init_directories(BDRVVVFATState* s,
{
direntry_t* entry=array_get_next(&(s->directory));
entry->attributes=0x28; /* archive | volume label */
memcpy(entry->name, "QEMU VVFAT ", sizeof(entry->name));
memcpy(entry->name,"QEMU VVF",8);
memcpy(entry->extension,"AT ",3);
}
/* Now build FAT, and write back information into directory */
@@ -895,8 +894,8 @@ static int init_directories(BDRVVVFATState* s,
if (mapping->mode & MODE_DIRECTORY) {
mapping->begin = cluster;
if(read_directory(s, i)) {
error_setg(errp, "Could not read directory %s",
mapping->path);
fprintf(stderr, "Could not read directory %s\n",
mapping->path);
return -1;
}
mapping = array_get(&(s->mapping), i);
@@ -922,10 +921,9 @@ static int init_directories(BDRVVVFATState* s,
cluster = mapping->end;
if(cluster > s->cluster_count) {
error_setg(errp,
"Directory does not fit in FAT%d (capacity %.2f MB)",
s->fat_type, s->sector_count / 2000.0);
return -1;
fprintf(stderr,"Directory does not fit in FAT%d (capacity %.2f MB)\n",
s->fat_type, s->sector_count / 2000.0);
return -EINVAL;
}
/* fix fat for entry */
@@ -983,7 +981,7 @@ static int init_directories(BDRVVVFATState* s,
static BDRVVVFATState *vvv = NULL;
#endif
static int enable_write_target(BDRVVVFATState *s, Error **errp);
static int enable_write_target(BDRVVVFATState *s);
static int is_consistent(BDRVVVFATState *s);
static void vvfat_rebind(BlockDriverState *bs)
@@ -1067,8 +1065,7 @@ static void vvfat_parse_filename(const char *filename, QDict *options,
qdict_put(options, "rw", qbool_from_int(rw));
}
static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int vvfat_open(BlockDriverState *bs, QDict *options, int flags)
{
BDRVVVFATState *s = bs->opaque;
int cyls, heads, secs;
@@ -1087,17 +1084,19 @@ DLOG(if (stderr == NULL) {
setbuf(stderr, NULL);
})
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto fail;
}
dirname = qemu_opt_get(opts, "dir");
if (!dirname) {
error_setg(errp, "vvfat block driver requires a 'dir' option");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "vvfat block driver requires "
"a 'dir' option");
ret = -EINVAL;
goto fail;
}
@@ -1123,7 +1122,6 @@ DLOG(if (stderr == NULL) {
if (!s->fat_type) {
s->fat_type = 16;
}
s->first_sectors_number = 0x40;
cyls = s->fat_type == 12 ? 64 : 1024;
heads = 16;
secs = 63;
@@ -1138,7 +1136,8 @@ DLOG(if (stderr == NULL) {
case 12:
break;
default:
error_setg(errp, "Valid FAT types are only 12, 16 and 32");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "Valid FAT types are only "
"12, 16 and 32");
ret = -EINVAL;
goto fail;
}
@@ -1151,6 +1150,7 @@ DLOG(if (stderr == NULL) {
s->current_cluster=0xffffffff;
s->first_sectors_number=0x40;
/* read only is the default for safety */
bs->read_only = 1;
s->qcow = s->write_target = NULL;
@@ -1164,8 +1164,8 @@ DLOG(if (stderr == NULL) {
s->sector_count = cyls * heads * secs - (s->first_sectors_number - 1);
if (qemu_opt_get_bool(opts, "rw", false)) {
ret = enable_write_target(s, errp);
if (ret < 0) {
if (enable_write_target(s)) {
ret = -EIO;
goto fail;
}
bs->read_only = 0;
@@ -1173,7 +1173,7 @@ DLOG(if (stderr == NULL) {
bs->total_sectors = cyls * heads * secs;
if (init_directories(s, dirname, heads, secs, errp)) {
if (init_directories(s, dirname, heads, secs)) {
ret = -EIO;
goto fail;
}
@@ -1590,20 +1590,17 @@ static int parse_short_name(BDRVVVFATState* s,
lfn->name[i] = direntry->name[i];
}
for (j = 2; j >= 0 && direntry->name[8 + j] == ' '; j--) {
}
for (j = 2; j >= 0 && direntry->extension[j] == ' '; j--);
if (j >= 0) {
lfn->name[i++] = '.';
lfn->name[i + j + 1] = '\0';
for (;j >= 0; j--) {
uint8_t c = direntry->name[8 + j];
if (c <= ' ' || c > 0x7f) {
return -2;
} else if (s->downcase_short_names) {
lfn->name[i + j] = qemu_tolower(c);
} else {
lfn->name[i + j] = c;
}
if (direntry->extension[j] <= ' ' || direntry->extension[j] > 0x7f)
return -2;
else if (s->downcase_short_names)
lfn->name[i + j] = qemu_tolower(direntry->extension[j]);
else
lfn->name[i + j] = direntry->extension[j];
}
} else
lfn->name[i + j + 1] = '\0';
@@ -1868,7 +1865,7 @@ static int check_directory_consistency(BDRVVVFATState *s,
if (s->used_clusters[cluster_num] & USED_ANY) {
fprintf(stderr, "cluster %d used more than once\n", (int)cluster_num);
goto fail;
return 0;
}
s->used_clusters[cluster_num] = USED_DIRECTORY;
@@ -2877,17 +2874,16 @@ static coroutine_fn int vvfat_co_write(BlockDriverState *bs, int64_t sector_num,
return ret;
}
static int64_t coroutine_fn vvfat_co_get_block_status(BlockDriverState *bs,
static int coroutine_fn vvfat_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int* n)
{
BDRVVVFATState* s = bs->opaque;
*n = s->sector_count - sector_num;
if (*n > nb_sectors) {
*n = nb_sectors;
} else if (*n < 0) {
return 0;
}
return BDRV_BLOCK_DATA;
if (*n > nb_sectors)
*n = nb_sectors;
else if (*n < 0)
return 0;
return 1;
}
static int write_target_commit(BlockDriverState *bs, int64_t sector_num,
@@ -2898,7 +2894,7 @@ static int write_target_commit(BlockDriverState *bs, int64_t sector_num,
static void write_target_close(BlockDriverState *bs) {
BDRVVVFATState* s = *((BDRVVVFATState**) bs->opaque);
bdrv_unref(s->qcow);
bdrv_delete(s->qcow);
g_free(s->qcow_filename);
}
@@ -2908,7 +2904,7 @@ static BlockDriver vvfat_write_target = {
.bdrv_close = write_target_close,
};
static int enable_write_target(BDRVVVFATState *s, Error **errp)
static int enable_write_target(BDRVVVFATState *s)
{
BlockDriver *bdrv_qcow;
QEMUOptionParameter *options;
@@ -2921,8 +2917,9 @@ static int enable_write_target(BDRVVVFATState *s, Error **errp)
s->qcow_filename = g_malloc(1024);
ret = get_tmp_filename(s->qcow_filename, 1024);
if (ret < 0) {
error_setg_errno(errp, -ret, "can't create temporary file");
goto err;
g_free(s->qcow_filename);
s->qcow_filename = NULL;
return ret;
}
bdrv_qcow = bdrv_find_format("qcow");
@@ -2930,35 +2927,30 @@ static int enable_write_target(BDRVVVFATState *s, Error **errp)
set_option_parameter_int(options, BLOCK_OPT_SIZE, s->sector_count * 512);
set_option_parameter(options, BLOCK_OPT_BACKING_FILE, "fat:");
ret = bdrv_create(bdrv_qcow, s->qcow_filename, options, errp);
free_option_parameters(options);
if (ret < 0) {
goto err;
if (bdrv_create(bdrv_qcow, s->qcow_filename, options) < 0)
return -1;
s->qcow = bdrv_new("");
if (s->qcow == NULL) {
return -1;
}
s->qcow = NULL;
ret = bdrv_open(&s->qcow, s->qcow_filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH,
bdrv_qcow, errp);
ret = bdrv_open(s->qcow, s->qcow_filename, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH, bdrv_qcow);
if (ret < 0) {
goto err;
return ret;
}
#ifndef _WIN32
unlink(s->qcow_filename);
#endif
bdrv_set_backing_hd(s->bs, bdrv_new("", &error_abort));
s->bs->backing_hd = calloc(sizeof(BlockDriverState), 1);
s->bs->backing_hd->drv = &vvfat_write_target;
s->bs->backing_hd->opaque = g_malloc(sizeof(void*));
*(void**)s->bs->backing_hd->opaque = s;
return 0;
err:
g_free(s->qcow_filename);
s->qcow_filename = NULL;
return ret;
}
static void vvfat_close(BlockDriverState *bs)
@@ -2989,7 +2981,7 @@ static BlockDriver bdrv_vvfat = {
.bdrv_read = vvfat_co_read,
.bdrv_write = vvfat_co_write,
.bdrv_co_get_block_status = vvfat_co_get_block_status,
.bdrv_co_is_allocated = vvfat_co_is_allocated,
};
static void bdrv_vvfat_init(void)
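A recurring change in the vvfat.c hunks above is treating the 8.3 short name as one 11-byte field instead of separate name/extension arrays; fat_chksum() rotates the running checksum right by one bit and adds each of those 11 bytes, which is how long-file-name entries reference their short entry. A standalone version of that checksum (the sample name is made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Rotate right by one, then add the next byte of the space-padded
     * 11-character 8.3 name. */
    static uint8_t fat_chksum(const uint8_t name[11])
    {
        uint8_t chksum = 0;
        for (int i = 0; i < 11; i++) {
            chksum = (((chksum & 0xfe) >> 1) |
                      ((chksum & 0x01) ? 0x80 : 0)) + name[i];
        }
        return chksum;
    }

    int main(void)
    {
        uint8_t name[11];
        memset(name, ' ', sizeof(name));        /* 8.3 names are space padded */
        memcpy(name, "FOO", 3);
        memcpy(name + 8, "TXT", 3);             /* "foo.txt" -> "FOO     TXT" */
        printf("0x%02x\n", fat_chksum(name));
        return 0;
    }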


@@ -25,6 +25,7 @@
#include "qemu/timer.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu-common.h"
#include "block/aio.h"
#include "raw-aio.h"
#include "qemu/event_notifier.h"
@@ -105,6 +106,13 @@ static void win32_aio_completion_cb(EventNotifier *e)
}
}
static int win32_aio_flush_cb(EventNotifier *e)
{
QEMUWin32AIOState *s = container_of(e, QEMUWin32AIOState, e);
return (s->count > 0) ? 1 : 0;
}
static void win32_aio_cancel(BlockDriverAIOCB *blockacb)
{
QEMUWin32AIOCB *waiocb = (QEMUWin32AIOCB *)blockacb;
@@ -194,7 +202,8 @@ QEMUWin32AIOState *win32_aio_init(void)
goto out_close_efd;
}
qemu_aio_set_event_notifier(&s->e, win32_aio_completion_cb);
qemu_aio_set_event_notifier(&s->e, win32_aio_completion_cb,
win32_aio_flush_cb);
return s;
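win32_aio_flush_cb() above recovers its QEMUWin32AIOState from the embedded EventNotifier with container_of(). A minimal standalone illustration of that idiom; the EventSource and AioState names are invented for the example:

    #include <stddef.h>
    #include <stdio.h>

    /* Classic container_of: step back from a member pointer to the struct
     * that embeds it. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    typedef struct EventSource {
        int fd;
    } EventSource;

    typedef struct AioState {
        int pending;            /* outstanding requests */
        EventSource e;          /* embedded member handed to callbacks */
    } AioState;

    /* The callback only receives the embedded member, like the flush_cb above. */
    static int aio_flush_cb(EventSource *e)
    {
        AioState *s = container_of(e, AioState, e);
        return s->pending > 0;
    }

    int main(void)
    {
        AioState s = { .pending = 2, .e = { .fd = 42 } };
        printf("%d\n", aio_flush_cb(&s.e));     /* prints 1 */
        return 0;
    }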


@@ -27,8 +27,8 @@ static void nbd_accept(void *opaque)
socklen_t addr_len = sizeof(addr);
int fd = accept(server_fd, (struct sockaddr *)&addr, &addr_len);
if (fd >= 0 && !nbd_client_new(NULL, fd, nbd_client_put)) {
close(fd);
if (fd >= 0) {
nbd_client_new(NULL, fd, nbd_client_put);
}
}
@@ -69,6 +69,12 @@ static void nbd_close_notifier(Notifier *n, void *data)
g_free(cn);
}
static void nbd_server_put_ref(NBDExport *exp)
{
BlockDriverState *bs = nbd_export_get_blockdev(exp);
drive_put_ref(drive_get_by_blockdev(bs));
}
void qmp_nbd_server_add(const char *device, bool has_writable, bool writable,
Error **errp)
{
@@ -99,9 +105,11 @@ void qmp_nbd_server_add(const char *device, bool has_writable, bool writable,
writable = false;
}
exp = nbd_export_new(bs, 0, -1, writable ? 0 : NBD_FLAG_READ_ONLY, NULL);
exp = nbd_export_new(bs, 0, -1, writable ? 0 : NBD_FLAG_READ_ONLY,
nbd_server_put_ref);
nbd_export_set_name(exp, device);
drive_get_ref(drive_get_by_blockdev(bs));
n = g_malloc0(sizeof(NBDCloseNotifier));
n->n.notify = nbd_close_notifier;
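The blockdev-nbd.c hunk above pins the backing drive for the lifetime of the NBD export: drive_get_ref() when the export is created, with nbd_server_put_ref registered as the export's close callback to drop it. Below is a generic sketch of that hold-and-release-via-callback pattern; Drive, Export and the helpers are invented for the example:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Drive {
        int refcount;
        const char *name;
    } Drive;

    static void drive_get_ref(Drive *d) { d->refcount++; }

    static void drive_put_ref(Drive *d)
    {
        if (--d->refcount == 0) {
            printf("drive %s released\n", d->name);
        }
    }

    /* The export keeps the drive alive and knows how to let go of it. */
    typedef struct Export {
        Drive *drive;
        void (*close_cb)(Drive *drive);
    } Export;

    static Export *export_new(Drive *d, void (*close_cb)(Drive *))
    {
        Export *exp = malloc(sizeof(*exp));
        exp->drive = d;
        exp->close_cb = close_cb;
        drive_get_ref(d);                 /* pin the drive while exported */
        return exp;
    }

    static void export_close(Export *exp)
    {
        exp->close_cb(exp->drive);        /* drop the pin via the callback */
        free(exp);
    }

    int main(void)
    {
        Drive d = { .refcount = 1, .name = "disk0" };
        Export *exp = export_new(&d, drive_put_ref);
        drive_put_ref(&d);                /* original owner goes away first */
        export_close(exp);                /* last reference released here */
        return 0;
    }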

blockdev.c: 2110 changed lines (file diff suppressed because it is too large).

Some files were not shown because too many files have changed in this diff.