Compare commits

..

55 Commits

Author SHA1 Message Date
Michael Roth
2001e197cf Update version for v2.2.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-10 12:29:35 -05:00
Kevin Wolf
c70221df1f vpc: Fix size in fixed image creation
If total_sectors is rounded to match the geometry, total_size needs to
be changed as well. Otherwise we end up with an image whose geometry
describes a disk larger than the image file, which doesn't end well.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit c7dd631d482912fd615a9ef18a0e0691e7a84836)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:14 -05:00
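
The fix boils down to recomputing the size from the rounded geometry; a
minimal sketch, assuming the CHS variable names used in vpc_create():

    /* total_sectors was rounded to whole cylinders; recompute total_size
     * from the same CHS values so the footer geometry and the image file
     * size agree (cyls/heads/secs_per_cyl are assumed names) */
    total_sectors = (int64_t)cyls * heads * secs_per_cyl;
    total_size = total_sectors * BDRV_SECTOR_SIZE;
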
Kevin Wolf
07db6859ab coroutine: Fix use after free with qemu_coroutine_yield()
Instead of using the same function for entering and exiting coroutines,
and hoping that it doesn't add any functionality that hurts with the
parameters used for exiting, we can just directly call into the real
task switch in qemu_coroutine_switch().

This fixes a use-after-free scenario where reentering a coroutine that
has yielded still accesses the old parent coroutine (which may have
meanwhile terminated) in the part of coroutine_swap() that follows
qemu_coroutine_switch().

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 80687b4dd6f43b3fef61fef8fbcb358457350562)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:14 -05:00
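
In other words, yield switches straight to its caller instead of going
through the shared enter/exit helper; a sketch of the idea, not the exact
qemu-coroutine.c code:

    void coroutine_fn qemu_coroutine_yield(void)
    {
        Coroutine *self = qemu_coroutine_self();
        Coroutine *to = self->caller;

        /* clear the link, then do the raw task switch; nothing runs after
         * the switch that could touch a parent which has since terminated */
        self->caller = NULL;
        qemu_coroutine_switch(self, to, COROUTINE_YIELD);
    }
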
Michael Roth
c4ca8af86d acpi: update generated hex files
The previous patch,
    pc: acpi: fix WindowsXP BSOD when memory hotplug is enabled
changed the DSDT; update the hex files for non-iasl builds accordingly.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:14 -05:00
Michael Roth
16765a55c1 acpi-test: update expected DSDT
The previous patch,
    pc: acpi: fix WindowsXP BSOD when memory hotplug is enabled
changed the DSDT; update the expected test files accordingly.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:14 -05:00
Igor Mammedov
dab0efc33f pc: acpi: fix WindowsXP BSOD when memory hotplug is enabled
The ACPI parser in Windows XP considers the PNP0A06 devices used for CPU
and memory hotplug to be duplicates. Adding a unique _UID to the CPU
hotplug device fixes the BSOD.

Cc: qemu-stable@nongnu.org
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6d4e4cb998)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:14 -05:00
Stefano Stabellini
6c699aa9f9 xen-hvm: increase maxmem before calling xc_domain_populate_physmap
Increase maxmem before calling xc_domain_populate_physmap_exact to
avoid the risk of running out of guest memory. This way we can also
avoid complex memory calculations in libxl at domain construction
time.

This patch fixes an abort() when assigning more than 4 NICs to a VM.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
(cherry picked from commit c1d322e604)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:13 -05:00
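
A sketch of the approach, with the size arithmetic simplified and max_kb,
extra_kb and pfn_list as illustrative placeholders:

    /* raise the domain's memory ceiling before asking Xen for more pages,
     * so the populate call cannot fail for lack of headroom */
    if (xc_domain_setmaxmem(xen_xc, xen_domid, max_kb + extra_kb) < 0) {
        return -1;
    }
    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn,
                                         0, 0, pfn_list) < 0) {
        return -1;
    }
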
Eduardo Habkost
a958b9be86 linux-user: Check for cpu_init() errors
This was the only caller of cpu_init() that was not checking for NULL
yet.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit 696da41b1b)

Conflicts:
	linux-user/main.c

*removed context dependency on ec53b45

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:13 -05:00
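
The added check follows the usual pattern; a sketch (variable names and the
exact error text may differ from linux-user/main.c):

    cpu = cpu_init(cpu_model);
    if (cpu == NULL) {
        fprintf(stderr, "Unable to find CPU definition\n");
        exit(EXIT_FAILURE);
    }
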
Jun Li
4ec1b9b159 qdev: Avoid type assertion in qdev_build_hotpluggable_device_list()
Currently, when *obj is not a TYPE_DEVICE, QEMU will abort. This patch
fixes it: when *obj is not a TYPE_DEVICE, simply do not add it to the
hotpluggable device list.

This patch also fixes the following issue:
1. boot QEMU using cli:
$ /opt/qemu-git-arm/bin/qemu-system-x86_64 -monitor stdio -enable-kvm \
-device virtio-scsi-pci,id=scsi0

2. device_del scsi0 via HMP using the Tab key (first type "device_del",
then press the "Tab" key).
(qemu) device_del

After step 2, QEMU will abort.
(qemu) device_del hw/core/qdev.c:930:qdev_build_hotpluggable_device_list:
Object 0x5555563a2460 is not an instance of type device

Signed-off-by: Jun Li <junmuzi@gmail.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 09d5601771)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-08 22:58:13 -05:00
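
The fix replaces the asserting DEVICE() cast with a checked cast and skips
non-devices; a sketch of the callback (the realized check is omitted and
the hotpluggable test is reduced to the QOM property lookup):

    static int qdev_build_hotpluggable_device_list(Object *obj, void *opaque)
    {
        GSList **list = opaque;
        DeviceState *dev = (DeviceState *)object_dynamic_cast(obj, TYPE_DEVICE);

        /* not a device (e.g. a bus): skip it instead of aborting */
        if (dev && object_property_get_bool(obj, "hotpluggable", NULL)) {
            *list = g_slist_append(*list, dev);
        }

        object_child_foreach(obj, qdev_build_hotpluggable_device_list, opaque);
        return 0;
    }
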
Paolo Bonzini
3e04f97cbc kvm/apic: fix 2.2->2.1 migration
The wait_for_sipi field is set back to 1 after an INIT, so it was not
effective to reset it in kvm_apic_realize.  Introduce a reset callback
and reset wait_for_sipi there.

Reported-by: Igor Mammedov <imammedo@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 575a6f4082)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-01 18:13:00 -06:00
Leon Alrae
00fd8904f6 target-mips: fix broken snapshotting
The recently added CP0.BadInstr and CP0.BadInstrP registers ended up at a
different offset in cpu_load() than in cpu_save(). These, and all registers
in between, were incorrectly restored.

Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
(cherry picked from commit b40a1530f2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-01 18:09:42 -06:00
Gerd Hoffmann
3d1cd5997d update ipxe from 69313ed to 35c5379
Anton D. Kachalov (1):
      [intel] Add 8086:1557 card (Intel 82599 10G ethernet mezz)

Christian Hesse (1):
      [build] Merge util/geniso and util/genliso

Curtis Larsen (3):
      [efi] Use EFI_CONSOLE_CONTROL_PROTOCOL to set text mode if available
      [efi] Report errors from attempting to disconnect existing drivers
      [efi] Try various possible SNP receive filters

Dale Hamel (1):
      [smbios] Expose board serial number as ${board-serial}

Florian Schmaus (1):
      [build] Set GITVERSION only if there is a git repository

Hannes Reinecke (3):
      [ethernet] Provide eth_random_addr() to generate random Ethernet addresses
      [igbvf] Assign random MAC address if none is set
      [igbvf] Allow changing of MAC address

Jan Kiszka (1):
      [intel] Add I217-LM PCI ID

Marin Hannache (4):
      [nfs] Fix an invalid free() when loading a symlink
      [nfs] Fix an invalid free() when loading a regular (non-symlink) file
      [nfs] Rewrite NFS URI handling
      [readline] Add CTRL-W shortcut to remove a word

Michael Brown (144):
      [profile] Allow interrupts to be excluded from profiling results
      [intel] Exclude time spent in hypervisor from profiling
      [build] Fix version.o dependency upon git index
      [tcp] Defer sending ACKs until all received packets have been processed
      [lkrnprefix] Function as a bzImage kernel
      [build] Avoid errors when build directory is mounted via NFS
      [undi] Apply quota only to number of complete received packets
      [lkrnprefix] Make real-mode setup code relocatable
      [intel] Increase receive ring fill level
      [syslog] Strip invalid characters from hostname
      [test] Add self-tests for strdup()
      [libc] Prevent strndup() from reading beyond the end of the string
      [efi] Allow for optional protocols
      [efi] Make EFI_DEVICE_PATH_TO_TEXT_PROTOCOL optional
      [efi] Make EFI_HII_DATABASE_PROTOCOL optional
      [efi] Do not try to fetch loaded image device path protocol
      [ipv6] Fix definition of IN6_IS_ADDR_LINKLOCAL()
      [dhcpv6] Do not set sin6_scope_id on the unspecified client socket address
      [ipv6] Do not set sin6_scope_id on source address
      [ipv6] Include network device when transcribing multicast addresses
      [ipv6] Avoid potentially copying from a NULL pointer in ipv6_tx()
      [librm] Allow for the PIC interrupt vector offset to be changed
      [ifmgmt] Do not sleep CPU while configuring network devices
      [scsi] Improve sense code parsing
      [iscsi] Read IPv4 settings only from the relevant network device
      [iscsi] Include IP address origin in iBFT
      [debug] Allow debug message colours to be customised via DBGCOL=...
      [build] Expose build timestamp, build name, and product names
      [efi] Allow device paths to be easily included in debug messages
      [efi] Provide a meaningful EFI SNP device name
      [efi] Restructure EFI driver model
      [build] Fix erroneous object name in version object
      [build] Add yet another potential location for isolinux.bin
      [efi] Allow network devices to be created on top of arbitrary SNP devices
      [autoboot] Allow autoboot device to be identified by link-layer address
      [efi] Identify autoboot device by MAC address when chainloading
      [efi] Attempt to start only drivers claiming support for a device
      [efi] Rewrite SNP NIC driver
      [efi] Include SNP NIC driver within the all-drivers target
      [crypto] Add support for iPAddress subject alternative names
      [crypto] Fix debug message
      [netdevice] Reset network device index when last device is unregistered
      [efi] Update EDK2 headers
      [efi] Install our own disk I/O protocol and claim exclusive use of it
      [efi] Allow for interception of boot services calls by loaded image
      [efi] Print well-known GUIDs by name in debug messages
      [efi] Include EFI_CONSOLE_CONTROL_PROTOCOL header
      [ioapi] Fail ioremap() when attempting to map a zero bus address
      [intel] Check for ioremap() failures
      [realtek] Check for ioremap() failures
      [vmxnet3] Check for ioremap() failures
      [skel] Check for ioremap() failures
      [myson] Check for ioremap() failures
      [natsemi] Check for ioremap() failures
      [i386] Add functions to read and write model-specific registers
      [x86_64] Add functions to read and write model-specific registers
      [efi] Show more diagnostic information when building with DEBUG=efi_wrap
      [ioapi] Centralise notion of PAGE_SIZE
      [lotest] Discard packets arriving on the incorrect network device
      [xen] Import selected public headers
      [xen] Add basic support for PV-HVM domains
      [xen] Add support for Xen netfront virtual NICs
      [efi] Default to releasing network devices for use via SNP
      [efi] Unload started images only on failure
      [efi] Fill in loaded image's DeviceHandle if firmware fails to do so
      [efi] Fix incorrect debug message level when device has no device path
      [efi] Report exact failure when unable to open the device path
      [netdevice] Avoid registering duplicate network devices
      [efi] Ignore failures when attempting to install SNP HII protocol
      [efi] Expand the range of well-known EFI GUIDs in debug messages
      [efi] Provide efi_handle_name() for debugging
      [efi] Add ability to dump all openers of a given protocol on a handle
      [efi] Use efi_handle_name() instead of efi_handle_devpath_text()
      [efi] Use efi_handle_name() instead of efi_devpath_text() where applicable
      [efi] Allow compiler to perform type checks on EFI_HANDLE
      [efi] Avoid unnecessarily passing pointers to EFI_HANDLEs
      [efi] Dump existing openers when we are unable to open a protocol
      [efi] Dump handle information around connect/disconnect attempts
      [efi] Improve debugging of the debugging facilities
      [efi] Add excessive sanity checks into efi_debug functions
      [efi] Also try original ComponentName protocol for retrieving driver names
      [efi] Print raw device path when we have no DevicePathToTextProtocol
      [efi] Add ability to dump SNP device mode information
      [efi] Reset multicast filter list when setting SNP receive filters
      [efi] Provide centralised definitions of commonly-used GUIDs
      [efi] Open device path protocol only at point of use
      [efi] Move abstract device path and handle functions to efi_utils.c
      [efi] Generalise snpnet_pci_info() to efi_locate_device()
      [bios] Support displaying and hiding cursor
      [efi] Support displaying and hiding cursor
      [readline] Ensure cursor is visible when prompting for input
      [xen] Accept alternative Xen platform PCI device ID 5853:0002
      [xen] Use version 1 grant tables by default
      [xen] Cope with unexpected initial backend states
      [smc9000] Avoid using CONFIG as a preprocessor macro
      [build] Allow for named configurations at build time
      [intel] Display PBS value when applying ICH errata workaround
      [intel] Display before and after values for both PBS and PBA
      [intel] Apply PBS/PBA errata workaround only to ICH8 PCI device IDs
      [efi] Add definitions of GUIDs observed during Windows boot
      [efi] Dump details of any calls to our dummy block and disk I/O protocols
      [romprefix] Do not preserve unused register %di
      [build] Remove obsolete references to .zrom build targets
      [build] Allow ISA ROMs to be built
      [build] Avoid deleting config header files if build is interrupted
      [prefix] Halt system without burning CPU if we cannot access the payload
      [prefix] Report both %esi and %ecx when opening payload fails
      [util] Use PCI length field to obtain length of individual images
      [mromprefix] Use PCI length field to obtain length of individual images
      [mromprefix] Allow for .mrom images larger than 128kB
      [efi] Show details of intercepted LoadImage() calls
      [efi] Make our virtual file system case insensitive
      [efi] Wrap any images loaded by our wrapped image
      [efi] Use the SNP protocol instance to match the SNP chainloading device
      [efi] Avoid returning uninitialised data from PCI configuration space reads
      [efi] Make EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL optional
      [efi] Allow for non-PCI snpnet devices
      [build] Clean up all binary directories on "make [very]clean"
      [efi] Add efifatbin utility
      [efi] Provide dummy device path in efi_image_probe()
      [dhcp] Check for matching chaddr in received DHCP packets
      [dhcp] Remove obsolete dhcp_chaddr() function
      [build] Use -malign-double to build 32-bit UEFI binaries
      [efi] Centralise definitions of more protocol GUIDs
      [efi] Add definitions of GUIDs observed when chainloading from Intel driver
      [efi] Free transmit ring entry before calling netdev_tx_complete()
      [efi] Generalise snpnet_dev_info() to efi_device_info()
      [efi] Update to current EDK2 headers
      [efi] Add NII / UNDI driver
      [efi] Check for presence of UNDI in NII protocol
      [efi] Include NII driver within "snp" and "snponly" build targets
      [ping] Report timed-out pings via the callback function
      [ping] Allow termination after a specified number of packets
      [ping] Allow "ping" command output to be inhibited
      [intel] Use autoloaded MAC address instead of EEPROM MAC address
      [crypto] Fix parsing of OCSP responder ID key hash
      [vmxnet3] Add profiling code to exclude time spent in the hypervisor
      [netdevice] Fix erroneous use of free(iobuf) instead of free_iob(iobuf)
      [libc] Add ASSERTED macro to test if any assertion has triggered
      [list] Add sanity checks after list-adding functions
      [malloc] Tidy up debug output
      [malloc] Sanity check parameters to alloc_memblock() and free_memblock()
      [malloc] Check integrity of free list
      [malloc] Report caller address as soon as memory corruption is detected

Peter Lemenkov (1):
      [build] Check if git index actually exists

Robin Smidsrød (2):
      [build] Add named configuration for VirtualBox
      [build] Avoid using embedded script in VirtualBox named configuration

Sven Ulland (1):
      [lacp] Set "aggregatable" flag in response LACPDU

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit c246cee4ee)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-03-01 18:06:31 -06:00
Paolo Bonzini
a97f9a7ec7 exec: change default exception_index value for migration to -1
In QEMU 2.2 the exception_index value was added to the migration stream
through a subsection.  The default was set to 0, which is wrong and
should have been -1.

However, 2.2 does not have commit e511b4d (cpu-exec: reset exception_index
correctly, 2014-11-26), hence in 2.2 the exception_index is never used
and is set to -1 on the next call to cpu_exec.  So we can change the
migration stream to make the default -1.  The effects are:

- 2.2.1 -> 2.2.0: cpu->exception_index set incorrectly to 0 if it
were -1 on the source; then reset to -1 in cpu_exec.  This is TCG
only; KVM does not use exception_index.

- 2.2.0 -> 2.2.1: cpu->exception_index set incorrectly to -1 if it
were 0 on the source; but it would be reset to -1 in cpu_exec anyway.
This is TCG only; KVM does not use exception_index.

- 2.2.1 -> 2.1: two bugs fixed: 1) can migrate backwards if
cpu->exception_index is set to -1; 2) should not migrate backwards
(but 2.2.0 allows it) if cpu->exception_index is set to 0

- 2.2.0 -> 2.3.0: 2.2.0 will send the subsection unnecessarily if
exception_index is -1, but that is not a problem.  2.3.0 will set
cpu->exception_index to -1 if it is 0 on the source, but this would
be anyway a problem for 2.2.0 -> 2.2.x migration (due to lack of
commit e511b4d in 2.2.x) so we can ignore it

- 2.2.1 -> 2.3.0: everything works.

In addition, play it safe and never send the subsection unless TCG
is in use.  KVM does not use exception_index (PPC KVM stores values
in it for use in the subsequent call to ppc_cpu_do_interrupt, but
does not need it as soon as kvm_handle_debug returns).  Xen and
qtest do not run any code for the CPU at all.

Reported-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1418989994-17244-3-git-send-email-pbonzini@redhat.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit adee64249e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-24 15:17:35 -06:00
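
The "never send the subsection unless TCG is in use" part corresponds to a
.needed gate on the subsection; a minimal sketch following the vmstate
subsection pattern:

    static bool exception_index_needed(void *opaque)
    {
        CPUState *cpu = opaque;

        /* only worth sending when TCG is in use and the value differs
         * from the new default of -1 */
        return tcg_enabled() && cpu->exception_index != -1;
    }
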
Fam Zheng
987aba53db qtest: Fix deadloop by running main loop AIO context's timers
qemu_clock_run_timers() only takes care of main_loop_tlg; we shouldn't
forget the AIO timer list groups.

Currently, qemu_clock_deadline_ns_all() (a few lines above) counts all
the timer list groups of this clock type, including the AIO ones, but we
never fire those timers, so they are never cleared, which creates an
endless loop.

For example, this function hangs when trying to drive a throttled block
request queue with the qtest clock_step command.

Signed-off-by: Fam Zheng <famz@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1421661103-29153-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit efef88b3d9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-24 15:05:39 -06:00
Peter Wu
7d389a2138 block/iscsi: fix uninitialized variable
'ret' was never initialized in the success path.

Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit debfb917a4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-24 15:02:12 -06:00
Zhang Haoyu
2a020d29df fix mc146818rtc wrong subsection name to avoid vmstate_subsection_load() fail
Fix the wrong subsection name in mc146818rtc to avoid a
vmstate_subsection_load() failure during incoming migration or loadvm.

Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com.cn>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit bb42631190)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-23 18:04:34 -06:00
Daniel P. Berrange
6833856e86 libcacard: stop linking against every single 3rd party library
Building QEMU results in a libcacard.so that links against
practically the entire world

	linux-vdso.so.1 =>  (0x00007fff71e99000)
	libssl3.so => /usr/lib64/libssl3.so (0x00007f49f94b6000)
	libsmime3.so => /usr/lib64/libsmime3.so (0x00007f49f928e000)
	libnss3.so => /usr/lib64/libnss3.so (0x00007f49f8f67000)
	libnssutil3.so => /usr/lib64/libnssutil3.so (0x00007f49f8d3b000)
	libplds4.so => /usr/lib64/libplds4.so (0x00007f49f8b36000)
	libplc4.so => /usr/lib64/libplc4.so (0x00007f49f8931000)
	libnspr4.so => /usr/lib64/libnspr4.so (0x00007f49f86f2000)
	libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007f49f84ed000)
	libm.so.6 => /usr/lib64/libm.so.6 (0x00007f49f81e5000)
	libgthread-2.0.so.0 => /usr/lib64/libgthread-2.0.so.0 (0x00007f49f7fe3000)
	librt.so.1 => /usr/lib64/librt.so.1 (0x00007f49f7dda000)
	libz.so.1 => /usr/lib64/libz.so.1 (0x00007f49f7bc4000)
	libcap-ng.so.0 => /usr/lib64/libcap-ng.so.0 (0x00007f49f79be000)
	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007f49f77b8000)
	libgnutls.so.28 => /usr/lib64/libgnutls.so.28 (0x00007f49f749a000)
	libSDL-1.2.so.0 => /usr/lib64/libSDL-1.2.so.0 (0x00007f49f71fd000)
	libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f49f6fe0000)
	libvte.so.9 => /usr/lib64/libvte.so.9 (0x00007f49f6d3f000)
	libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007f49f6b2d000)
	libgtk-x11-2.0.so.0 => /usr/lib64/libgtk-x11-2.0.so.0 (0x00007f49f64a0000)
	libgdk-x11-2.0.so.0 => /usr/lib64/libgdk-x11-2.0.so.0 (0x00007f49f61de000)
	libpangocairo-1.0.so.0 => /usr/lib64/libpangocairo-1.0.so.0 (0x00007f49f5fd1000)
	libatk-1.0.so.0 => /usr/lib64/libatk-1.0.so.0 (0x00007f49f5daa000)
	libcairo.so.2 => /usr/lib64/libcairo.so.2 (0x00007f49f5a9d000)
	libgdk_pixbuf-2.0.so.0 => /usr/lib64/libgdk_pixbuf-2.0.so.0 (0x00007f49f5878000)
	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007f49f5500000)
	libpangoft2-1.0.so.0 => /usr/lib64/libpangoft2-1.0.so.0 (0x00007f49f52eb000)
	libpango-1.0.so.0 => /usr/lib64/libpango-1.0.so.0 (0x00007f49f50a0000)
	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007f49f4e4e000)
	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007f49f4b15000)
	libfontconfig.so.1 => /usr/lib64/libfontconfig.so.1 (0x00007f49f48d6000)
	libfreetype.so.6 => /usr/lib64/libfreetype.so.6 (0x00007f49f462b000)
	libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007f49f42e8000)
	libxenstore.so.3.0 => /usr/lib64/libxenstore.so.3.0 (0x00007f49f40de000)
	libxenctrl.so.4.4 => /usr/lib64/libxenctrl.so.4.4 (0x00007f49f3eb6000)
	libxenguest.so.4.4 => /usr/lib64/libxenguest.so.4.4 (0x00007f49f3c8b000)
	libseccomp.so.2 => /usr/lib64/libseccomp.so.2 (0x00007f49f3a74000)
	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007f49f385d000)
	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007f49f364a000)
	libutil.so.1 => /usr/lib64/libutil.so.1 (0x00007f49f3447000)
	libc.so.6 => /usr/lib64/libc.so.6 (0x00007f49f3089000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f49f9902000)
	libp11-kit.so.0 => /usr/lib64/libp11-kit.so.0 (0x00007f49f2e23000)
	libtspi.so.1 => /usr/lib64/libtspi.so.1 (0x00007f49f2bb2000)
	libtasn1.so.6 => /usr/lib64/libtasn1.so.6 (0x00007f49f299f000)
	libnettle.so.4 => /usr/lib64/libnettle.so.4 (0x00007f49f276d000)
	libhogweed.so.2 => /usr/lib64/libhogweed.so.2 (0x00007f49f2545000)
	libgmp.so.10 => /usr/lib64/libgmp.so.10 (0x00007f49f22cd000)
	libncurses.so.5 => /usr/lib64/libncurses.so.5 (0x00007f49f20a5000)
	libtinfo.so.5 => /usr/lib64/libtinfo.so.5 (0x00007f49f1e7a000)
	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007f49f1c76000)
	libXfixes.so.3 => /usr/lib64/libXfixes.so.3 (0x00007f49f1a6f000)
	libXrender.so.1 => /usr/lib64/libXrender.so.1 (0x00007f49f1865000)
	libXinerama.so.1 => /usr/lib64/libXinerama.so.1 (0x00007f49f1662000)
	libXi.so.6 => /usr/lib64/libXi.so.6 (0x00007f49f1452000)
	libXrandr.so.2 => /usr/lib64/libXrandr.so.2 (0x00007f49f1247000)
	libXcursor.so.1 => /usr/lib64/libXcursor.so.1 (0x00007f49f103c000)
	libXcomposite.so.1 => /usr/lib64/libXcomposite.so.1 (0x00007f49f0e39000)
	libXdamage.so.1 => /usr/lib64/libXdamage.so.1 (0x00007f49f0c35000)
	libharfbuzz.so.0 => /usr/lib64/libharfbuzz.so.0 (0x00007f49f09dd000)
	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007f49f072f000)
	libEGL.so.1 => /usr/lib64/libEGL.so.1 (0x00007f49f0505000)
	libpng16.so.16 => /usr/lib64/libpng16.so.16 (0x00007f49f02d2000)
	libxcb-shm.so.0 => /usr/lib64/libxcb-shm.so.0 (0x00007f49f00cd000)
	libxcb-render.so.0 => /usr/lib64/libxcb-render.so.0 (0x00007f49efec3000)
	libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007f49efca1000)
	libGL.so.1 => /usr/lib64/libGL.so.1 (0x00007f49efa06000)
	libffi.so.6 => /usr/lib64/libffi.so.6 (0x00007f49ef7fe000)
	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007f49ef5d8000)
	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007f49ef3be000)
	libexpat.so.1 => /usr/lib64/libexpat.so.1 (0x00007f49ef193000)
	libbz2.so.1 => /usr/lib64/libbz2.so.1 (0x00007f49eef83000)
	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f49eed6c000)
	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007f49eeb46000)
	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007f49ee8e2000)
	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007f49ee6c4000)
	libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x00007f49ee2d6000)
	libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007f49ee067000)
	libgraphite2.so.3 => /usr/lib64/libgraphite2.so.3 (0x00007f49ede48000)
	libX11-xcb.so.1 => /usr/lib64/libX11-xcb.so.1 (0x00007f49edc46000)
	libxcb-dri2.so.0 => /usr/lib64/libxcb-dri2.so.0 (0x00007f49eda41000)
	libxcb-xfixes.so.0 => /usr/lib64/libxcb-xfixes.so.0 (0x00007f49ed838000)
	libxcb-shape.so.0 => /usr/lib64/libxcb-shape.so.0 (0x00007f49ed634000)
	libgbm.so.1 => /usr/lib64/libgbm.so.1 (0x00007f49ed426000)
	libwayland-client.so.0 => /usr/lib64/libwayland-client.so.0 (0x00007f49ed217000)
	libwayland-server.so.0 => /usr/lib64/libwayland-server.so.0 (0x00007f49ed005000)
	libglapi.so.0 => /usr/lib64/libglapi.so.0 (0x00007f49ecddb000)
	libdrm.so.2 => /usr/lib64/libdrm.so.2 (0x00007f49ecbce000)
	libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007f49ec9ca000)
	libxcb-glx.so.0 => /usr/lib64/libxcb-glx.so.0 (0x00007f49ec7b0000)
	libxcb-dri3.so.0 => /usr/lib64/libxcb-dri3.so.0 (0x00007f49ec5ad000)
	libxcb-present.so.0 => /usr/lib64/libxcb-present.so.0 (0x00007f49ec3aa000)
	libxcb-randr.so.0 => /usr/lib64/libxcb-randr.so.0 (0x00007f49ec19b000)
	libxcb-sync.so.1 => /usr/lib64/libxcb-sync.so.1 (0x00007f49ebf94000)
	libxshmfence.so.1 => /usr/lib64/libxshmfence.so.1 (0x00007f49ebd91000)
	libXxf86vm.so.1 => /usr/lib64/libXxf86vm.so.1 (0x00007f49ebb8a000)
	libpcre.so.1 => /usr/lib64/libpcre.so.1 (0x00007f49eb91d000)
	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007f49eb6cf000)
	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007f49eb3ec000)
	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007f49eb1e8000)
	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007f49eafb4000)
	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007f49eada5000)
	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007f49eaba0000)

All libcacard actually needs is the NSS libs. Linking against the entire
world is a regression caused by

  commit 9d171bd937
  Author: Michael Tokarev <mjt@tls.msk.ru>
  Date:   Thu May 8 16:48:27 2014 +0400

    libcacard: remove libcacard-specific CFLAGS and LIBS from global vars

That commit removed the setting of the LIBS variable in libcacard/Makefile.

Adding it back as an empty assignment brings the linked libs back to a more
reasonable set

	linux-vdso.so.1 =>  (0x00007fff575c1000)
	libssl3.so => /usr/lib64/libssl3.so (0x00007f7f753b1000)
	libsmime3.so => /usr/lib64/libsmime3.so (0x00007f7f75189000)
	libnss3.so => /usr/lib64/libnss3.so (0x00007f7f74e62000)
	libnssutil3.so => /usr/lib64/libnssutil3.so (0x00007f7f74c36000)
	libplds4.so => /usr/lib64/libplds4.so (0x00007f7f74a31000)
	libplc4.so => /usr/lib64/libplc4.so (0x00007f7f7482c000)
	libnspr4.so => /usr/lib64/libnspr4.so (0x00007f7f745ed000)
	libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f7f743d0000)
	libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007f7f741cc000)
	libgthread-2.0.so.0 => /usr/lib64/libgthread-2.0.so.0 (0x00007f7f73fca000)
	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007f7f73c90000)
	libc.so.6 => /usr/lib64/libc.so.6 (0x00007f7f738d3000)
	libz.so.1 => /usr/lib64/libz.so.1 (0x00007f7f736bd000)
	librt.so.1 => /usr/lib64/librt.so.1 (0x00007f7f734b4000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f7f757fd000)

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Cc: <qemu-stable@nongnu.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit b41112c46b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-23 18:04:34 -06:00
Paolo Bonzini
a9eb2b6053 qemu-thread: fix qemu_event without futexes
This had a possible deadlock that was visible with rcutorture.

    qemu_event_set                    qemu_event_wait
    ----------------------------------------------------------------
                                      cmpxchg reads FREE, writes BUSY
                                      futex_wait: pthread_mutex_lock
                                      futex_wait: value == BUSY
    xchg reads BUSY, writes SET
    futex_wake: pthread_cond_broadcast
                                      futex_wait: pthread_cond_wait
                                      <deadlock>

The fix is simply to avoid condvar tricks and do the obvious locking
around pthread_cond_broadcast:

    qemu_event_set        qemu_event_wait
    ----------------------------------------------------------------
                                      cmpxchg reads FREE, writes BUSY
                                      futex_wait: pthread_mutex_lock
                                      futex_wait: value == BUSY
    xchg reads BUSY, writes SET
    futex_wake: pthread_mutex_lock
    (blocks)
                                      futex_wait: pthread_cond_wait
    (mutex unlocked)
    futex_wake: pthread_cond_broadcast
    futex_wake: pthread_mutex_unlock
                                      futex_wait: pthread_mutex_unlock

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 158ef8cbb7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-23 18:04:34 -06:00
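
The fix is the classic mutex-around-broadcast pattern in the futex
emulation path; a sketch with illustrative struct field names:

    static void qemu_futex_wake(QemuEvent *ev)
    {
        /* holding the mutex here means a waiter that already saw BUSY is
         * either still holding the mutex (not yet in pthread_cond_wait) or
         * fully parked -- the broadcast can no longer be lost */
        pthread_mutex_lock(&ev->lock);
        pthread_cond_broadcast(&ev->cond);
        pthread_mutex_unlock(&ev->lock);
    }
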
Alex Williamson
4d49de6b6f vfio-pci: Fix missing unparent of dynamically allocated MemoryRegion
Commit d8d9581460 added explicit object_unparent() calls for
dynamically allocated MemoryRegions.  The VFIOMSIXInfo structure also
contains such a MemoryRegion, covering the mmap'd region of a PCI BAR
above the MSI-X table.  This structure is freed as part of the class
exit function and therefore also needs an explicit object_unparent().
Failing to do this results in random segfaults due to fields within
the structure, often the class pointer, being reclaimed and corrupted
by the time object_finalize_child_property() is called for the object.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org # 2.2
(cherry picked from commit 3a4dbe6aa9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-23 18:04:34 -06:00
Peter Maydell
3750d2588e target-arm/translate-a64: Fix wrong mmu_idx usage for LDT/STT
The LDT/STT (load/store unprivileged) instruction decode was using
the wrong MMU index value. This meant that instead of these insns
being "always access as if user-mode regardless of current privilege"
they were "always access as if kernel-mode regardless of current
privilege". This went unnoticed because AArch64 Linux doesn't use
these instructions.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
---
I'm not counting this as a security issue because I'm assuming
nobody treats TCG guests as a security boundary (certainly I
would not recommend doing so...)

(cherry picked from commit 949013ce11)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:38:35 -06:00
Dinar Valeev
4ac8b01fa8 hw/input/hid.c Fix capslock hid code
Whenever a USB keyboard is used, e.g. with '-usbdevice keyboard', pressing
the caps lock key sends HID code 0x32, which is treated as backslash.
Instead it should send code 0x39. This affects sending uppercase keys,
as they are typed with caps lock active.

While on x86 this can be worked around by using the PS/2 protocol, on
Power it is crucial, as we don't have anything other than USB.

This fixes guest automation tests over VNC.

Signed-off-by: Dinar Valeev <dvaleev@suse.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 0ee4de5840)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:36:50 -06:00
Paolo Bonzini
e60fb7af55 sb16: fix interrupt acknowledgement
SoundBlaster 16 emulation is very broken and consumes a lot of CPU, but a
small fix was suggested offlist and it is enough to fix some games.  I
got Epic Pinball to work with the "SoundBlaster Clone" option.

The processing of the interrupt register is wrong due to two missing
"not"s.  This causes the interrupt flag to remain set even after the
Acknowledge ports have been read (0x0e and 0x0f).

The line was introduced by commit 85571bc (audio merge (malc), 2004-11-07),
but the code might have been broken before because I did not look closely
at the huge patches from 10 years ago.

Reported-by: Joshua Bair <j_bair@bellsouth.net>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 9939375c28)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:36:08 -06:00
Cornelia Huck
451b9e2d4c virtio: fix feature bit checks
Several places check against the feature bit number instead of against
the feature bit. Fix them.

Cc: qemu-stable@nongnu.org
Reported-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 91d5c57a2e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:34:26 -06:00
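
The mistake is easy to make because the feature defines are bit numbers,
not masks; an illustrative fragment (VIRTIO_F_EXAMPLE is a stand-in, not a
real define):

    /* wrong: VIRTIO_F_EXAMPLE is a bit number, so this tests the wrong bits */
    if (features & VIRTIO_F_EXAMPLE) {
        /* ... */
    }

    /* right: turn the bit number into a mask first */
    if (features & (1u << VIRTIO_F_EXAMPLE)) {
        /* ... */
    }
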
Paolo Bonzini
0d093159b4 vt82c686: avoid out-of-bounds read
superio_ioport_readb can read the 256th element of the array.
Coverity reports an out-of-bounds write in superio_ioport_writeb,
but it does not show the corresponding out-of-bounds read
because it cannot prove that it can happen.  Fix the root
cause of the problem (zhanghailang's patch instead fixes
the logic in superio_ioport_writeb).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 9feb8adeaa)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:16:04 -06:00
Paolo Bonzini
8d1fdb16cd target-i386: fix movntsd on big-endian hosts
This was accessing an XMM register's low half without going through XMM_Q.

Cc: qemu-stable@nongnu.org
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 07958082fd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:15:01 -06:00
Paolo Bonzini
b0a231a9a9 scsi: fix cancellation when I/O was completed but DMA was not.
Commit d577646 (scsi: Introduce scsi_req_cancel_complete, 2014-09-25)
was supposed to have no semantic change, but it missed a case.  When
r->aiocb has already been NULLed, but DMA was not complete and the
SCSI layer was waiting for scsi_req_continue, after the patch the
SCSI layer will not call the .cancel callback of SCSIBusInfo.

Fixes: d5776465ee
Cc: qemu-stable@nongnu.org
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 488eef2f1d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:12:51 -06:00
Peter Maydell
09e2753be0 linux-user: Fix broken m68k signal handling on 64 bit hosts
The m68k signal frame setup code which writes the signal return
trampoline code to the stack was assuming that a 'long' was 32 bits;
on 64 bit systems this meant we would end up writing the 32 bit
(2 insn) trampoline sequence to retaddr+4,retaddr+6 instead of
the intended retaddr+0,retaddr+2, resulting in a guest crash when
it tried to execute the invalid zero-bytes at retaddr+0.
Fix by using uint32_t instead; also use uint16_t rather than short
for consistency. This fixes bug LP:1404690.

Reported-by: Michel Boaventura
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
(cherry picked from commit 1669add752)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:09:36 -06:00
Paolo Bonzini
49725cdf04 pckbd: set bits 2-3-6-7 of the output port by default
OSes typically write 0xdd/0xdf to turn the A20 line off and on.  This
has bits 2-3-6-7 on, so that the output port subsection is migrated.
Change the reset value and migration default to include those four
bits, thus avoiding that the subsection is migrated.

This strictly speaking changes guest ABI, but the long time during which
we have not migrated the value means that the guests really do not care
much; so the change is for all machine types.

Reported-by: Igor Mammedov <imammedo@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d13c040409)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:06:17 -06:00
Paolo Bonzini
fdb2ed44f1 serial: refine serial_thr_ipending_needed
If the THR interrupt is disabled, there is no need to migrate thr_ipending
because LSR.THRE will be sampled again when the interrupt is enabled.
(This behavior is not documented in the datasheet, but Windows relies
on it!)

Note that in this case IIR will never be 0x2 so, if thr_ipending were
to be one, QEMU would produce the subsection.

Reported-by: Igor Mammedov <imammedo@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit bfa7362889)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:06:01 -06:00
Paolo Bonzini
e54bcad901 serial: reset thri_pending on IER writes with THRI=0
This is responsible for failure of migration from 2.2 to 2.1, because
thr_ipending is always one in practice.

serial.c is setting thr_ipending unconditionally.  However, thr_ipending
is not used at all if THRI=0, and it will be overwritten again the next
time THRE or THRI changes.  For that reason, we can set thr_ipending to
zero every time THRI is reset.

There is disagreement on whether LSR.THRE should be resampled when IER.THRI
goes from 0 to 1.  This patch does not touch the code, leaving that for
QEMU 2.3+.

This has no semantic change and is enough to fix migration in the common
case where the interrupt is not pending or is reported in IIR.  It does not
change the migration format, so 2.2.0 -> 2.1 will remain broken but we
can fix 2.2.1 -> 2.1 without breaking 2.2.1 <-> 2.2.0.

The case that remains broken (the one in which the subsection is strictly
necessary) is when THRE=1, the THRI interrupt has *not* been acknowledged
yet, and a higher-priority interrupt comes.  In this case, you need the
subsection to tell the source that the lower-priority THRI interrupt is
pending.  The subsection's breakage of migration, in this case, prevents
continuing the VM on the destination with an invalid state.

Cc: qemu-stable@nongnu.org
Reported-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 4e02b0fcf5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 18:03:34 -06:00
Marcel Apfelbaum
e1ce0c3cb7 vl.c: fix regression when reading machine type from config file
After the 'Machine as QOM' series, the machine type input triggers
the creation of the machine class.
If the machine type is set in the configuration file, the machine
class is not updated accordingly and remains the default.

Fixed that by querying the machine options after the configuration
file is loaded.

Cc: qemu-stable@nongnu.org
Reported-by: William Dauchy <william@gandi.net>
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 364c3e6b8d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:28:09 -06:00
David Gibson
cb3360dbdd PPC: Fix crash on spapr_tce_table_finalize()
spapr_tce_table_finalize() can SEGV if the object was not previously
realized.  In particular this can be triggered by running
         qemu-system-ppc -device spapr-tce-table,?

The basic problem is that we have mismatched initialization versus
finalization: spapr_tce_table_finalize() is attempting to undo things that
are done in spapr_tce_table_realize(), not an instance_init function.

Therefore, replace spapr_tce_table_finalize() with
spapr_tce_table_unrealize().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-stable@nongnu.org
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 5f9490de56)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:28:01 -06:00
Paolo Bonzini
f738adeb5e atomic: fix position of volatile qualifier
What needs to be volatile is not the pointer, but the pointed-to
value!

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2cbcfb281a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:56 -06:00
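
In plain C declarator terms:

    int *volatile p;   /* the pointer variable itself is volatile */
    volatile int *q;   /* the pointed-to int is volatile -- this is what the
                        * atomic load/store helpers actually need */
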
Vladimir Sementsov-Ogievskiy
83dbd88b5c migration/block: fix pending() return value
Because of a wrong return value of .save_live_pending() in
migration/block.c, migration finishes before the whole disk is
transferred. Such a situation occurs when the migration process is fast
enough, for example when source and dest are on the same host.

If in the bulk phase we return something < max_size, we will skip
transferring the tail of the device. Currently we have "set pending to
BLOCK_SIZE if it is zero" for the bulk phase, but there is no guarantee
that it will be < max_size.

The true approach is to return, for example, max_size+1 while we are in
the bulk phase.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@parallels.com>
Message-id: 1419933856-4018-2-git-send-email-vsementsov@parallels.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 04636dc410)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:50 -06:00
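
A sketch of that approach (not the exact migration/block.c code, assuming
the bulk_completed flag kept in block_mig_state):

    /* while the bulk phase is still running, report strictly more than
     * max_size so the migration core cannot decide we are finished */
    if (!block_mig_state.bulk_completed) {
        pending = MAX(pending, max_size + 1);
    }
    return pending;
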
Max Filippov
718ab31016 target-xtensa: test cross-page opcode
Alter cross-page TB test to also test cross-page opcode.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit 85d36377e4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:43 -06:00
Max Filippov
27ad3df73e target-xtensa: fix translation for opcodes crossing page boundary
If a TB ends with an opcode that crosses a page boundary and the following
page is not executable, then EPC1 for the code fetch exception wrongly
points at the beginning of the TB. Always treat an instruction that crosses
a page boundary as a separate TB.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit 01673a3401)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:36 -06:00
Peter Maydell
6569578197 audio: Don't free hw resources until after hw backend is stopped
When stopping an audio voice, call the audio backend's fini
method before calling audio_pcm_hw_free_resources_ rather than
afterwards. This allows backends which use helper threads (like
pulseaudio) to terminate those threads before the conv_buf or
mix_buf are freed and avoids race conditions where the helper
may access a NULL pointer or freed memory.

Cc: qemu-stable@nongnu.org
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1418406239-9838-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit b28fb27b5e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:26 -06:00
Paolo Bonzini
51d703ff2e linuxboot: fix loading old kernels
Old kernels that used high memory only allowed the initrd to be in the
first 896MB of memory.  If you load the initrd above, they complain
that "initrd extends beyond end of memory".

In order to fix this, while not breaking machines with small amounts
of memory fixed by cdebec5 (linuxboot: compute initrd loading address,
2014-10-06), we need to distinguish two cases.  If pc.c placed the
initrd at end of memory, use the new algorithm based on the e801
memory map.  If instead pc.c placed the initrd at the maximum address
specified by the bzImage, leave it there.

The only interesting part is that the low-memory info block is now
loaded very early, in real mode, and thus the 32-bit address has
to be converted into a real mode segment.  The initrd address is
also patched in the info block before entering real mode, it is
simpler that way.

This fixes booting the RHEL4.8 32-bit installation image with 1GB
of RAM.

Cc: qemu-stable@nongnu.org
Cc: mst@redhat.com
Cc: jsnow@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 269e235849)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:19 -06:00
Kevin Wolf
ebd2bd2227 block: Don't probe for unknown backing file format
If a qcow2 image specifies a backing file format that doesn't correspond
to any format driver that qemu knows, we shouldn't fall back to probing,
but simply error out.

Not looking up the backing file driver in bdrv_open_backing_file(), but
just filling in the "driver" option if it isn't there moves us closer to
the goal of having everything in QDict options and gets us the error
handling of bdrv_open(), which correctly refuses unknown drivers.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416935562-7760-4-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c5f6e493bb)

Conflicts:
	tests/qemu-iotests/group

*resolved context conflict due to group 113 being present locally

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:27:06 -06:00
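
The "just fill in the driver option" part comes down to something like this
sketch (variable names follow bdrv_open_backing_file()):

    /* if the image records a backing format, pass it down as the "driver"
     * option; bdrv_open() then rejects an unknown driver instead of
     * silently falling back to probing */
    if (!qdict_haskey(options, "driver") && bs->backing_format[0] != '\0') {
        qdict_put(options, "driver", qstring_from_str(bs->backing_format));
    }
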
Kevin Wolf
9f8da0319d qcow2.py: Add required padding for header extensions
The qcow2 specification requires that the header extension data be
padded to round up the extension size to the next multiple of 8 bytes.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416935562-7760-3-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 8884dd1bbc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:24:35 -06:00
Kevin Wolf
63a3acd24a qcow2: Fix header extension size check
After reading the extension header, offset is incremented, but not
checked against end_offset any more. This way an integer overflow could
happen when checking whether the extension end is within the allowed
range, effectively disabling the check.

This patch adds the missing check and a test case for it.

Cc: qemu-stable@nongnu.org
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416935562-7760-2-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2ebafc854d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:10:11 -06:00
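
The missing check amounts to bounding ext.len against the remaining header
space before advancing; a sketch with assumed variable names and error
wording:

    /* offset has already been advanced past the extension header; make
     * sure the payload still fits before end_offset, so offset + ext.len
     * cannot overflow past the allowed range */
    if (ext.len > end_offset - offset) {
        error_setg(errp, "Header extension too large");
        return -EINVAL;
    }
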
Gary R Hook
9fc6075d28 block migration: fix return value
Modify block_save_iterate() to return positive/zero/negative
(success/not done/failure) return status. The computation of
the blocks transferred (an int64_t) exceeds the size of an
int return value.

Signed-off-by: Gary R Hook <gary.hook@nimboxx.com>
Reviewed-by: ChenLiang <chenliang88@huawei.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1416958202-15913-1-git-send-email-gary.hook@nimboxx.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ebd9fbd7e1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:10:06 -06:00
Max Reitz
6950b92765 block/raw-posix: Fix ret in raw_open_common()
The return value must be negative on error; there is one place in
raw_open_common() where errp is set, but ret remains 0. Fix it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 01212d4ed6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:10:01 -06:00
Max Reitz
9b3f3d6da9 qcow2: Respect bdrv_truncate() error
bdrv_truncate() may fail and qcow2_write_compressed() should return the
error code in that case.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6a69b9620a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:09:55 -06:00
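
The change is simply to propagate the return value; a sketch (variable
names assumed):

    ret = bdrv_truncate(bs->file, cluster_offset);
    if (ret < 0) {
        return ret;
    }
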
Max Reitz
6f45cda114 qcow2: Flushing the caches in qcow2_close may fail
qcow2_cache_flush() may fail; if one of the caches fails to be flushed
successfully to disk in qcow2_close(), the image should not be marked
clean, and we should emit a warning.

This breaks the (qcow2-specific) iotests 026, 071 and 089; change their
output accordingly.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 3b5e14c76a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:09:50 -06:00
Max Reitz
1e85e69fd6 qcow2: Prevent numerical overflow
In qcow2_alloc_cluster_offset(), *num is limited to
INT_MAX >> BDRV_SECTOR_BITS by all callers. However, since remaining is
of type uint64_t, we might as well cast *num to that type before
performing the shift.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 11c89769dc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:09:44 -06:00
Max Reitz
ff15187eca iotests: Add test for unsupported image creation
Add a test for creating and amending images (amendment uses the creation
options) with formats not supporting creation over protocols not
supporting creation.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2247798d13)

Conflicts:
	tests/qemu-iotests/group

*removed context dependency on iotest group 114

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:08:24 -06:00
Max Reitz
0a0a984352 iotests: Only kill NBD server if it runs
There may be NBD tests which do not create a sample image and simply
test whether wrong usage of the protocol is rejected as expected. In
this case, there will be no NBD server and trying to kill it during
clean-up will fail.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit f798068c56)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:04:13 -06:00
Max Reitz
b15bfd0558 qemu-img: Check create_opts before image amendment
The image options which can be amended are described by the .create_opts
field for every driver. This field must therefore be non-NULL so that
anything can be amended in the first place. Check that this holds true
before going into qemu_opts_create() (because if .create_opts is NULL,
the create_opts pointer in img_amend() will be NULL after
qemu_opts_append()).

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b2439d26f0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:58 -06:00
Max Reitz
10be14ee7d qemu-img: Check create_opts before image creation
If a driver supports image creation, it needs to set the .create_opts
field. We can use that to make sure .create_opts for both drivers
involved is not NULL for the target image in qemu-img convert, which is
important so that the create_opts pointer in img_convert() is not NULL
after the qemu_opts_append() calls and when going into
qemu_opts_create().

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit f75613cf24)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:51 -06:00
Max Reitz
6065d5484a block: Check create_opts before image creation
If a driver supports image creation, it needs to set the .create_opts
field. We can use that to make sure .create_opts for both drivers
involved is not NULL in bdrv_img_create(), which is important so that
the create_opts pointer in that function is not NULL after the
qemu_opts_append() calls and when going into qemu_opts_create().

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c614972408)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:44 -06:00
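
The three create_opts guards above (qemu-img amend, qemu-img convert and
bdrv_img_create) come down to the same check; a sketch with an illustrative
error message:

    if (!drv->create_opts) {
        error_report("Format driver '%s' does not support image creation",
                     drv->format_name);
        return -1;
    }
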
Max Reitz
0fc9a06b56 block/nfs: Add create_opts
The nfs protocol driver is capable of creating images, but did not
specify any creation options. Fix it.

A way to test this issue is the following:

$ qemu-img create -f nfs nfs://127.0.0.1/foo.qcow2 64M

Without this patch, it segfaults. With this patch, it does not. However,
this is not something that should really work; qemu-img should check
whether the parameter for the -f option (and -O for convert) is indeed a
format, and error out if it is not. Therefore, I am not making it an
iotest.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit fd752801ae)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:37 -06:00
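
For a protocol driver that only needs a size, the creation options are a
one-entry QemuOptsList; a sketch of what the driver gains (names assumed):

    static QemuOptsList nfs_create_opts = {
        .name = "nfs-create-opts",
        .head = QTAILQ_HEAD_INITIALIZER(nfs_create_opts.head),
        .desc = {
            {
                .name = BLOCK_OPT_SIZE,
                .type = QEMU_OPT_SIZE,
                .help = "Virtual disk size"
            },
            { /* end of list */ }
        }
    };

    /* and in the BlockDriver definition: .create_opts = &nfs_create_opts */
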
Max Reitz
1961d1c347 block/vvfat: qcow driver may not be found
Although virtually impossible right now, bdrv_find_format("qcow") may
fail. The vvfat block driver should heed that case.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1bcb15cf77)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:31 -06:00
Max Reitz
e81703b42c block: Omit bdrv_find_format for essential drivers
We can always assume raw, file and qcow2 being available; so do not use
bdrv_find_format() to locate their BlockDriver objects but statically
reference the respective objects.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ef8104378c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:25 -06:00
Max Reitz
7e213f8535 block: Make essential BlockDriver objects public
There are some block drivers which are essential to QEMU and may not be
removed: These are raw, file and qcow2 (as the default non-raw format).
Make their BlockDriver objects public so they can be directly referenced
throughout the block layer without needing to call bdrv_find_format()
and having to deal with an error at runtime, while the real problem
occurred during linking (where raw, file or qcow2 were not linked into
qemu).

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 5f535a941e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2015-02-22 12:03:18 -06:00
3341 changed files with 132935 additions and 342735 deletions


@@ -1,2 +0,0 @@
((c-mode . ((c-file-style . "stroustrup")
(indent-tabs-mode . nil))))

.gitignore

@@ -17,15 +17,12 @@
/trace/generated-tcg-tracers.h
/trace/generated-ust-provider.h
/trace/generated-ust.c
/ui/shader/texture-blit-frag.h
/ui/shader/texture-blit-vert.h
/libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
/ivshmem-client
/ivshmem-server
/libdis*
/libuser
/linux-headers/asm
@@ -35,14 +32,19 @@
/qapi-visit.[ch]
/qapi-event.[ch]
/qmp-commands.h
/qmp-introspect.[ch]
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
@@ -51,17 +53,16 @@
/qemu-ga
/qemu-bridge-helper
/qemu-monitor.texi
/qemu-monitor-info.texi
/qmp-commands.txt
/vscclient
/fsdev/virtfs-proxy-helper
*.[1-9]
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
*.a
*.aux
*.cp
*.dvi
*.exe
*.msi
*.dll
*.so
*.mo
@@ -69,7 +70,6 @@
*.ky
*.log
*.pdf
*.pod
*.cps
*.fns
*.kys
@@ -108,5 +108,4 @@
cscope.*
tags
TAGS
docker-src.*
*~


@@ -1,38 +1,9 @@
sudo: false
language: c
python:
- "2.4"
compiler:
- gcc
- clang
cache: ccache
addons:
apt:
packages:
- libaio-dev
- libattr1-dev
- libbrlapi-dev
- libcap-ng-dev
- libgnutls-dev
- libgtk-3-dev
- libiscsi-dev
- liblttng-ust-dev
- libncurses5-dev
- libnss3-dev
- libpixman-1-dev
- libpng12-dev
- librados-dev
- libsdl1.2-dev
- libseccomp-dev
- libspice-protocol-dev
- libspice-server-dev
- libssh2-1-dev
- liburcu-dev
- libusb-1.0-0-dev
- libvte-2.90-dev
- sparse
- uuid-dev
notifications:
irc:
channels:
@@ -41,50 +12,89 @@ notifications:
on_failure: always
env:
global:
- TEST_CMD="make check"
- TEST_CMD=""
- EXTRA_CONFIG=""
# Development packages, EXTRA_PKGS saved for additional builds
- CORE_PKGS="libusb-1.0-0-dev libiscsi-dev librados-dev libncurses5-dev"
- NET_PKGS="libseccomp-dev libgnutls-dev libssh2-1-dev libspice-server-dev libspice-protocol-dev libnss3-dev"
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
- CONFIG=""
- CONFIG="--enable-debug --enable-debug-tcg --enable-trace-backends=log"
- CONFIG="--disable-linux-aio --disable-cap-ng --disable-attr --disable-brlapi --disable-uuid --disable-libusb"
- CONFIG="--enable-modules"
- CONFIG="--with-coroutine=ucontext"
- CONFIG="--with-coroutine=sigaltstack"
# Group major targets together with their linux-user counterparts
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user,armeb-linux-user,aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu,cris-linux-user
- TARGETS=i386-softmmu,i386-linux-user,x86_64-softmmu,x86_64-linux-user
- TARGETS=m68k-softmmu,m68k-linux-user
- TARGETS=microblaze-softmmu,microblazeel-softmmu,microblaze-linux-user,microblazeel-linux-user
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=mips-linux-user,mips64-linux-user,mips64el-linux-user,mipsel-linux-user,mipsn32-linux-user,mipsn32el-linux-user
- TARGETS=or32-softmmu,or32-linux-user
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu,ppc-linux-user,ppc64-linux-user,ppc64abi32-linux-user,ppc64le-linux-user
- TARGETS=s390x-softmmu,s390x-linux-user
- TARGETS=sh4-softmmu,sh4eb-softmmu,sh4-linux-user sh4eb-linux-user
- TARGETS=sparc-softmmu,sparc64-softmmu,sparc-linux-user,sparc32plus-linux-user,sparc64-linux-user
- TARGETS=unicore32-softmmu,unicore32-linux-user
# Group remaining softmmu only targets into one build
- TARGETS=lm32-softmmu,moxie-softmmu,tricore-softmmu,xtensa-softmmu,xtensaeb-softmmu
git:
# we want to do this ourselves
submodules: false
before_install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update ; fi
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install libffi gettext glib pixman ; fi
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
- sudo apt-get update -qq
- sudo apt-get install -qq ${CORE_PKGS} ${NET_PKGS} ${GUI_PKGS} ${EXTRA_PKGS}
before_script:
- ./configure ${CONFIG}
- ./configure --target-list=${TARGETS} --enable-debug-tcg ${EXTRA_CONFIG}
script:
- make -j3 && ${TEST_CMD}
- make -j2 && ${TEST_CMD}
matrix:
# We manually include a number of additional builds for non-standard bits
include:
# Sparse is GCC only
- env: CONFIG="--enable-sparse"
# Make check target (we only do this once)
- env:
- TARGETS=alpha-softmmu,arm-softmmu,aarch64-softmmu,cris-softmmu,
i386-softmmu,x86_64-softmmu,m68k-softmmu,microblaze-softmmu,
microblazeel-softmmu,mips-softmmu,mips64-softmmu,
mips64el-softmmu,mipsel-softmmu,or32-softmmu,ppc-softmmu,
ppc64-softmmu,ppcemb-softmmu,s390x-softmmu,sh4-softmmu,
sh4eb-softmmu,sparc-softmmu,sparc64-softmmu,
unicore32-softmmu,unicore32-linux-user,
lm32-softmmu,moxie-softmmu,tricore-softmmu,xtensa-softmmu,
xtensaeb-softmmu
TEST_CMD="make check"
compiler: gcc
# gprof/gcov are GCC features
- env: CONFIG="--enable-gprof --enable-gcov --disable-pie"
# Debug related options
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug"
compiler: gcc
# We manually include builds which we disable "make check" for
- env: CONFIG="--enable-debug --enable-tcg-interpreter"
TEST_CMD=""
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
- env: CONFIG="--enable-trace-backends=simple"
TEST_CMD=""
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
- env: CONFIG="--enable-trace-backends=ftrace"
TEST_CMD=""
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
- env: CONFIG="--enable-trace-backends=ust"
TEST_CMD=""
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="sparse"
EXTRA_CONFIG="--enable-sparse"
compiler: gcc
- env: CONFIG="--with-coroutine=gthread"
TEST_CMD=""
# All the trace backends (apart from dtrace)
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=stderr"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=simple"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=ftrace"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backends=ust"
compiler: gcc
- env: CONFIG=""
os: osx
compiler: clang


@@ -87,15 +87,10 @@ Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within
blocks) are generally not allowed; declarations should be at the beginning
of blocks.
Every now and then, an exception is made for declarations inside a
#ifdef or #ifndef block: if the code looks nicer, such declarations can
be placed at the top of the block even if there are statements above.
On the other hand, however, it's often best to move that #ifdef/#ifndef
block to a separate function altogether.
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.
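
For example (an illustrative sketch, not part of the file; g() is a
placeholder call), the rule means writing:

    void f(void)
    {
        int i;                      /* declarations first ... */

        for (i = 0; i < 10; i++) {  /* ... statements afterwards */
            g(i);                   /* g() is a placeholder */
        }
    }
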
6. Conditional statements

HACKING

@@ -157,58 +157,3 @@ painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)
7. Error handling and reporting
7.1 Reporting errors to the human user
Do not use printf(), fprintf() or monitor_printf(). Instead, use
error_report() or error_vreport() from error-report.h. This ensures the
error is reported in the right place (current monitor or stderr), and in
a uniform format.
Use error_printf() & friends to print additional information.
error_report() prints the current location. In certain common cases
like command line parsing, the current location is tracked
automatically. To manipulate it manually, use the loc_*() from
error-report.h.
7.2 Propagating errors
An error can't always be reported to the user right where it's detected,
but often needs to be propagated up the call chain to a place that can
handle it. This can be done in various ways.
The most flexible one is Error objects. See error.h for usage
information.
Use the simplest suitable method to communicate success / failure to
callers. Stick to common methods: non-negative on success / -1 on
error, non-negative / -errno, non-null / null, or Error objects.
Example: when a function returns a non-null pointer on success, and it
can fail only in one way (as far as the caller is concerned), returning
null on failure is just fine, and certainly simpler and a lot easier on
the eyes than propagating an Error object through an Error ** parameter.
Example: when a function's callers need to report details on failure
only the function really knows, use Error **, and set suitable errors.
Do not report an error to the user when you're also returning an error
for somebody else to handle. Leave the reporting to the place that
consumes the error returned.
7.3 Handling errors
Calling exit() is fine when handling configuration errors during
startup. It's problematic during normal operation. In particular,
monitor commands should never exit().
Do not call exit() or abort() to handle an error that can be triggered
by the guest (e.g., some unimplemented corner case in guest code
translation or device emulation). Guests should not be able to
terminate QEMU.
Note that &error_fatal is just another way to exit(1), and &error_abort
is just another way to abort().
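
A compact sketch of the propagation pattern described above, assuming QEMU's
error API (error_setg(), error_propagate(), error_report_err()); the function
names frobnicate(), configure_thing() and handle_request() are purely
illustrative:

    /* Callee: detects the problem and sets an Error for its caller. */
    static int frobnicate(int value, Error **errp)
    {
        if (value < 0) {
            error_setg(errp, "value must be non-negative, got %d", value);
            return -1;
        }
        return 0;
    }

    /* Intermediate layer: cannot handle the error, so it propagates it up. */
    static int configure_thing(int value, Error **errp)
    {
        Error *local_err = NULL;

        if (frobnicate(value, &local_err) < 0) {
            error_propagate(errp, local_err);
            return -1;
        }
        return 0;
    }

    /* Consumer: the one place that reports the error to the user. */
    static void handle_request(int value)
    {
        Error *err = NULL;

        if (configure_thing(value, &err) < 0) {
            error_report_err(err);
        }
    }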


@@ -11,7 +11,7 @@ option) any later version.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/vfio/, hw/xen/xen_pt*.
or directories: bsd-user/, linux-user/, hw/misc/vfio.c, hw/xen/xen_pt*.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).

File diff suppressed because it is too large

Makefile

@@ -3,11 +3,6 @@
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
# Before including a proper config-host.mak, assume we are in the source tree
SRC_PATH=.
UNCHECKED_GOALS := %clean TAGS cscope ctags docker docker-%
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
@@ -30,6 +25,7 @@ CONFIG_ALL=y
-include config-all-devices.mak
-include config-all-disas.mak
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@echo $@ is out-of-date, running configure
@# TODO: The next lines include code which supports a smooth
@@ -42,19 +38,15 @@ config-host.mak: $(SRC_PATH)/configure
fi
else
config-host.mak:
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@echo "Please call configure before running make!"
@exit 1
endif
endif
include $(SRC_PATH)/rules.mak
GENERATED_HEADERS = config-host.h qemu-options.def
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h qapi-event.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c qapi-event.c
GENERATED_HEADERS += qmp-introspect.h
GENERATED_SOURCES += qmp-introspect.c
GENERATED_HEADERS += trace/generated-events.h
GENERATED_SOURCES += trace/generated-events.c
@@ -82,7 +74,7 @@ Makefile: ;
configure: ;
.PHONY: all clean cscope distclean dvi html info install install-doc \
pdf recurse-all speed test dist msi
pdf recurse-all speed test dist
$(call set-vpath, $(SRC_PATH))
@@ -91,8 +83,7 @@ LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qemu-ga.8
DOCS+=qmp-commands.txt
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
@@ -118,9 +109,8 @@ endif
-include $(SUBDIR_DEVICES_MAK_DEP)
%/config-devices.mak: default-configs/%.mak
$(call quiet-command, \
$(SHELL) $(SRC_PATH)/scripts/make_device_config.sh $< $*-config-devices.mak.d $@ > $@.tmp, " GEN $@.tmp")
$(call quiet-command, if test -f $@; then \
$(call quiet-command,$(SHELL) $(SRC_PATH)/scripts/make_device_config.sh $@ $<, " GEN $@")
@if test -f $@; then \
if cmp -s $@.old $@; then \
mv $@.tmp $@; \
cp -p $@ $@.old; \
@@ -136,7 +126,7 @@ endif
else \
mv $@.tmp $@; \
cp -p $@ $@.old; \
fi, " GEN $@");
fi
defconfig:
rm -f config-all-devices.mak $(SUBDIR_DEVICES_MAK)
@@ -149,21 +139,18 @@ dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
ivshmem-client-obj-y \
ivshmem-server-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
block-obj-m \
crypto-obj-y \
crypto-aes-obj-y \
qom-obj-y \
io-obj-y \
common-obj-y \
common-obj-m)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
ifeq ($(CONFIG_SMARTCARD_NSS),y)
include $(SRC_PATH)/libcacard/Makefile
endif
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all modules
@@ -176,8 +163,6 @@ SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): $(block-obj-y)
$(SOFTMMU_SUBDIR_RULES): $(crypto-obj-y)
$(SOFTMMU_SUBDIR_RULES): $(io-obj-y)
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
@@ -202,7 +187,7 @@ subdir-dtc:dtc/libfdt dtc/tests
dtc/%:
mkdir -p $@
$(SUBDIR_RULES): libqemuutil.a libqemustub.a $(common-obj-y) $(qom-obj-y) $(crypto-aes-obj-$(CONFIG_USER_ONLY))
$(SUBDIR_RULES): libqemuutil.a libqemustub.a $(common-obj-y)
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
romsubdir-%:
@@ -212,9 +197,9 @@ ALL_SUBDIRS=$(TARGET_DIRS) $(patsubst %,pc-bios/%, $(ROMS))
recurse-all: $(SUBDIR_RULES) $(ROMSUBDIR_RULES)
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc config-host.h | $(BUILD_DIR)/version.lo
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h | $(BUILD_DIR)/version.lo
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.o")
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc config-host.h
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.lo")
Makefile: $(version-obj-y) $(version-lobj-y)
@@ -232,13 +217,13 @@ util/module.o-cflags = -D'CONFIG_BLOCK_MODULES=$(block-modules)'
qemu-img.o: qemu-img-cmds.h
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) $(crypto-obj-y) $(io-obj-y) $(qom-obj-y) libqemuutil.a libqemustub.a
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o libqemuutil.a libqemustub.a
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/9p-marshal.o fsdev/9p-iov-marshal.o libqemuutil.a libqemustub.a
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o libqemuutil.a libqemustub.a
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
@@ -254,49 +239,42 @@ qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" $<, \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qapi-modules = $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/qapi/common.json \
$(SRC_PATH)/qapi/block.json $(SRC_PATH)/qapi/block-core.json \
$(SRC_PATH)/qapi/event.json $(SRC_PATH)/qapi/introspect.json \
$(SRC_PATH)/qapi/crypto.json $(SRC_PATH)/qapi/rocker.json \
$(SRC_PATH)/qapi/trace.json
$(SRC_PATH)/qapi/event.json
qapi-types.c qapi-types.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o "." -b $<, \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qapi-visit.c qapi-visit.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o "." -b $<, \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qapi-event.c qapi-event.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-event.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-event.py \
$(gen-out-type) -o "." $<, \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qmp-commands.h qmp-marshal.c :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o "." -m $<, \
" GEN $@")
qmp-introspect.h qmp-introspect.c :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-introspect.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-introspect.py \
$(gen-out-type) -o "." $<, \
$(gen-out-type) -o "." -m -i $<, \
" GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
@@ -305,44 +283,15 @@ $(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
ifdef QEMU_GA_MSI_ENABLED
QEMU_GA_MSI=qemu-ga-$(ARCH).msi
msi: $(QEMU_GA_MSI)
$(QEMU_GA_MSI): qemu-ga.exe $(QGA_VSS_PROVIDER)
$(QEMU_GA_MSI): config-host.mak
$(QEMU_GA_MSI): $(SRC_PATH)/qga/installer/qemu-ga.wxs
$(call quiet-command,QEMU_GA_VERSION="$(QEMU_GA_VERSION)" QEMU_GA_MANUFACTURER="$(QEMU_GA_MANUFACTURER)" QEMU_GA_DISTRO="$(QEMU_GA_DISTRO)" BUILD_DIR="$(BUILD_DIR)" \
wixl -o $@ $(QEMU_GA_MSI_ARCH) $(QEMU_GA_MSI_WITH_VSS) $(QEMU_GA_MSI_MINGW_DLL_PATH) $<, " WIXL $@")
else
msi:
@echo "MSI build not configured or dependency resolution failed (reconfigure with --enable-guest-agent-msi option)"
endif
ifneq ($(EXESUF),)
.PHONY: qemu-ga
qemu-ga: qemu-ga$(EXESUF) $(QGA_VSS_PROVIDER) $(QEMU_GA_MSI)
endif
ivshmem-client$(EXESUF): $(ivshmem-client-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
ivshmem-server$(EXESUF): $(ivshmem-server-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
rm -f *.msi
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
rm -f qemu-img-cmds.h
rm -f ui/shader/*-vert.h ui/shader/*-frag.h
@# May not be present in GENERATED_HEADERS
rm -f trace/generated-tracers-dtrace.dtrace*
rm -f trace/generated-tracers-dtrace.h*
@@ -354,7 +303,6 @@ clean:
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
done
rm -f $(SUBDIR_DEVICES_MAK) config-all-devices.mak
VERSION ?= $(shell cat VERSION)
@@ -364,9 +312,9 @@ qemu-%.tar.bz2:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi qemu-monitor-info.texi
rm -f config-all-devices.mak config-all-disas.mak config.status
rm -f po/*.mo tests/qemu-iotests/common.env
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak config-all-disas.mak
rm -f po/*.mo
rm -f roms/seabios/config.mak roms/vgabios/config.mak
rm -f qemu-doc.info qemu-doc.aux qemu-doc.cp qemu-doc.cps qemu-doc.dvi
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
@@ -379,8 +327,8 @@ distclean: clean
rm -rf $$d || exit 1 ; \
done
rm -Rf .sdk
if test -f pixman/config.log; then $(MAKE) -C pixman distclean; fi
if test -f dtc/version_gen.h; then $(MAKE) $(DTC_MAKE_ARGS) clean; fi
if test -f pixman/config.log; then make -C pixman distclean; fi
if test -f dtc/version_gen.h; then make $(DTC_MAKE_ARGS) clean; fi
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
@@ -389,8 +337,8 @@ bepo cz
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin vgabios-virtio.bin \
acpi-dsdt.aml \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
@@ -399,6 +347,7 @@ efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom \
qemu-icon.bmp qemu_logo_no_text.svg \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
s390-zipl.rom \
s390-ccw.img \
spapr-rtas.bin slof.bin \
palcode-clipper \
@@ -419,9 +368,6 @@ ifneq ($(TOOLS),)
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man8"
$(INSTALL_DATA) qemu-nbd.8 "$(DESTDIR)$(mandir)/man8"
endif
ifneq (,$(findstring qemu-ga,$(TOOLS)))
$(INSTALL_DATA) qemu-ga.8 "$(DESTDIR)$(mandir)/man8"
endif
endif
ifdef CONFIG_VIRTFS
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
@@ -438,11 +384,16 @@ ifneq (,$(findstring qemu-ga,$(TOOLS)))
endif
endif
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) \
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig \
install-datadir install-localstatedir
ifneq ($(TOOLS),)
$(call install-prog,$(subst qemu-ga,qemu-ga$(EXESUF),$(TOOLS)),$(DESTDIR)$(bindir))
$(call install-prog,$(TOOLS),$(DESTDIR)$(bindir))
endif
ifneq ($(CONFIG_MODULES),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_moddir)"
@@ -476,36 +427,15 @@ endif
test speed: all
$(MAKE) -C tests/tcg $@
.PHONY: ctags
ctags:
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec ctags --append {} +
.PHONY: TAGS
TAGS:
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec etags --append {} +
cscope:
rm -f "$(SRC_PATH)"/cscope.*
find "$(SRC_PATH)/" -name "*.[chsS]" -print | sed 's,^\./,,' > "$(SRC_PATH)/cscope.files"
cscope -b -i"$(SRC_PATH)/cscope.files"
# opengl shader programs
ui/shader/%-vert.h: $(SRC_PATH)/ui/shader/%.vert $(SRC_PATH)/scripts/shaderinclude.pl
@mkdir -p $(dir $@)
$(call quiet-command,\
perl $(SRC_PATH)/scripts/shaderinclude.pl $< > $@,\
" VERT $@")
ui/shader/%-frag.h: $(SRC_PATH)/ui/shader/%.frag $(SRC_PATH)/scripts/shaderinclude.pl
@mkdir -p $(dir $@)
$(call quiet-command,\
perl $(SRC_PATH)/scripts/shaderinclude.pl $< > $@,\
" FRAG $@")
ui/console-gl.o: $(SRC_PATH)/ui/console-gl.c \
ui/shader/texture-blit-vert.h ui/shader/texture-blit-frag.h
rm -f ./cscope.*
find "$(SRC_PATH)" -name "*.[chsS]" -print | sed 's,^\./,,' > ./cscope.files
cscope -b
# documentation
MAKEINFO=makeinfo
@@ -530,16 +460,13 @@ qemu-options.texi: $(SRC_PATH)/qemu-options.hx
qemu-monitor.texi: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu-monitor-info.texi: $(SRC_PATH)/hmp-commands-info.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -q < $< > $@," GEN $@")
qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi qemu-monitor-info.texi
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu.pod > $@, \
@@ -563,12 +490,6 @@ qemu-nbd.8: qemu-nbd.texi
$(POD2MAN) --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
" GEN $@")
qemu-ga.8: qemu-ga.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-ga.pod && \
$(POD2MAN) --section=8 --center=" " --release=" " qemu-ga.pod > $@, \
" GEN $@")
dvi: qemu-doc.dvi qemu-tech.dvi
html: qemu-doc.html qemu-tech.html
info: qemu-doc.info qemu-tech.info
@@ -576,8 +497,7 @@ pdf: qemu-doc.pdf qemu-tech.pdf
qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi qemu-ga.texi \
qemu-monitor-info.texi
qemu-monitor.texi qemu-img-cmds.texi
ifdef CONFIG_WIN32
@@ -602,7 +522,7 @@ installer: $(INSTALLER)
INSTDIR=/tmp/qemu-nsis
$(INSTALLER): $(SRC_PATH)/qemu.nsi
$(MAKE) install prefix=${INSTDIR}
make install prefix=${INSTDIR}
ifdef SIGNCODE
(cd ${INSTDIR}; \
for i in *.exe; do \
@@ -627,7 +547,6 @@ endif # SIGNCODE
$(if $(DLL_PATH),-DDLLDIR="$(DLL_PATH)") \
-DSRCDIR="$(SRC_PATH)" \
-DOUTFILE="$(INSTALLER)" \
-DDISPLAYVERSION="$(VERSION)" \
$(SRC_PATH)/qemu.nsi
rm -r ${INSTDIR}
ifdef SIGNCODE
@@ -637,12 +556,10 @@ endif # CONFIG_WIN
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
Makefile: $(GENERATED_HEADERS)
endif
# Include automatically generated dependency files
# Dependencies in Makefile.objs files come from our recursive subdir rules
-include $(wildcard *.d tests/*.d)
include $(SRC_PATH)/tests/docker/Makefile.include


@@ -1,38 +1,37 @@
#######################################################################
# Common libraries for tools and emulators
stub-obj-y = stubs/ crypto/
util-obj-y = util/ qobject/ qapi/
util-obj-y += qmp-introspect.o qapi-types.o qapi-visit.o qapi-event.o
stub-obj-y = stubs/
util-obj-y = util/ qobject/ qapi/ qapi-types.o qapi-visit.o qapi-event.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = async.o thread-pool.o
block-obj-y += nbd/
block-obj-y += block.o blockjob.o
block-obj-y += nbd.o block.o blockjob.o
block-obj-y += main-loop.o iohandler.o qemu-timer.o
block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y += block/
block-obj-y += qemu-io-cmds.o
block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
block-obj-m = block/
#######################################################################
# crypto-obj-y is code used by both qemu system emulation and qemu-img
crypto-obj-y = crypto/
crypto-aes-obj-y = crypto/
######################################################################
# smartcard
#######################################################################
# qom-obj-y is code used by both qemu system emulation and qemu-img
qom-obj-y = qom/
#######################################################################
# io-obj-y is code used by both qemu system emulation and qemu-img
io-obj-y = io/
libcacard-y += libcacard/cac.o libcacard/event.o
libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
libcacard-y += libcacard/vcardt.o
libcacard/vcard_emul_nss.o-cflags := $(NSS_CFLAGS)
libcacard/vcard_emul_nss.o-libs := $(NSS_LIBS)
######################################################################
# Target independent part of system emulation. The long term path is to
@@ -49,9 +48,15 @@ common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration/
common-obj-y += migration.o migration-tcp.o
common-obj-y += vmstate.o
common-obj-y += qemu-file.o qemu-file-unix.o qemu-file-stdio.o
common-obj-$(CONFIG_RDMA) += migration-rdma.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += page_cache.o
common-obj-y += block-migration.o
common-obj-y += page_cache.o xbzrle.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
@@ -59,8 +64,6 @@ common-obj-y += audio/
common-obj-y += hw/
common-obj-y += accel.o
common-obj-y += replay/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
@@ -76,18 +79,18 @@ common-obj-y += backends/
common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
common-obj-$(CONFIG_FDT) += device_tree.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
######################################################################
# qapi
common-obj-y += qmp-marshal.o
common-obj-y += qmp-introspect.o
common-obj-y += qmp.o hmp.o
endif
#######################################################################
# Target-independent parts used in system and user emulation
common-obj-y += qemu-log.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/
@@ -110,8 +113,3 @@ target-obj-y += trace/
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/
qga-vss-dll-obj-y = qga/
######################################################################
# contrib
ivshmem-client-obj-y = contrib/ivshmem-client/
ivshmem-server-obj-y = contrib/ivshmem-server/


@@ -1,13 +1,11 @@
# -*- Mode: makefile -*-
BUILD_DIR?=$(CURDIR)/..
include ../config-host.mak
include config-target.mak
include config-devices.mak
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH):$(BUILD_DIR))
$(call set-vpath, $(SRC_PATH))
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
endif
@@ -85,11 +83,8 @@ all: $(PROGS) stap
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += translate-common.o
obj-y += cpu-exec-common.o
obj-y += tcg/tcg.o tcg/tcg-op.o tcg/optimize.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-y += tcg/tcg-common.o
obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
@@ -108,12 +103,7 @@ obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal128.o
ifdef CONFIG_LINUX_USER
# Note that we only add linux-user/host/$ARCH if it exists, and
# that it must come before linux-user/host/generic in the search path.
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) \
$(patsubst %,-I%,$(wildcard $(SRC_PATH)/linux-user/host/$(ARCH))) \
-I$(SRC_PATH)/linux-user/host/generic \
-I$(SRC_PATH)/linux-user
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y += linux-user/
obj-y += gdbstub.o thunk.o user-exec.o
@@ -139,12 +129,12 @@ ifdef CONFIG_SOFTMMU
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o numa.o
obj-y += qtest.o bootdevice.o
obj-y += hw/
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o cputlb.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
obj-y += migration/ram.o migration/savevm.o
LIBS := $(libs_softmmu) $(LIBS)
LIBS+=$(libs_softmmu)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
@@ -159,7 +149,7 @@ else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
GENERATED_HEADERS += hmp-commands.h hmp-commands-info.h qmp-commands-old.h
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
@@ -178,30 +168,16 @@ target-obj-y-save := $(target-obj-y)
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
crypto-obj-y \
crypto-aes-obj-y \
qom-obj-y \
io-obj-y \
common-obj-y \
common-obj-m)
target-obj-y := $(target-obj-y-save)
all-obj-y += $(common-obj-y)
all-obj-y += $(target-obj-y)
all-obj-y += $(qom-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
all-obj-$(CONFIG_USER_ONLY) += $(crypto-aes-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(crypto-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(io-obj-y)
$(QEMU_PROG_BUILD): config-devices.mak
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK, $(filter-out %.mak, $^))
ifdef CONFIG_DARWIN
$(call quiet-command,Rez -append $(SRC_PATH)/pc-bios/qemu.rsrc -o $@," REZ $(TARGET_DIR)$@")
$(call quiet-command,SetFile -a C $@," SETFILE $(TARGET_DIR)$@")
endif
$(call LINK,$^)
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
@@ -209,9 +185,6 @@ gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
hmp-commands-info.h: $(SRC_PATH)/hmp-commands-info.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")

README

@@ -1,107 +1,3 @@
QEMU README
===========
Read the documentation in qemu-doc.html or on http://wiki.qemu-project.org
QEMU is a generic and open source machine & userspace emulator and
virtualizer.
QEMU is capable of emulating a complete machine in software without any
need for hardware virtualization support. By using dynamic translation,
it achieves very good performance. QEMU can also integrate with the Xen
and KVM hypervisors to provide emulated hardware while allowing the
hypervisor to manage the CPU. With hypervisor support, QEMU can achieve
near native performance for CPUs. When QEMU emulates CPUs directly it is
capable of running operating systems made for one machine (e.g. an ARMv7
board) on a different machine (e.g. an x86_64 PC board).
QEMU is also capable of providing userspace API virtualization for Linux
and BSD kernel interfaces. This allows binaries compiled against one
architecture ABI (e.g. the Linux PPC64 ABI) to be run on a host using a
different architecture ABI (e.g. the Linux x86_64 ABI). This does not
involve any hardware emulation, simply CPU and syscall emulation.
QEMU aims to fit into a variety of use cases. It can be invoked directly
by users wishing to have full control over its behaviour and settings.
It also aims to facilitate integration into higher level management
layers, by providing a stable command line interface and monitor API.
It is commonly invoked indirectly via the libvirt library when using
open source applications such as oVirt, OpenStack and virt-manager.
QEMU as a whole is released under the GNU General Public License,
version 2. For full licensing details, consult the LICENSE file.
Building
========
QEMU is multi-platform software intended to be buildable on all modern
Linux platforms, OS-X, Win32 (via the Mingw64 toolchain) and a variety
of other UNIX targets. The simple steps to build QEMU are:
mkdir build
cd build
../configure
make
Complete details of the process for building and configuring QEMU for
all supported host platforms can be found in the qemu-tech.html file.
Additional information can also be found online via the QEMU website:
http://qemu-project.org/Hosts/Linux
http://qemu-project.org/Hosts/W32
Submitting patches
==================
The QEMU source code is maintained under the GIT version control system.
git clone git://git.qemu-project.org/qemu.git
When submitting patches, the preferred approach is to use 'git
format-patch' and/or 'git send-email' to format & send the mail to the
qemu-devel@nongnu.org mailing list. All patches submitted must contain
a 'Signed-off-by' line from the author. Patches should follow the
guidelines set out in the HACKING and CODING_STYLE files.
Additional information on submitting patches can be found online via
the QEMU website
http://qemu-project.org/Contribute/SubmitAPatch
http://qemu-project.org/Contribute/TrivialPatches
Bug reporting
=============
The QEMU project uses Launchpad as its primary upstream bug tracker. Bugs
found when running code built from QEMU git or upstream released sources
should be reported via:
https://bugs.launchpad.net/qemu/
If using QEMU via an operating system vendor pre-built binary package, it
is preferable to report bugs to the vendor's own bug tracker first. If
the bug is also known to affect latest upstream code, it can also be
reported via launchpad.
For additional information on bug reporting consult:
http://qemu-project.org/Contribute/ReportABug
Contact
=======
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC
- qemu-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/qemu-devel
- #qemu on irc.oftc.net
Information on additional methods of contacting the community can be
found online via the QEMU website:
http://qemu-project.org/Contribute/StartHere
-- End
- QEMU team


@@ -1 +1 @@
2.6.50
2.2.1


@@ -23,7 +23,6 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "qemu-common.h"
@@ -77,7 +76,7 @@ static int accel_init_machine(AccelClass *acc, MachineState *ms)
return ret;
}
void configure_accelerator(MachineState *ms)
int configure_accelerator(MachineState *ms)
{
const char *p;
char buf[10];
@@ -128,6 +127,8 @@ void configure_accelerator(MachineState *ms)
if (init_failed) {
fprintf(stderr, "Back to %s accelerator.\n", acc->name);
}
return !accel_initialised;
}


@@ -13,14 +13,10 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
#ifdef CONFIG_EPOLL_CREATE1
#include <sys/epoll.h>
#endif
struct AioHandler
{
@@ -28,167 +24,11 @@ struct AioHandler
IOHandler *io_read;
IOHandler *io_write;
int deleted;
int pollfds_idx;
void *opaque;
bool is_external;
QLIST_ENTRY(AioHandler) node;
};
#ifdef CONFIG_EPOLL_CREATE1
/* The fd number threshold to switch to epoll */
#define EPOLL_ENABLE_THRESHOLD 64
static void aio_epoll_disable(AioContext *ctx)
{
ctx->epoll_available = false;
if (!ctx->epoll_enabled) {
return;
}
ctx->epoll_enabled = false;
close(ctx->epollfd);
}
static inline int epoll_events_from_pfd(int pfd_events)
{
return (pfd_events & G_IO_IN ? EPOLLIN : 0) |
(pfd_events & G_IO_OUT ? EPOLLOUT : 0) |
(pfd_events & G_IO_HUP ? EPOLLHUP : 0) |
(pfd_events & G_IO_ERR ? EPOLLERR : 0);
}
static bool aio_epoll_try_enable(AioContext *ctx)
{
AioHandler *node;
struct epoll_event event;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int r;
if (node->deleted || !node->pfd.events) {
continue;
}
event.events = epoll_events_from_pfd(node->pfd.events);
event.data.ptr = node;
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, node->pfd.fd, &event);
if (r) {
return false;
}
}
ctx->epoll_enabled = true;
return true;
}
static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
{
struct epoll_event event;
int r;
if (!ctx->epoll_enabled) {
return;
}
if (!node->pfd.events) {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_DEL, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
} else {
event.data.ptr = node;
event.events = epoll_events_from_pfd(node->pfd.events);
if (is_new) {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
} else {
r = epoll_ctl(ctx->epollfd, EPOLL_CTL_MOD, node->pfd.fd, &event);
if (r) {
aio_epoll_disable(ctx);
}
}
}
}
static int aio_epoll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
AioHandler *node;
int i, ret = 0;
struct epoll_event events[128];
assert(npfd == 1);
assert(pfds[0].fd == ctx->epollfd);
if (timeout > 0) {
ret = qemu_poll_ns(pfds, npfd, timeout);
}
if (timeout <= 0 || ret > 0) {
ret = epoll_wait(ctx->epollfd, events,
sizeof(events) / sizeof(events[0]),
timeout);
if (ret <= 0) {
goto out;
}
for (i = 0; i < ret; i++) {
int ev = events[i].events;
node = events[i].data.ptr;
node->pfd.revents = (ev & EPOLLIN ? G_IO_IN : 0) |
(ev & EPOLLOUT ? G_IO_OUT : 0) |
(ev & EPOLLHUP ? G_IO_HUP : 0) |
(ev & EPOLLERR ? G_IO_ERR : 0);
}
}
out:
return ret;
}
static bool aio_epoll_enabled(AioContext *ctx)
{
/* Fall back to ppoll when external clients are disabled. */
return !aio_external_disabled(ctx) && ctx->epoll_enabled;
}
static bool aio_epoll_check_poll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
if (!ctx->epoll_available) {
return false;
}
if (aio_epoll_enabled(ctx)) {
return true;
}
if (npfd >= EPOLL_ENABLE_THRESHOLD) {
if (aio_epoll_try_enable(ctx)) {
return true;
} else {
aio_epoll_disable(ctx);
}
}
return false;
}
#else
static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
{
}
static int aio_epoll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
assert(false);
}
static bool aio_epoll_enabled(AioContext *ctx)
{
return false;
}
static bool aio_epoll_check_poll(AioContext *ctx, GPollFD *pfds,
unsigned npfd, int64_t timeout)
{
return false;
}
#endif
static AioHandler *find_aio_handler(AioContext *ctx, int fd)
{
AioHandler *node;
@@ -204,14 +44,11 @@ static AioHandler *find_aio_handler(AioContext *ctx, int fd)
void aio_set_fd_handler(AioContext *ctx,
int fd,
bool is_external,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
AioHandler *node;
bool is_new = false;
bool deleted = false;
node = find_aio_handler(ctx, fd);
@@ -230,43 +67,37 @@ void aio_set_fd_handler(AioContext *ctx,
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
deleted = true;
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node = g_malloc0(sizeof(AioHandler));
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
is_new = true;
}
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
node->is_external = is_external;
node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
}
aio_epoll_update(ctx, node, is_new);
aio_notify(ctx);
if (deleted) {
g_free(node);
}
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
bool is_external,
EventNotifierHandler *io_read)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
is_external, (IOHandler *)io_read, NULL, notifier);
(IOHandler *)io_read, NULL, notifier);
}
bool aio_prepare(AioContext *ctx)
@@ -282,12 +113,10 @@ bool aio_pending(AioContext *ctx)
int revents;
revents = node->pfd.revents & node->pfd.events;
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
aio_node_check(ctx, node->is_external)) {
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
return true;
}
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
aio_node_check(ctx, node->is_external)) {
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
return true;
}
}
@@ -325,7 +154,6 @@ bool aio_dispatch(AioContext *ctx)
if (!node->deleted &&
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
aio_node_check(ctx, node->is_external) &&
node->io_read) {
node->io_read(node->opaque);
@@ -336,7 +164,6 @@ bool aio_dispatch(AioContext *ctx)
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
aio_node_check(ctx, node->is_external) &&
node->io_write) {
node->io_write(node->opaque);
progress = true;
@@ -359,141 +186,69 @@ bool aio_dispatch(AioContext *ctx)
return progress;
}
/* These thread-local variables are used only in a small part of aio_poll
* around the call to the poll() system call. In particular they are not
* used while aio_poll is performing callbacks, which makes it much easier
* to think about reentrancy!
*
* Stack-allocated arrays would be perfect but they have size limitations;
* heap allocation is expensive enough that we want to reuse arrays across
* calls to aio_poll(). And because poll() has to be called without holding
* any lock, the arrays cannot be stored in AioContext. Thread-local data
* has none of the disadvantages of these three options.
*/
static __thread GPollFD *pollfds;
static __thread AioHandler **nodes;
static __thread unsigned npfd, nalloc;
static __thread Notifier pollfds_cleanup_notifier;
static void pollfds_cleanup(Notifier *n, void *unused)
{
g_assert(npfd == 0);
g_free(pollfds);
g_free(nodes);
nalloc = 0;
}
static void add_pollfd(AioHandler *node)
{
if (npfd == nalloc) {
if (nalloc == 0) {
pollfds_cleanup_notifier.notify = pollfds_cleanup;
qemu_thread_atexit_add(&pollfds_cleanup_notifier);
nalloc = 8;
} else {
g_assert(nalloc <= INT_MAX);
nalloc *= 2;
}
pollfds = g_renew(GPollFD, pollfds, nalloc);
nodes = g_renew(AioHandler *, nodes, nalloc);
}
nodes[npfd] = node;
pollfds[npfd] = (GPollFD) {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
npfd++;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
int i, ret;
bool was_dispatching;
int ret;
bool progress;
int64_t timeout;
aio_context_acquire(ctx);
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
* if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
}
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
assert(npfd == 0);
g_array_set_size(ctx->pollfds, 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->pfd.events
&& !aio_epoll_enabled(ctx)
&& aio_node_check(ctx, node->is_external)) {
add_pollfd(node);
node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
GPollFD pfd = {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
node->pollfds_idx = ctx->pollfds->len;
g_array_append_val(ctx->pollfds, pfd);
}
}
timeout = blocking ? aio_compute_timeout(ctx) : 0;
ctx->walking_handlers--;
/* wait until next event */
if (timeout) {
aio_context_release(ctx);
}
if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
AioHandler epoll_handler;
epoll_handler.pfd.fd = ctx->epollfd;
epoll_handler.pfd.events = G_IO_IN | G_IO_OUT | G_IO_HUP | G_IO_ERR;
npfd = 0;
add_pollfd(&epoll_handler);
ret = aio_epoll(ctx, pollfds, npfd, timeout);
} else {
ret = qemu_poll_ns(pollfds, npfd, timeout);
}
if (blocking) {
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
aio_notify_accept(ctx);
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? aio_compute_timeout(ctx) : 0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
for (i = 0; i < npfd; i++) {
nodes[i]->pfd.revents = pollfds[i].revents;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pollfds_idx != -1) {
GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
node->pollfds_idx);
node->pfd.revents = pfd->revents;
}
}
}
npfd = 0;
ctx->walking_handlers--;
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
if (aio_dispatch(ctx)) {
progress = true;
}
aio_context_release(ctx);
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
void aio_context_setup(AioContext *ctx, Error **errp)
{
#ifdef CONFIG_EPOLL_CREATE1
assert(!ctx->epollfd);
ctx->epollfd = epoll_create1(EPOLL_CLOEXEC);
if (ctx->epollfd == -1) {
ctx->epoll_available = false;
} else {
ctx->epoll_available = true;
}
#endif
}


@@ -15,7 +15,6 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
@@ -29,13 +28,11 @@ struct AioHandler {
GPollFD pfd;
int deleted;
void *opaque;
bool is_external;
QLIST_ENTRY(AioHandler) node;
};
void aio_set_fd_handler(AioContext *ctx,
int fd,
bool is_external,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
@@ -70,7 +67,7 @@ void aio_set_fd_handler(AioContext *ctx,
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node = g_malloc0(sizeof(AioHandler));
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
}
@@ -89,7 +86,6 @@ void aio_set_fd_handler(AioContext *ctx,
node->opaque = opaque;
node->io_read = io_read;
node->io_write = io_write;
node->is_external = is_external;
event = event_notifier_get_handle(&ctx->notifier);
WSAEventSelect(node->pfd.fd, event,
@@ -102,7 +98,6 @@ void aio_set_fd_handler(AioContext *ctx,
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
bool is_external,
EventNotifierHandler *io_notify)
{
AioHandler *node;
@@ -134,11 +129,10 @@ void aio_set_event_notifier(AioContext *ctx,
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node = g_malloc0(sizeof(AioHandler));
node->e = e;
node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
node->pfd.events = G_IO_IN;
node->is_external = is_external;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
@@ -285,33 +279,36 @@ bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress, have_select_revents, first;
bool was_dispatching, progress, have_select_revents, first;
int count;
int timeout;
aio_context_acquire(ctx);
have_select_revents = aio_prepare(ctx);
if (have_select_revents) {
blocking = false;
}
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns,
* so disable the optimization now.
* if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
if (blocking) {
atomic_add(&ctx->notify_me, 2);
}
have_select_revents = aio_prepare(ctx);
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
/* fill fd sets */
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->io_notify
&& aio_node_check(ctx, node->is_external)) {
if (!node->deleted && node->io_notify) {
events[count++] = event_notifier_get_handle(node->e);
}
}
@@ -319,36 +316,20 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
first = true;
/* ctx->notifier is always registered. */
assert(count > 0);
/* Multiple iterations, all of them non-blocking except the first,
* may be necessary to process all pending events. After the first
* WaitForMultipleObjects call ctx->notify_me will be decremented.
*/
do {
/* wait until next event */
while (count > 0) {
HANDLE event;
int ret;
timeout = blocking && !have_select_revents
timeout = blocking
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
if (timeout) {
aio_context_release(ctx);
}
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
if (blocking) {
assert(first);
atomic_sub(&ctx->notify_me, 2);
}
if (timeout) {
aio_context_acquire(ctx);
}
aio_set_dispatching(ctx, true);
if (first) {
aio_notify_accept(ctx);
progress |= aio_bh_poll(ctx);
first = false;
if (first && aio_bh_poll(ctx)) {
progress = true;
}
first = false;
/* if we have any signaled events, dispatch event */
event = NULL;
@@ -363,14 +344,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
blocking = false;
progress |= aio_dispatch_handlers(ctx, event);
} while (count > 0);
}
progress |= timerlistgroup_run_timers(&ctx->tlg);
aio_context_release(ctx);
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
void aio_context_setup(AioContext *ctx, Error **errp)
{
}

File diff suppressed because it is too large

async.c

@@ -22,8 +22,6 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/aio.h"
#include "block/thread-pool.h"
@@ -46,12 +44,10 @@ struct QEMUBH {
QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_new(QEMUBH, 1);
*bh = (QEMUBH){
.ctx = ctx,
.cb = cb,
.opaque = opaque,
};
bh = g_malloc0(sizeof(QEMUBH));
bh->ctx = ctx;
bh->cb = cb;
bh->opaque = opaque;
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
@@ -61,11 +57,6 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
return bh;
}
void aio_bh_call(QEMUBH *bh)
{
bh->cb(bh->opaque);
}
/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
int aio_bh_poll(AioContext *ctx)
{
@@ -79,19 +70,16 @@ int aio_bh_poll(AioContext *ctx)
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
/* The atomic_xchg is paired with the one in qemu_bh_schedule. The
* implicit memory barrier ensures that the callback sees all writes
* done by the scheduling thread. It also ensures that the scheduling
* thread sees the zero before bh->cb has run, and thus will call
* aio_notify again if necessary.
*/
if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
/* Idle BHs and the notify BH don't count as progress */
if (!bh->idle && bh != ctx->notify_dummy_bh) {
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
/* Paired with write barrier in bh schedule to ensure reading for
* idle & callbacks coming after bh's scheduling.
*/
smp_rmb();
if (!bh->idle)
ret = 1;
}
bh->idle = 0;
aio_bh_call(bh);
bh->cb(bh->opaque);
}
}
@@ -118,28 +106,33 @@ int aio_bh_poll(AioContext *ctx)
void qemu_bh_schedule_idle(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
atomic_mb_set(&bh->scheduled, 1);
smp_wmb();
bh->scheduled = 1;
}
void qemu_bh_schedule(QEMUBH *bh)
{
AioContext *ctx;
if (bh->scheduled)
return;
ctx = bh->ctx;
bh->idle = 0;
/* The memory barrier implicit in atomic_xchg makes sure that:
/* Make sure that:
* 1. idle & any writes needed by the callback are done before the
* locations are read in the aio_bh_poll.
* 2. ctx is loaded before scheduled is set and the callback has a chance
* to execute.
*/
if (atomic_xchg(&bh->scheduled, 1) == 0) {
aio_notify(ctx);
}
smp_mb();
bh->scheduled = 1;
aio_notify(ctx);
}
@@ -193,8 +186,6 @@ aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
atomic_or(&ctx->notify_me, 1);
/* We assume there is no timeout already supplied */
*timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
@@ -211,9 +202,6 @@ aio_ctx_check(GSource *source)
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
atomic_and(&ctx->notify_me, ~1);
aio_notify_accept(ctx);
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
return true;
@@ -239,25 +227,12 @@ aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
qemu_bh_delete(ctx->notify_dummy_bh);
thread_pool_free(ctx->thread_pool);
qemu_mutex_lock(&ctx->bh_lock);
while (ctx->first_bh) {
QEMUBH *next = ctx->first_bh->next;
/* qemu_bh_delete() must have been called on BHs in this AioContext */
assert(ctx->first_bh->deleted);
g_free(ctx->first_bh);
ctx->first_bh = next;
}
qemu_mutex_unlock(&ctx->bh_lock);
aio_set_event_notifier(ctx, &ctx->notifier, false, NULL);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
@@ -282,22 +257,24 @@ ThreadPool *aio_get_thread_pool(AioContext *ctx)
return ctx->thread_pool;
}
void aio_notify(AioContext *ctx)
void aio_set_dispatching(AioContext *ctx, bool dispatching)
{
/* Write e.g. bh->scheduled before reading ctx->notify_me. Pairs
* with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
*/
smp_mb();
if (ctx->notify_me) {
event_notifier_set(&ctx->notifier);
atomic_mb_set(&ctx->notified, true);
ctx->dispatching = dispatching;
if (!dispatching) {
/* Write ctx->dispatching before reading e.g. bh->scheduled.
* Optimization: this is only needed when we're entering the "unsafe"
* phase where other threads must call event_notifier_set.
*/
smp_mb();
}
}
void aio_notify_accept(AioContext *ctx)
void aio_notify(AioContext *ctx)
{
if (atomic_xchg(&ctx->notified, false)) {
event_notifier_test_and_clear(&ctx->notifier);
/* Write e.g. bh->scheduled before reading ctx->dispatching. */
smp_mb();
if (!ctx->dispatching) {
event_notifier_set(&ctx->notifier);
}
}
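
Both variants of aio_notify() shown above lean on the same ordering idiom: each thread stores its own flag (bh->scheduled on the notifier side, notify_me or dispatching on the event-loop side), issues a full barrier, then reads the other thread's flag, so at least one side observes the other and a wakeup is never lost. A reduced sketch of that idiom (not part of the diff; sequentially consistent C11 atomics make the barriers implicit, and the names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool scheduled;   /* work has been queued            */
static atomic_bool notify_me;   /* event loop is about to block    */

/* Notifier thread: publish the work, then kick the loop only if it might
 * otherwise sleep through it (kick plays the role of event_notifier_set). */
static void notify_side(void (*kick)(void))
{
    atomic_store(&scheduled, true);
    if (atomic_load(&notify_me)) {
        kick();
    }
}

/* Event-loop thread: advertise the intent to block, then re-check for
 * pending work; returns false if it must not go to sleep. */
static bool loop_may_block(void)
{
    atomic_store(&notify_me, true);
    return !atomic_load(&scheduled);
}

With sequentially consistent accesses the two stores cannot both be reordered past the opposing loads, which is the property the explicit smp_mb() calls in the hunk provide for QEMU's relaxed helpers.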
@@ -308,54 +285,31 @@ static void aio_timerlist_notify(void *opaque)
static void aio_rfifolock_cb(void *opaque)
{
AioContext *ctx = opaque;
/* Kick owner thread in case they are blocked in aio_poll() */
qemu_bh_schedule(ctx->notify_dummy_bh);
}
static void notify_dummy_bh(void *opaque)
{
/* Do nothing, we were invoked just to force the event loop to iterate */
}
static void event_notifier_dummy_cb(EventNotifier *e)
{
aio_notify(opaque);
}
AioContext *aio_context_new(Error **errp)
{
int ret;
AioContext *ctx;
Error *local_err = NULL;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
aio_context_setup(ctx, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto fail;
}
ret = event_notifier_init(&ctx->notifier, false);
if (ret < 0) {
g_source_destroy(&ctx->source);
error_setg_errno(errp, -ret, "Failed to initialize event notifier");
goto fail;
return NULL;
}
g_source_set_can_recurse(&ctx->source, true);
aio_set_event_notifier(ctx, &ctx->notifier,
false,
(EventNotifierHandler *)
event_notifier_dummy_cb);
event_notifier_test_and_clear);
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
ctx->notify_dummy_bh = aio_bh_new(ctx, notify_dummy_bh, NULL);
return ctx;
fail:
g_source_destroy(&ctx->source);
return NULL;
}
void aio_context_ref(AioContext *ctx)


@@ -5,9 +5,13 @@ common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
sdlaudio.o-cflags := $(SDL_CFLAGS)


@@ -21,12 +21,10 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <alsa/asoundlib.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "audio.h"
#include "trace.h"
#if QEMU_GNUC_PREREQ(4, 3)
#pragma GCC diagnostic ignored "-Waddress"
@@ -35,28 +33,9 @@
#define AUDIO_CAP "alsa"
#include "audio_int.h"
typedef struct ALSAConf {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
} ALSAConf;
struct pollhlp {
snd_pcm_t *handle;
struct pollfd *pfds;
ALSAConf *conf;
int count;
int mask;
};
@@ -77,6 +56,30 @@ typedef struct ALSAVoiceIn {
struct pollhlp pollhlp;
} ALSAVoiceIn;
static struct {
int size_in_usec_in;
int size_in_usec_out;
const char *pcm_name_in;
const char *pcm_name_out;
unsigned int buffer_size_in;
unsigned int period_size_in;
unsigned int buffer_size_out;
unsigned int period_size_out;
unsigned int threshold;
int buffer_size_in_overridden;
int period_size_in_overridden;
int buffer_size_out_overridden;
int period_size_out_overridden;
int verbose;
} conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
struct alsa_params_req {
int freq;
snd_pcm_format_t fmt;
@@ -202,7 +205,9 @@ static void alsa_poll_handler (void *opaque)
}
if (!(revents & hlp->mask)) {
trace_alsa_revents(revents);
if (conf.verbose) {
dolog ("revents = %d\n", revents);
}
return;
}
@@ -261,14 +266,31 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
for (i = 0; i < count; ++i) {
if (pfds[i].events & POLLIN) {
qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler, NULL, hlp);
err = qemu_set_fd_handler (pfds[i].fd, alsa_poll_handler,
NULL, hlp);
}
if (pfds[i].events & POLLOUT) {
trace_alsa_pollout(i, pfds[i].fd);
qemu_set_fd_handler (pfds[i].fd, NULL, alsa_poll_handler, hlp);
if (conf.verbose) {
dolog ("POLLOUT %d %d\n", i, pfds[i].fd);
}
err = qemu_set_fd_handler (pfds[i].fd, NULL,
alsa_poll_handler, hlp);
}
if (conf.verbose) {
dolog ("Set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
}
trace_alsa_set_handler(pfds[i].events, i, pfds[i].fd, err);
if (err) {
dolog ("Failed to set handler events=%#x index=%d fd=%d err=%d\n",
pfds[i].events, i, pfds[i].fd, err);
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
return -1;
}
}
hlp->pfds = pfds;
hlp->count = count;
@@ -454,15 +476,14 @@ static void alsa_set_threshold (snd_pcm_t *handle, snd_pcm_uframes_t threshold)
}
static int alsa_open (int in, struct alsa_params_req *req,
struct alsa_params_obt *obt, snd_pcm_t **handlep,
ALSAConf *conf)
struct alsa_params_obt *obt, snd_pcm_t **handlep)
{
snd_pcm_t *handle;
snd_pcm_hw_params_t *hw_params;
int err;
int size_in_usec;
unsigned int freq, nchannels;
const char *pcm_name = in ? conf->pcm_name_in : conf->pcm_name_out;
const char *pcm_name = in ? conf.pcm_name_in : conf.pcm_name_out;
snd_pcm_uframes_t obt_buffer_size;
const char *typ = in ? "ADC" : "DAC";
snd_pcm_format_t obtfmt;
@@ -501,7 +522,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
}
err = snd_pcm_hw_params_set_format (handle, hw_params, req->fmt);
if (err < 0) {
if (err < 0 && conf.verbose) {
alsa_logerr2 (err, typ, "Failed to set format %d\n", req->fmt);
}
@@ -633,7 +654,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
goto err;
}
if (!in && conf->threshold) {
if (!in && conf.threshold) {
snd_pcm_uframes_t threshold;
int bytes_per_sec;
@@ -655,7 +676,7 @@ static int alsa_open (int in, struct alsa_params_req *req,
break;
}
threshold = (conf->threshold * bytes_per_sec) / 1000;
threshold = (conf.threshold * bytes_per_sec) / 1000;
alsa_set_threshold (handle, threshold);
}
@@ -665,9 +686,10 @@ static int alsa_open (int in, struct alsa_params_req *req,
*handlep = handle;
if (obtfmt != req->fmt ||
if (conf.verbose &&
(obtfmt != req->fmt ||
obt->nchannels != req->nchannels ||
obt->freq != req->freq) {
obt->freq != req->freq)) {
dolog ("Audio parameters for %s\n", typ);
alsa_dump_info (req, obt, obtfmt);
}
@@ -721,7 +743,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
if (written <= 0) {
switch (written) {
case 0:
trace_alsa_wrote_zero(len);
if (conf.verbose) {
dolog ("Failed to write %d frames (wrote zero)\n", len);
}
return;
case -EPIPE:
@@ -730,7 +754,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_xrun_out();
if (conf.verbose) {
dolog ("Recovering from playback xrun\n");
}
continue;
case -ESTRPIPE:
@@ -741,7 +767,9 @@ static void alsa_write_pending (ALSAVoiceOut *alsa)
len);
return;
}
trace_alsa_resume_out();
if (conf.verbose) {
dolog ("Resuming suspended output stream\n");
}
continue;
case -EAGAIN:
@@ -791,27 +819,25 @@ static void alsa_fini_out (HWVoiceOut *hw)
alsa->pcm_buf = NULL;
}
static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ALSAVoiceOut *alsa = (ALSAVoiceOut *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_out;
req.buffer_size = conf->buffer_size_out;
req.size_in_usec = conf->size_in_usec_out;
req.period_size = conf.period_size_out;
req.buffer_size = conf.buffer_size_out;
req.size_in_usec = conf.size_in_usec_out;
req.override_mask =
(conf->period_size_out_overridden ? 1 : 0) |
(conf->buffer_size_out_overridden ? 2 : 0);
(conf.period_size_out_overridden ? 1 : 0) |
(conf.buffer_size_out_overridden ? 2 : 0);
if (alsa_open (0, &req, &obt, &handle, conf)) {
if (alsa_open (0, &req, &obt, &handle)) {
return -1;
}
@@ -832,7 +858,6 @@ static int alsa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -903,26 +928,25 @@ static int alsa_ctl_out (HWVoiceOut *hw, int cmd, ...)
return -1;
}
static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int alsa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ALSAVoiceIn *alsa = (ALSAVoiceIn *) hw;
struct alsa_params_req req;
struct alsa_params_obt obt;
snd_pcm_t *handle;
struct audsettings obt_as;
ALSAConf *conf = drv_opaque;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf->period_size_in;
req.buffer_size = conf->buffer_size_in;
req.size_in_usec = conf->size_in_usec_in;
req.period_size = conf.period_size_in;
req.buffer_size = conf.buffer_size_in;
req.size_in_usec = conf.size_in_usec_in;
req.override_mask =
(conf->period_size_in_overridden ? 1 : 0) |
(conf->buffer_size_in_overridden ? 2 : 0);
(conf.period_size_in_overridden ? 1 : 0) |
(conf.buffer_size_in_overridden ? 2 : 0);
if (alsa_open (1, &req, &obt, &handle, conf)) {
if (alsa_open (1, &req, &obt, &handle)) {
return -1;
}
@@ -943,7 +967,6 @@ static int alsa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
alsa->handle = handle;
alsa->pollhlp.conf = conf;
return 0;
}
@@ -999,10 +1022,14 @@ static int alsa_run_in (HWVoiceIn *hw)
dolog ("Failed to resume suspended input stream\n");
return 0;
}
trace_alsa_resume_in();
if (conf.verbose) {
dolog ("Resuming suspended input stream\n");
}
break;
default:
trace_alsa_no_frames(state);
if (conf.verbose) {
dolog ("No frames available and ALSA state is %d\n", state);
}
return 0;
}
}
@@ -1037,7 +1064,9 @@ static int alsa_run_in (HWVoiceIn *hw)
if (nread <= 0) {
switch (nread) {
case 0:
trace_alsa_read_zero(len);
if (conf.verbose) {
dolog ("Failed to read %ld frames (read zero)\n", len);
}
goto exit;
case -EPIPE:
@@ -1045,7 +1074,9 @@ static int alsa_run_in (HWVoiceIn *hw)
alsa_logerr (nread, "Failed to read %ld frames\n", len);
goto exit;
}
trace_alsa_xrun_in();
if (conf.verbose) {
dolog ("Recovering from capture xrun\n");
}
continue;
case -EAGAIN:
@@ -1117,85 +1148,82 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
return -1;
}
static ALSAConf glob_conf = {
.buffer_size_out = 4096,
.period_size_out = 1024,
.pcm_name_out = "default",
.pcm_name_in = "default",
};
static void *alsa_audio_init (void)
{
ALSAConf *conf = g_malloc(sizeof(ALSAConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void alsa_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option alsa_options[] = {
{
.name = "DAC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_out,
.valp = &conf.size_in_usec_out,
.descr = "DAC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "DAC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_out,
.valp = &conf.period_size_out,
.descr = "DAC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_out_overridden
.overriddenp = &conf.period_size_out_overridden
},
{
.name = "DAC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_out,
.valp = &conf.buffer_size_out,
.descr = "DAC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_out_overridden
.overriddenp = &conf.buffer_size_out_overridden
},
{
.name = "ADC_SIZE_IN_USEC",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.size_in_usec_in,
.valp = &conf.size_in_usec_in,
.descr =
"ADC period/buffer size in microseconds (otherwise in frames)"
},
{
.name = "ADC_PERIOD_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.period_size_in,
.valp = &conf.period_size_in,
.descr = "ADC period size (0 to go with system default)",
.overriddenp = &glob_conf.period_size_in_overridden
.overriddenp = &conf.period_size_in_overridden
},
{
.name = "ADC_BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_size_in,
.valp = &conf.buffer_size_in,
.descr = "ADC buffer size (0 to go with system default)",
.overriddenp = &glob_conf.buffer_size_in_overridden
.overriddenp = &conf.buffer_size_in_overridden
},
{
.name = "THRESHOLD",
.tag = AUD_OPT_INT,
.valp = &glob_conf.threshold,
.valp = &conf.threshold,
.descr = "(undocumented)"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_out,
.valp = &conf.pcm_name_out,
.descr = "DAC device name (for instance dmix)"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.pcm_name_in,
.valp = &conf.pcm_name_in,
.descr = "ADC device name"
},
{
.name = "VERBOSE",
.tag = AUD_OPT_BOOL,
.valp = &conf.verbose,
.descr = "Behave in a more verbose way"
},
{ /* End of list */ }
};


@@ -21,17 +21,16 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "audio.h"
#include "monitor/monitor.h"
#include "qemu/timer.h"
#include "sysemu/sysemu.h"
#include "qemu/cutils.h"
#define AUDIO_CAP "audio"
#include "audio_int.h"
/* #define DEBUG_PLIVE */
/* #define DEBUG_LIVE */
/* #define DEBUG_OUT */
/* #define DEBUG_CAPTURE */
@@ -67,6 +66,8 @@ static struct {
int hertz;
int64_t ticks;
} period;
int plive;
int log_to_monitor;
int try_poll_in;
int try_poll_out;
} conf = {
@@ -95,6 +96,8 @@ static struct {
},
.period = { .hertz = 100 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
.try_poll_out = 1,
};
@@ -328,11 +331,20 @@ static const char *audio_get_conf_str (const char *key,
void AUD_vlog (const char *cap, const char *fmt, va_list ap)
{
if (cap) {
fprintf(stderr, "%s: ", cap);
}
if (conf.log_to_monitor) {
if (cap) {
monitor_printf(default_mon, "%s: ", cap);
}
vfprintf(stderr, fmt, ap);
monitor_vprintf(default_mon, fmt, ap);
}
else {
if (cap) {
fprintf (stderr, "%s: ", cap);
}
vfprintf (stderr, fmt, ap);
}
}
void AUD_log (const char *cap, const char *fmt, ...)
@@ -1442,6 +1454,9 @@ static void audio_run_out (AudioState *s)
while (sw) {
sw1 = sw->entries.le_next;
if (!sw->active && !sw->callback.fn) {
#ifdef DEBUG_PLIVE
dolog ("Finishing with old voice\n");
#endif
audio_close_out (sw);
}
sw = sw1;
@@ -1633,6 +1648,18 @@ static struct audio_option audio_options[] = {
.valp = &conf.period.hertz,
.descr = "Timer period in HZ (0 - use lowest possible)"
},
{
.name = "PLIVE",
.tag = AUD_OPT_BOOL,
.valp = &conf.plive,
.descr = "(undocumented)"
},
{
.name = "LOG_TO_MONITOR",
.tag = AUD_OPT_BOOL,
.valp = &conf.log_to_monitor,
.descr = "Print logging messages to monitor instead of stderr"
},
{ /* End of list */ }
};
@@ -1808,6 +1835,9 @@ static void audio_init (void)
atexit (audio_atexit);
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
if (!s->ts) {
hw_error("Could not create audio timer\n");
}
audio_process_options ("AUDIO", audio_options);
@@ -1858,8 +1888,12 @@ static void audio_init (void)
if (!done) {
done = !audio_driver_init (s, &no_audio_driver);
assert(done);
dolog("warning: Using timer based audio emulation\n");
if (!done) {
hw_error("Could not initialize audio subsystem\n");
}
else {
dolog ("warning: Using timer based audio emulation\n");
}
}
if (conf.period.hertz <= 0) {
@@ -1870,7 +1904,8 @@ static void audio_init (void)
}
conf.period.ticks = 1;
} else {
conf.period.ticks = NANOSECONDS_PER_SECOND / conf.period.hertz;
conf.period.ticks =
muldiv64 (1, get_ticks_per_sec (), conf.period.hertz);
}
e = qemu_add_vm_change_state_handler (audio_vm_change_state_handler, s);
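
For reference, the two expressions in this hunk compute the same value: get_ticks_per_sec() returns 10^9 in QEMU, so muldiv64(1, get_ticks_per_sec(), conf.period.hertz) and NANOSECONDS_PER_SECOND / conf.period.hertz both give the audio timer period in nanoseconds. At the default of 100 Hz set earlier in this file, that is 1e9 / 100 = 10,000,000 ns, i.e. a 10 ms period.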


@@ -24,6 +24,7 @@
#ifndef QEMU_AUDIO_H
#define QEMU_AUDIO_H
#include "config-host.h"
#include "qemu/queue.h"
typedef void (*audio_callback_fn) (void *opaque, int avail);


@@ -156,13 +156,13 @@ struct audio_driver {
};
struct audio_pcm_ops {
int (*init_out)(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque);
int (*init_out)(HWVoiceOut *hw, struct audsettings *as);
void (*fini_out)(HWVoiceOut *hw);
int (*run_out) (HWVoiceOut *hw, int live);
int (*write) (SWVoiceOut *sw, void *buf, int size);
int (*ctl_out) (HWVoiceOut *hw, int cmd, ...);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as, void *drv_opaque);
int (*init_in) (HWVoiceIn *hw, struct audsettings *as);
void (*fini_in) (HWVoiceIn *hw);
int (*run_in) (HWVoiceIn *hw);
int (*read) (SWVoiceIn *sw, void *buf, int size);
@@ -206,11 +206,14 @@ extern struct audio_driver no_audio_driver;
extern struct audio_driver oss_audio_driver;
extern struct audio_driver sdl_audio_driver;
extern struct audio_driver wav_audio_driver;
extern struct audio_driver fmod_audio_driver;
extern struct audio_driver alsa_audio_driver;
extern struct audio_driver coreaudio_audio_driver;
extern struct audio_driver dsound_audio_driver;
extern struct audio_driver esd_audio_driver;
extern struct audio_driver pa_audio_driver;
extern struct audio_driver spice_audio_driver;
extern struct audio_driver winwave_audio_driver;
extern const struct mixeng_volume nominal_volume;
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);


@@ -1,4 +1,3 @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"


@@ -262,7 +262,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
#ifdef DAC
QLIST_INIT (&hw->cap_head);
#endif
if (glue (hw->pcm_ops->init_, TYPE) (hw, as, s->drv_opaque)) {
if (glue (hw->pcm_ops->init_, TYPE) (hw, as)) {
goto err0;
}
@@ -398,6 +398,10 @@ SW *glue (AUD_open_, TYPE) (
)
{
AudioState *s = &glob_audio_state;
#ifdef DAC
int live = 0;
SW *old_sw = NULL;
#endif
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
@@ -422,6 +426,29 @@ SW *glue (AUD_open_, TYPE) (
return sw;
}
#ifdef DAC
if (conf.plive && sw && (!sw->active && !sw->empty)) {
live = sw->total_hw_samples_mixed;
#ifdef DEBUG_PLIVE
dolog ("Replacing voice %s with %d live samples\n", SW_NAME (sw), live);
dolog ("Old %s freq %d, bits %d, channels %d\n",
SW_NAME (sw), sw->info.freq, sw->info.bits, sw->info.nchannels);
dolog ("New %s freq %d, bits %d, channels %d\n",
name,
as->freq,
(as->fmt == AUD_FMT_S16 || as->fmt == AUD_FMT_U16) ? 16 : 8,
as->nchannels);
#endif
if (live) {
old_sw = sw;
old_sw->callback.fn = NULL;
sw = NULL;
}
}
#endif
if (!glue (conf.fixed_, TYPE).enabled && sw) {
glue (AUD_close_, TYPE) (card, sw);
sw = NULL;
@@ -454,6 +481,20 @@ SW *glue (AUD_open_, TYPE) (
sw->callback.fn = callback_fn;
sw->callback.opaque = callback_opaque;
#ifdef DAC
if (live) {
int mixed =
(live << old_sw->info.shift)
* old_sw->info.bytes_per_second
/ sw->info.bytes_per_second;
#ifdef DEBUG_PLIVE
dolog ("Silence will be mixed %d\n", mixed);
#endif
sw->total_hw_samples_mixed += mixed;
}
#endif
#ifdef DEBUG_AUDIO
dolog ("%s\n", name);
audio_pcm_print_info ("hw", &sw->hw->info);


@@ -1,6 +1,5 @@
/* public domain */
#include "qemu/osdep.h"
#include "qemu-common.h"
#define AUDIO_CAP "win-int"


@@ -22,8 +22,8 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <CoreAudio/CoreAudio.h>
#include <string.h> /* strerror */
#include <pthread.h> /* pthread_X */
#include "qemu-common.h"
@@ -32,250 +32,28 @@
#define AUDIO_CAP "coreaudio"
#include "audio_int.h"
#ifndef MAC_OS_X_VERSION_10_6
#define MAC_OS_X_VERSION_10_6 1060
#endif
static int isAtexit;
typedef struct {
struct {
int buffer_frames;
int nbuffers;
} CoreaudioConf;
int isAtexit;
} conf = {
.buffer_frames = 512,
.nbuffers = 4,
.isAtexit = 0
};
typedef struct coreaudioVoiceOut {
HWVoiceOut hw;
pthread_mutex_t mutex;
int isAtexit;
AudioDeviceID outputDeviceID;
UInt32 audioDevicePropertyBufferFrameSize;
AudioStreamBasicDescription outputStreamBasicDescription;
AudioDeviceIOProcID ioprocid;
int live;
int decr;
int rpos;
} coreaudioVoiceOut;
#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_6
/* The APIs used here only become available from 10.6 */
static OSStatus coreaudio_get_voice(AudioDeviceID *id)
{
UInt32 size = sizeof(*id);
AudioObjectPropertyAddress addr = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(kAudioObjectSystemObject,
&addr,
0,
NULL,
&size,
id);
}
static OSStatus coreaudio_get_framesizerange(AudioDeviceID id,
AudioValueRange *framerange)
{
UInt32 size = sizeof(*framerange);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSizeRange,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
framerange);
}
static OSStatus coreaudio_get_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSize,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
framesize);
}
static OSStatus coreaudio_set_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyBufferFrameSize,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectSetPropertyData(id,
&addr,
0,
NULL,
size,
framesize);
}
static OSStatus coreaudio_get_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyStreamFormat,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
d);
}
static OSStatus coreaudio_set_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyStreamFormat,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectSetPropertyData(id,
&addr,
0,
NULL,
size,
d);
}
static OSStatus coreaudio_get_isrunning(AudioDeviceID id, UInt32 *result)
{
UInt32 size = sizeof(*result);
AudioObjectPropertyAddress addr = {
kAudioDevicePropertyDeviceIsRunning,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
return AudioObjectGetPropertyData(id,
&addr,
0,
NULL,
&size,
result);
}
#else
/* Legacy versions of functions using deprecated APIs */
static OSStatus coreaudio_get_voice(AudioDeviceID *id)
{
UInt32 size = sizeof(*id);
return AudioHardwareGetProperty(
kAudioHardwarePropertyDefaultOutputDevice,
&size,
id);
}
static OSStatus coreaudio_get_framesizerange(AudioDeviceID id,
AudioValueRange *framerange)
{
UInt32 size = sizeof(*framerange);
return AudioDeviceGetProperty(
id,
0,
0,
kAudioDevicePropertyBufferFrameSizeRange,
&size,
framerange);
}
static OSStatus coreaudio_get_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
return AudioDeviceGetProperty(
id,
0,
false,
kAudioDevicePropertyBufferFrameSize,
&size,
framesize);
}
static OSStatus coreaudio_set_framesize(AudioDeviceID id, UInt32 *framesize)
{
UInt32 size = sizeof(*framesize);
return AudioDeviceSetProperty(
id,
NULL,
0,
false,
kAudioDevicePropertyBufferFrameSize,
size,
framesize);
}
static OSStatus coreaudio_get_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
return AudioDeviceGetProperty(
id,
0,
false,
kAudioDevicePropertyStreamFormat,
&size,
d);
}
static OSStatus coreaudio_set_streamformat(AudioDeviceID id,
AudioStreamBasicDescription *d)
{
UInt32 size = sizeof(*d);
return AudioDeviceSetProperty(
id,
0,
0,
0,
kAudioDevicePropertyStreamFormat,
size,
d);
}
static OSStatus coreaudio_get_isrunning(AudioDeviceID id, UInt32 *result)
{
UInt32 size = sizeof(*result);
return AudioDeviceGetProperty(
id,
0,
0,
kAudioDevicePropertyDeviceIsRunning,
&size,
result);
}
#endif
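
One side of this hunk carries a set of thin wrappers that select, at compile time, between the AudioObject* property API (Mac OS X 10.6 and newer) and the deprecated pre-10.6 AudioHardware/AudioDevice calls; the other side calls the old API directly. Collapsed to a single wrapper, the version-gating idiom looks roughly like this (illustrative sketch, not part of the diff):

#include <CoreAudio/CoreAudio.h>

#ifndef MAC_OS_X_VERSION_10_6
#define MAC_OS_X_VERSION_10_6 1060
#endif

/* Fetch the default output device with whichever API the SDK provides. */
static OSStatus get_default_output_device(AudioDeviceID *id)
{
    UInt32 size = sizeof(*id);
#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_6
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    return AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                      0, NULL, &size, id);
#else
    return AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                    &size, id);
#endif
}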
static void coreaudio_logstatus (OSStatus status)
{
const char *str = "BUG";
@@ -370,7 +148,10 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
{
OSStatus status;
UInt32 result = 0;
status = coreaudio_get_isrunning(outputDeviceID, &result);
UInt32 propertySize = sizeof(outputDeviceID);
status = AudioDeviceGetProperty(
outputDeviceID, 0, 0,
kAudioDevicePropertyDeviceIsRunning, &propertySize, &result);
if (status != kAudioHardwareNoError) {
coreaudio_logerr(status,
"Could not determine whether Device is playing\n");
@@ -380,7 +161,7 @@ static inline UInt32 isPlaying (AudioDeviceID outputDeviceID)
static void coreaudio_atexit (void)
{
isAtexit = 1;
conf.isAtexit = 1;
}
static int coreaudio_lock (coreaudioVoiceOut *core, const char *fn_name)
@@ -506,15 +287,14 @@ static int coreaudio_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int coreaudio_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSStatus status;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
UInt32 propertySize;
int err;
const char *typ = "playback";
AudioValueRange frameRange;
CoreaudioConf *conf = drv_opaque;
/* create mutex */
err = pthread_mutex_init(&core->mutex, NULL);
@@ -525,7 +305,12 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
audio_pcm_init_info (&hw->info, as);
status = coreaudio_get_voice(&core->outputDeviceID);
/* open default output device */
propertySize = sizeof(core->outputDeviceID);
status = AudioHardwareGetProperty(
kAudioHardwarePropertyDefaultOutputDevice,
&propertySize,
&core->outputDeviceID);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get default output Device\n");
@@ -537,29 +322,42 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* get minimum and maximum buffer frame sizes */
status = coreaudio_get_framesizerange(core->outputDeviceID,
&frameRange);
propertySize = sizeof(frameRange);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
0,
kAudioDevicePropertyBufferFrameSizeRange,
&propertySize,
&frameRange);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get device buffer frame range\n");
return -1;
}
if (frameRange.mMinimum > conf->buffer_frames) {
if (frameRange.mMinimum > conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMinimum;
dolog ("warning: Upsizing Buffer Frames to %f\n", frameRange.mMinimum);
}
else if (frameRange.mMaximum < conf->buffer_frames) {
else if (frameRange.mMaximum < conf.buffer_frames) {
core->audioDevicePropertyBufferFrameSize = (UInt32) frameRange.mMaximum;
dolog ("warning: Downsizing Buffer Frames to %f\n", frameRange.mMaximum);
}
else {
core->audioDevicePropertyBufferFrameSize = conf->buffer_frames;
core->audioDevicePropertyBufferFrameSize = conf.buffer_frames;
}
/* set Buffer Frame Size */
status = coreaudio_set_framesize(core->outputDeviceID,
&core->audioDevicePropertyBufferFrameSize);
propertySize = sizeof(core->audioDevicePropertyBufferFrameSize);
status = AudioDeviceSetProperty(
core->outputDeviceID,
NULL,
0,
false,
kAudioDevicePropertyBufferFrameSize,
propertySize,
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not set device buffer frame size %" PRIu32 "\n",
@@ -568,18 +366,30 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* get Buffer Frame Size */
status = coreaudio_get_framesize(core->outputDeviceID,
&core->audioDevicePropertyBufferFrameSize);
propertySize = sizeof(core->audioDevicePropertyBufferFrameSize);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
false,
kAudioDevicePropertyBufferFrameSize,
&propertySize,
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get device buffer frame size\n");
return -1;
}
hw->samples = conf->nbuffers * core->audioDevicePropertyBufferFrameSize;
hw->samples = conf.nbuffers * core->audioDevicePropertyBufferFrameSize;
/* get StreamFormat */
status = coreaudio_get_streamformat(core->outputDeviceID,
&core->outputStreamBasicDescription);
propertySize = sizeof(core->outputStreamBasicDescription);
status = AudioDeviceGetProperty(
core->outputDeviceID,
0,
false,
kAudioDevicePropertyStreamFormat,
&propertySize,
&core->outputStreamBasicDescription);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not get Device Stream properties\n");
@@ -589,8 +399,15 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
/* set Samplerate */
core->outputStreamBasicDescription.mSampleRate = (Float64) as->freq;
status = coreaudio_set_streamformat(core->outputDeviceID,
&core->outputStreamBasicDescription);
propertySize = sizeof(core->outputStreamBasicDescription);
status = AudioDeviceSetProperty(
core->outputDeviceID,
0,
0,
0,
kAudioDevicePropertyStreamFormat,
propertySize,
&core->outputStreamBasicDescription);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not set samplerate %d\n",
as->freq);
@@ -599,12 +416,8 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
}
/* set Callback */
core->ioprocid = NULL;
status = AudioDeviceCreateIOProcID(core->outputDeviceID,
audioDeviceIOProc,
hw,
&core->ioprocid);
if (status != kAudioHardwareNoError || core->ioprocid == NULL) {
status = AudioDeviceAddIOProc(core->outputDeviceID, audioDeviceIOProc, hw);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not set IOProc\n");
core->outputDeviceID = kAudioDeviceUnknown;
return -1;
@@ -612,10 +425,10 @@ static int coreaudio_init_out(HWVoiceOut *hw, struct audsettings *as,
/* start Playback */
if (!isPlaying(core->outputDeviceID)) {
status = AudioDeviceStart(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStart(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ, "Could not start playback\n");
AudioDeviceDestroyIOProcID(core->outputDeviceID, core->ioprocid);
AudioDeviceRemoveIOProc(core->outputDeviceID, audioDeviceIOProc);
core->outputDeviceID = kAudioDeviceUnknown;
return -1;
}
@@ -630,18 +443,18 @@ static void coreaudio_fini_out (HWVoiceOut *hw)
int err;
coreaudioVoiceOut *core = (coreaudioVoiceOut *) hw;
if (!isAtexit) {
if (!conf.isAtexit) {
/* stop playback */
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not stop playback\n");
}
}
/* remove callback */
status = AudioDeviceDestroyIOProcID(core->outputDeviceID,
core->ioprocid);
status = AudioDeviceRemoveIOProc(core->outputDeviceID,
audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not remove IOProc\n");
}
@@ -664,7 +477,7 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_ENABLE:
/* start playback */
if (!isPlaying(core->outputDeviceID)) {
status = AudioDeviceStart(core->outputDeviceID, core->ioprocid);
status = AudioDeviceStart(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not resume playback\n");
}
@@ -673,10 +486,9 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
case VOICE_DISABLE:
/* stop playback */
if (!isAtexit) {
if (!conf.isAtexit) {
if (isPlaying(core->outputDeviceID)) {
status = AudioDeviceStop(core->outputDeviceID,
core->ioprocid);
status = AudioDeviceStop(core->outputDeviceID, audioDeviceIOProc);
if (status != kAudioHardwareNoError) {
coreaudio_logerr (status, "Could not pause playback\n");
}
@@ -687,36 +499,28 @@ static int coreaudio_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static CoreaudioConf glob_conf = {
.buffer_frames = 512,
.nbuffers = 4,
};
static void *coreaudio_audio_init (void)
{
CoreaudioConf *conf = g_malloc(sizeof(CoreaudioConf));
*conf = glob_conf;
atexit(coreaudio_atexit);
return conf;
return &coreaudio_audio_init;
}
static void coreaudio_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option coreaudio_options[] = {
{
.name = "BUFFER_SIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.buffer_frames,
.valp = &conf.buffer_frames,
.descr = "Size of the buffer in frames"
},
{
.name = "BUFFER_COUNT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nbuffers,
.valp = &conf.nbuffers,
.descr = "Number of buffers"
},
{ /* End of list */ }


@@ -67,11 +67,11 @@ static int glue (dsound_lock_, TYPE) (
LPVOID *p2p,
DWORD *blen1p,
DWORD *blen2p,
int entire,
dsound *s
int entire
)
{
HRESULT hr;
int i;
LPVOID p1 = NULL, p2 = NULL;
DWORD blen1 = 0, blen2 = 0;
DWORD flag;
@@ -81,18 +81,37 @@ static int glue (dsound_lock_, TYPE) (
#else
flag = entire ? DSBLOCK_ENTIREBUFFER : 0;
#endif
hr = glue(IFACE, _Lock)(buf, pos, len, &p1, &blen1, &p2, &blen2, flag);
for (i = 0; i < conf.lock_retries; ++i) {
hr = glue (IFACE, _Lock) (
buf,
pos,
len,
&p1,
&blen1,
&p2,
&blen2,
flag
);
if (FAILED (hr)) {
if (FAILED (hr)) {
#ifndef DSBTYPE_IN
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf, s)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
if (hr == DSERR_BUFFERLOST) {
if (glue (dsound_restore_, TYPE) (buf)) {
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
continue;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
goto fail;
}
#endif
dsound_logerr (hr, "Could not lock " NAME "\n");
break;
}
if (i == conf.lock_retries) {
dolog ("%d attempts to lock " NAME " failed\n", i);
goto fail;
}
@@ -155,19 +174,16 @@ static void dsound_fini_out (HWVoiceOut *hw)
}
#ifdef DSBTYPE_IN
static int dsound_init_in(HWVoiceIn *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_in (HWVoiceIn *hw, struct audsettings *as)
#else
static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int dsound_init_out (HWVoiceOut *hw, struct audsettings *as)
#endif
{
int err;
HRESULT hr;
dsound *s = drv_opaque;
dsound *s = &glob_dsound;
WAVEFORMATEX wfx;
struct audsettings obt_as;
DSoundConf *conf = &s->conf;
#ifdef DSBTYPE_IN
const char *typ = "ADC";
DSoundVoiceIn *ds = (DSoundVoiceIn *) hw;
@@ -194,7 +210,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
bd.dwSize = sizeof (bd);
bd.lpwfxFormat = &wfx;
#ifdef DSBTYPE_IN
bd.dwBufferBytes = conf->bufsize_in;
bd.dwBufferBytes = conf.bufsize_in;
hr = IDirectSoundCapture_CreateCaptureBuffer (
s->dsound_capture,
&bd,
@@ -203,7 +219,7 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
#else
bd.dwFlags = DSBCAPS_STICKYFOCUS | DSBCAPS_GETCURRENTPOSITION2;
bd.dwBufferBytes = conf->bufsize_out;
bd.dwBufferBytes = conf.bufsize_out;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&bd,
@@ -253,7 +269,6 @@ static int dsound_init_out(HWVoiceOut *hw, struct audsettings *as,
);
}
hw->samples = bc.dwBufferBytes >> hw->info.shift;
ds->s = s;
#ifdef DEBUG_DSOUND
dolog ("caps %ld, desc %ld\n",


@@ -26,7 +26,6 @@
* SEAL 1.07 by Carlos 'pel' Hasan was used as documentation
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -42,25 +41,42 @@
/* #define DEBUG_DSOUND */
typedef struct {
static struct {
int lock_retries;
int restore_retries;
int getstatus_retries;
int set_primary;
int bufsize_in;
int bufsize_out;
struct audsettings settings;
int latency_millis;
} DSoundConf;
} conf = {
.lock_retries = 1,
.restore_retries = 1,
.getstatus_retries = 1,
.set_primary = 0,
.bufsize_in = 16384,
.bufsize_out = 16384,
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.latency_millis = 10
};
typedef struct {
LPDIRECTSOUND dsound;
LPDIRECTSOUNDCAPTURE dsound_capture;
LPDIRECTSOUNDBUFFER dsound_primary_buffer;
struct audsettings settings;
DSoundConf conf;
} dsound;
static dsound glob_dsound;
typedef struct {
HWVoiceOut hw;
LPDIRECTSOUNDBUFFER dsound_buffer;
DWORD old_pos;
int first_time;
dsound *s;
#ifdef DEBUG_DSOUND
DWORD old_ppos;
DWORD played;
@@ -72,7 +88,6 @@ typedef struct {
HWVoiceIn hw;
int first_time;
LPDIRECTSOUNDCAPTUREBUFFER dsound_capture_buffer;
dsound *s;
} DSoundVoiceIn;
static void dsound_log_hresult (HRESULT hr)
@@ -266,17 +281,29 @@ static void print_wave_format (WAVEFORMATEX *wfx)
}
#endif
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_Restore (dsb);
for (i = 0; i < conf.restore_retries; ++i) {
hr = IDirectSoundBuffer_Restore (dsb);
if (hr != DS_OK) {
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
switch (hr) {
case DS_OK:
return 0;
case DSERR_BUFFERLOST:
continue;
default:
dsound_logerr (hr, "Could not restore playback buffer\n");
return -1;
}
}
return 0;
dolog ("%d attempts to restore playback buffer failed\n", i);
return -1;
}
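
The loop introduced here follows the usual DirectSound recovery pattern: a buffer whose memory has been lost to another application is restored and the operation retried a bounded number of times. A compressed sketch of that pattern (not part of the diff; the retry count is passed in rather than read from conf):

#include <windows.h>
#include <dsound.h>

/* Try to reacquire a lost buffer, giving up after 'retries' attempts. */
static int restore_with_retries(LPDIRECTSOUNDBUFFER dsb, int retries)
{
    int i;

    for (i = 0; i < retries; ++i) {
        HRESULT hr = IDirectSoundBuffer_Restore(dsb);

        if (hr == DS_OK) {
            return 0;              /* buffer memory is valid again */
        }
        if (hr != DSERR_BUFFERLOST) {
            return -1;             /* unrecoverable error */
        }
        /* DSERR_BUFFERLOST: still lost, try again. */
    }
    return -1;
}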
#include "dsound_template.h"
@@ -284,20 +311,25 @@ static int dsound_restore_out (LPDIRECTSOUNDBUFFER dsb, dsound *s)
#include "dsound_template.h"
#undef DSBTYPE_IN
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp,
dsound *s)
static int dsound_get_status_out (LPDIRECTSOUNDBUFFER dsb, DWORD *statusp)
{
HRESULT hr;
int i;
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
for (i = 0; i < conf.getstatus_retries; ++i) {
hr = IDirectSoundBuffer_GetStatus (dsb, statusp);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get playback buffer status\n");
return -1;
}
if (*statusp & DSERR_BUFFERLOST) {
dsound_restore_out(dsb, s);
return -1;
if (*statusp & DSERR_BUFFERLOST) {
if (dsound_restore_out (dsb)) {
return -1;
}
continue;
}
break;
}
return 0;
@@ -344,8 +376,7 @@ static void dsound_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
hw->rpos = pos % hw->samples;
}
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound *s)
static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb)
{
int err;
LPVOID p1, p2;
@@ -358,8 +389,7 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
hw->samples << hw->info.shift,
&p1, &p2,
&blen1, &blen2,
1,
s
1
);
if (err) {
return;
@@ -385,9 +415,25 @@ static void dsound_clear_sample (HWVoiceOut *hw, LPDIRECTSOUNDBUFFER dsb,
dsound_unlock_out (dsb, p1, p2, blen1, blen2);
}
static int dsound_open (dsound *s)
static void dsound_close (dsound *s)
{
HRESULT hr;
if (s->dsound_primary_buffer) {
hr = IDirectSoundBuffer_Release (s->dsound_primary_buffer);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release primary buffer\n");
}
s->dsound_primary_buffer = NULL;
}
}
static int dsound_open (dsound *s)
{
int err;
HRESULT hr;
WAVEFORMATEX wfx;
DSBUFFERDESC dsbd;
HWND hwnd;
hwnd = GetForegroundWindow ();
@@ -403,7 +449,63 @@ static int dsound_open (dsound *s)
return -1;
}
if (!conf.set_primary) {
return 0;
}
err = waveformat_from_audio_settings (&wfx, &conf.settings);
if (err) {
return -1;
}
memset (&dsbd, 0, sizeof (dsbd));
dsbd.dwSize = sizeof (dsbd);
dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;
dsbd.dwBufferBytes = 0;
dsbd.lpwfxFormat = NULL;
hr = IDirectSound_CreateSoundBuffer (
s->dsound,
&dsbd,
&s->dsound_primary_buffer,
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create primary playback buffer\n");
return -1;
}
hr = IDirectSoundBuffer_SetFormat (s->dsound_primary_buffer, &wfx);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not set primary playback buffer format\n");
}
hr = IDirectSoundBuffer_GetFormat (
s->dsound_primary_buffer,
&wfx,
sizeof (wfx),
NULL
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not get primary playback buffer format\n");
goto fail0;
}
#ifdef DEBUG_DSOUND
dolog ("Primary\n");
print_wave_format (&wfx);
#endif
err = waveformat_to_audio_settings (&wfx, &s->settings);
if (err) {
goto fail0;
}
return 0;
fail0:
dsound_close (s);
return -1;
}
static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
@@ -412,7 +514,6 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
DWORD status;
DSoundVoiceOut *ds = (DSoundVoiceOut *) hw;
LPDIRECTSOUNDBUFFER dsb = ds->dsound_buffer;
dsound *s = ds->s;
if (!dsb) {
dolog ("Attempt to control voice without a buffer\n");
@@ -421,7 +522,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
switch (cmd) {
case VOICE_ENABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -430,7 +531,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
dsound_clear_sample (hw, dsb, s);
dsound_clear_sample (hw, dsb);
hr = IDirectSoundBuffer_Play (dsb, 0, 0, DSBPLAY_LOOPING);
if (FAILED (hr)) {
@@ -440,7 +541,7 @@ static int dsound_ctl_out (HWVoiceOut *hw, int cmd, ...)
break;
case VOICE_DISABLE:
if (dsound_get_status_out (dsb, &status, s)) {
if (dsound_get_status_out (dsb, &status)) {
return -1;
}
@@ -477,8 +578,6 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
DWORD wpos, ppos, old_pos;
LPVOID p1, p2;
int bufsize;
dsound *s = ds->s;
DSoundConf *conf = &s->conf;
if (!dsb) {
dolog ("Attempt to run empty with playback buffer\n");
@@ -501,14 +600,14 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len = live << hwshift;
if (ds->first_time) {
if (conf->latency_millis) {
if (conf.latency_millis) {
DWORD cur_blat;
cur_blat = audio_ring_dist (wpos, ppos, bufsize);
ds->first_time = 0;
old_pos = wpos;
old_pos +=
millis_to_bytes (&hw->info, conf->latency_millis) - cur_blat;
millis_to_bytes (&hw->info, conf.latency_millis) - cur_blat;
old_pos %= bufsize;
old_pos &= ~hw->info.align;
}
@@ -564,8 +663,7 @@ static int dsound_run_out (HWVoiceOut *hw, int live)
len,
&p1, &p2,
&blen1, &blen2,
0,
s
0
);
if (err) {
return 0;
@@ -668,7 +766,6 @@ static int dsound_run_in (HWVoiceIn *hw)
DWORD cpos, rpos;
LPVOID p1, p2;
int hwshift;
dsound *s = ds->s;
if (!dscb) {
dolog ("Attempt to run without capture buffer\n");
@@ -723,8 +820,7 @@ static int dsound_run_in (HWVoiceIn *hw)
&p2,
&blen1,
&blen2,
0,
s
0
);
if (err) {
return 0;
@@ -747,19 +843,12 @@ static int dsound_run_in (HWVoiceIn *hw)
return decr;
}
static DSoundConf glob_conf = {
.bufsize_in = 16384,
.bufsize_out = 16384,
.latency_millis = 10
};
static void dsound_audio_fini (void *opaque)
{
HRESULT hr;
dsound *s = opaque;
if (!s->dsound) {
g_free(s);
return;
}
@@ -770,7 +859,6 @@ static void dsound_audio_fini (void *opaque)
s->dsound = NULL;
if (!s->dsound_capture) {
g_free(s);
return;
}
@@ -779,21 +867,17 @@ static void dsound_audio_fini (void *opaque)
dsound_logerr (hr, "Could not release DirectSoundCapture\n");
}
s->dsound_capture = NULL;
g_free(s);
}
static void *dsound_audio_init (void)
{
int err;
HRESULT hr;
dsound *s = g_malloc0(sizeof(dsound));
dsound *s = &glob_dsound;
s->conf = glob_conf;
hr = CoInitialize (NULL);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not initialize COM\n");
g_free(s);
return NULL;
}
@@ -806,7 +890,6 @@ static void *dsound_audio_init (void)
);
if (FAILED (hr)) {
dsound_logerr (hr, "Could not create DirectSound instance\n");
g_free(s);
return NULL;
}
@@ -818,7 +901,7 @@ static void *dsound_audio_init (void)
if (FAILED (hr)) {
dsound_logerr (hr, "Could not release DirectSound\n");
}
g_free(s);
s->dsound = NULL;
return NULL;
}
@@ -855,22 +938,64 @@ static void *dsound_audio_init (void)
}
static struct audio_option dsound_options[] = {
{
.name = "LOCK_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.lock_retries,
.descr = "Number of times to attempt locking the buffer"
},
{
.name = "RESTOURE_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.restore_retries,
.descr = "Number of times to attempt restoring the buffer"
},
{
.name = "GETSTATUS_RETRIES",
.tag = AUD_OPT_INT,
.valp = &conf.getstatus_retries,
.descr = "Number of times to attempt getting status of the buffer"
},
{
.name = "SET_PRIMARY",
.tag = AUD_OPT_BOOL,
.valp = &conf.set_primary,
.descr = "Set the parameters of primary buffer"
},
{
.name = "LATENCY_MILLIS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.latency_millis,
.valp = &conf.latency_millis,
.descr = "(undocumented)"
},
{
.name = "PRIMARY_FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.settings.freq,
.descr = "Primary buffer frequency"
},
{
.name = "PRIMARY_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.settings.nchannels,
.descr = "Primary buffer number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PRIMARY_FMT",
.tag = AUD_OPT_FMT,
.valp = &conf.settings.fmt,
.descr = "Primary buffer format"
},
{
.name = "BUFSIZE_OUT",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_out,
.valp = &conf.bufsize_out,
.descr = "(undocumented)"
},
{
.name = "BUFSIZE_IN",
.tag = AUD_OPT_INT,
.valp = &glob_conf.bufsize_in,
.valp = &conf.bufsize_in,
.descr = "(undocumented)"
},
{ /* End of list */ }

557
audio/esdaudio.c Normal file

@@ -0,0 +1,557 @@
/*
* QEMU ESD audio driver
*
* Copyright (c) 2006 Frederick Reeve (brushed up by malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <esd.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "esd"
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
HWVoiceOut hw;
int done;
int live;
int decr;
int rpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceOut;
typedef struct {
HWVoiceIn hw;
int done;
int dead;
int incr;
int wpos;
void *pcm_buf;
int fd;
struct audio_pt pt;
} ESDVoiceIn;
static struct {
int samples;
int divisor;
char *dac_host;
char *adc_host;
} conf = {
.samples = 1024,
.divisor = 2,
};
static void GCC_FMT_ATTR (2, 3) qesd_logerr (int err, const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n", strerror (err));
}
/* playback */
static void *qesd_thread_out (void *arg)
{
ESDVoiceOut *esd = arg;
HWVoiceOut *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int decr, to_mix, rpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->live > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
decr = to_mix = esd->live;
rpos = hw->rpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_mix) {
ssize_t written;
int chunk = audio_MIN (to_mix, hw->samples - rpos);
struct st_sample *src = hw->mix_buf + rpos;
hw->clip (esd->pcm_buf, src, chunk);
again:
written = write (esd->fd, esd->pcm_buf, chunk << hw->info.shift);
if (written == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "write failed\n");
return NULL;
}
if (written != chunk << hw->info.shift) {
int wsamples = written >> hw->info.shift;
int wbytes = wsamples << hw->info.shift;
if (wbytes != written) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
wbytes, written, hw->info.align + 1);
}
to_mix -= wsamples;
rpos = (rpos + wsamples) % hw->samples;
break;
}
rpos = (rpos + chunk) % hw->samples;
to_mix -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->rpos = rpos;
esd->live -= decr;
esd->decr += decr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
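
qesd_thread_out() above is the consumer half of a threshold-gated producer/consumer handshake: the audio layer accumulates mixed samples in esd->live and signals, while the writer thread sleeps until live exceeds hw->samples / conf.divisor. A minimal sketch of that handshake (not part of the file; it assumes audio_pt_lock/audio_pt_wait/audio_pt_unlock_and_signal are thin wrappers over a pthread mutex and condition variable, which is how they are used here):

#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             live;    /* samples queued by the device emulation */
    bool            done;    /* shutdown flag                          */
} Stream;

/* Writer thread: block until enough samples are queued, or shutdown.
 * Returns -1 on shutdown, otherwise the number of samples to drain. */
static int wait_for_samples(Stream *s, int threshold)
{
    int live;

    pthread_mutex_lock(&s->lock);
    while (!s->done && s->live <= threshold) {
        pthread_cond_wait(&s->cond, &s->lock);
    }
    live = s->done ? -1 : s->live;
    pthread_mutex_unlock(&s->lock);
    return live;
}

/* Emulation side: publish newly mixed samples and wake the writer. */
static void push_samples(Stream *s, int n)
{
    pthread_mutex_lock(&s->lock);
    s->live += n;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}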
static int qesd_run_out (HWVoiceOut *hw, int live)
{
int decr;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
decr = audio_MIN (live, esd->decr);
esd->decr -= decr;
esd->live = live - decr;
hw->rpos = esd->rpos;
if (esd->live > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return decr;
}
static int qesd_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
{
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_PLAY;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", as->fmt);
goto deffmt;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_play_stream (esdfmt, as->freq, conf.dac_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_play_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_out, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_out (HWVoiceOut *hw)
{
void *ret;
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* capture */
static void *qesd_thread_in (void *arg)
{
ESDVoiceIn *esd = arg;
HWVoiceIn *hw = &esd->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
for (;;) {
int incr, to_grab, wpos;
for (;;) {
if (esd->done) {
goto exit;
}
if (esd->dead > threshold) {
break;
}
if (audio_pt_wait (&esd->pt, AUDIO_FUNC)) {
goto exit;
}
}
incr = to_grab = esd->dead;
wpos = hw->wpos;
if (audio_pt_unlock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
while (to_grab) {
ssize_t nread;
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (esd->pcm_buf, wpos);
again:
nread = read (esd->fd, buf, chunk << hw->info.shift);
if (nread == -1) {
if (errno == EINTR || errno == EAGAIN) {
goto again;
}
qesd_logerr (errno, "read failed\n");
return NULL;
}
if (nread != chunk << hw->info.shift) {
int rsamples = nread >> hw->info.shift;
int rbytes = rsamples << hw->info.shift;
if (rbytes != nread) {
dolog ("warning: Misaligned write %d (requested %zd), "
"alignment %d\n",
rbytes, nread, hw->info.align + 1);
}
to_grab -= rsamples;
wpos = (wpos + rsamples) % hw->samples;
break;
}
hw->conv (hw->conv_buf + wpos, buf, nread >> hw->info.shift);
wpos = (wpos + chunk) % hw->samples;
to_grab -= chunk;
}
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return NULL;
}
esd->wpos = wpos;
esd->dead -= incr;
esd->incr += incr;
}
exit:
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
return NULL;
}
static int qesd_run_in (HWVoiceIn *hw)
{
int live, incr, dead;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
if (audio_pt_lock (&esd->pt, AUDIO_FUNC)) {
return 0;
}
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
incr = audio_MIN (dead, esd->incr);
esd->incr -= incr;
esd->dead = dead - incr;
hw->wpos = esd->wpos;
if (esd->dead > 0) {
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
}
else {
audio_pt_unlock (&esd->pt, AUDIO_FUNC);
}
return incr;
}
static int qesd_read (SWVoiceIn *sw, void *buf, int len)
{
return audio_pcm_sw_read (sw, buf, len);
}
static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
{
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_RECORD;
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
case AUD_FMT_S8:
case AUD_FMT_U8:
esdfmt |= ESD_BITS8;
obt_as.fmt = AUD_FMT_U8;
break;
case AUD_FMT_S16:
case AUD_FMT_U16:
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
esdfmt |= ESD_BITS16;
obt_as.fmt = AUD_FMT_S16;
break;
}
obt_as.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.samples;
esd->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
if (!esd->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
return -1;
}
esd->fd = esd_record_stream (esdfmt, as->freq, conf.adc_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_record_stream failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_in, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail2;
}
return 0;
fail2:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
static void qesd_fini_in (HWVoiceIn *hw)
{
void *ret;
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
audio_pt_lock (&esd->pt, AUDIO_FUNC);
esd->done = 1;
audio_pt_unlock_and_signal (&esd->pt, AUDIO_FUNC);
audio_pt_join (&esd->pt, &ret, AUDIO_FUNC);
if (esd->fd >= 0) {
if (close (esd->fd)) {
qesd_logerr (errno, "failed to close esd socket\n");
}
esd->fd = -1;
}
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
static int qesd_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
(void) hw;
(void) cmd;
return 0;
}
/* common */
static void *qesd_audio_init (void)
{
return &conf;
}
static void qesd_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("esd_fini");
}
struct audio_option qesd_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "DIVISOR",
.tag = AUD_OPT_INT,
.valp = &conf.divisor,
.descr = "threshold divisor"
},
{
.name = "DAC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.dac_host,
.descr = "playback host"
},
{
.name = "ADC_HOST",
.tag = AUD_OPT_STR,
.valp = &conf.adc_host,
.descr = "capture host"
},
{ /* End of list */ }
};
static struct audio_pcm_ops qesd_pcm_ops = {
.init_out = qesd_init_out,
.fini_out = qesd_fini_out,
.run_out = qesd_run_out,
.write = qesd_write,
.ctl_out = qesd_ctl_out,
.init_in = qesd_init_in,
.fini_in = qesd_fini_in,
.run_in = qesd_run_in,
.read = qesd_read,
.ctl_in = qesd_ctl_in,
};
struct audio_driver esd_audio_driver = {
.name = "esd",
.descr = "http://en.wikipedia.org/wiki/Esound",
.options = qesd_options,
.init = qesd_audio_init,
.fini = qesd_audio_fini,
.pcm_ops = &qesd_pcm_ops,
.can_be_default = 0,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (ESDVoiceOut),
.voice_size_in = sizeof (ESDVoiceIn)
};

audio/fmodaudio.c (new file, 685 lines)

@@ -0,0 +1,685 @@
/*
* QEMU FMOD audio driver
*
* Copyright (c) 2004-2005 Vassili Karpov (malc)
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <fmod.h>
#include <fmod_errors.h>
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "fmod"
#include "audio_int.h"
typedef struct FMODVoiceOut {
HWVoiceOut hw;
unsigned int old_pos;
FSOUND_SAMPLE *fmod_sample;
int channel;
} FMODVoiceOut;
typedef struct FMODVoiceIn {
HWVoiceIn hw;
FSOUND_SAMPLE *fmod_sample;
} FMODVoiceIn;
static struct {
const char *drvname;
int nb_samples;
int freq;
int nb_channels;
int bufsize;
int broken_adc;
} conf = {
.nb_samples = 2048 * 2,
.freq = 44100,
.nb_channels = 2,
};
static void GCC_FMT_ATTR (1, 2) fmod_logerr (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static void GCC_FMT_ATTR (2, 3) fmod_logerr2 (
const char *typ,
const char *fmt,
...
)
{
va_list ap;
AUD_log (AUDIO_CAP, "Could not initialize %s\n", typ);
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (AUDIO_CAP, "Reason: %s\n",
FMOD_ErrorString (FSOUND_GetError ()));
}
static int fmod_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static void fmod_clear_sample (FMODVoiceOut *fmd)
{
HWVoiceOut *hw = &fmd->hw;
int status;
void *p1 = 0, *p2 = 0;
unsigned int len1 = 0, len2 = 0;
status = FSOUND_Sample_Lock (
fmd->fmod_sample,
0,
hw->samples << hw->info.shift,
&p1,
&p2,
&len1,
&len2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return;
}
if ((len1 & hw->info.align) || (len2 & hw->info.align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
len1, len2, hw->info.align + 1);
goto fail;
}
if ((len1 + len2) - (hw->samples << hw->info.shift)) {
dolog ("Lock returned incomplete length %d, %d\n",
len1 + len2, hw->samples << hw->info.shift);
goto fail;
}
audio_pcm_info_clear_buf (&hw->info, p1, hw->samples);
fail:
status = FSOUND_Sample_Unlock (fmd->fmod_sample, p1, p2, len1, len2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
}
}
static void fmod_write_sample (HWVoiceOut *hw, uint8_t *dst, int dst_len)
{
int src_len1 = dst_len;
int src_len2 = 0;
int pos = hw->rpos + dst_len;
struct st_sample *src1 = hw->mix_buf + hw->rpos;
struct st_sample *src2 = NULL;
if (pos > hw->samples) {
src_len1 = hw->samples - hw->rpos;
src2 = hw->mix_buf;
src_len2 = dst_len - src_len1;
pos = src_len2;
}
if (src_len1) {
hw->clip (dst, src1, src_len1);
}
if (src_len2) {
dst = advance (dst, src_len1 << hw->info.shift);
hw->clip (dst, src2, src_len2);
}
hw->rpos = pos % hw->samples;
}
static int fmod_unlock_sample (FSOUND_SAMPLE *sample, void *p1, void *p2,
unsigned int blen1, unsigned int blen2)
{
int status = FSOUND_Sample_Unlock (sample, p1, p2, blen1, blen2);
if (!status) {
fmod_logerr ("Failed to unlock sample\n");
return -1;
}
return 0;
}
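/* Thin wrapper around FSOUND_Sample_Lock: converts sample offsets to byte
 * offsets via info->shift and sanity-checks the returned regions for
 * alignment before the caller writes into them. */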
static int fmod_lock_sample (
FSOUND_SAMPLE *sample,
struct audio_pcm_info *info,
int pos,
int len,
void **p1,
void **p2,
unsigned int *blen1,
unsigned int *blen2
)
{
int status;
status = FSOUND_Sample_Lock (
sample,
pos << info->shift,
len << info->shift,
p1,
p2,
blen1,
blen2
);
if (!status) {
fmod_logerr ("Failed to lock sample\n");
return -1;
}
if ((*blen1 & info->align) || (*blen2 & info->align)) {
dolog ("Lock returned misaligned length %d, %d, alignment %d\n",
*blen1, *blen2, info->align + 1);
fmod_unlock_sample (sample, *p1, *p2, *blen1, *blen2);
*p1 = NULL - 1;
*p2 = NULL - 1;
*blen1 = ~0U;
*blen2 = ~0U;
return -1;
}
if (!*p1 && *blen1) {
dolog ("warning: !p1 && blen1=%d\n", *blen1);
*blen1 = 0;
}
if (!p2 && *blen2) {
dolog ("warning: !p2 && blen2=%d\n", *blen2);
*blen2 = 0;
}
return 0;
}
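/* Playback chases FMOD's hardware position: only the region between
 * fmd->old_pos and the channel's current position is locked, filled with
 * freshly clipped samples from mix_buf and unlocked again. */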
static int fmod_run_out (HWVoiceOut *hw, int live)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
int decr;
void *p1 = 0, *p2 = 0;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1 = 0, len2 = 0;
if (!hw->pending_disable) {
return 0;
}
decr = live;
if (fmd->channel >= 0) {
int len = decr;
int old_pos = fmd->old_pos;
int ppos = FSOUND_GetCurrentPosition (fmd->channel);
if (ppos == old_pos || !ppos) {
return 0;
}
if ((old_pos < ppos) && ((old_pos + len) > ppos)) {
len = ppos - old_pos;
}
else {
if ((old_pos > ppos) && ((old_pos + len) > (ppos + hw->samples))) {
len = hw->samples - old_pos + ppos;
}
}
decr = len;
if (audio_bug (AUDIO_FUNC, decr < 0)) {
dolog ("decr=%d live=%d ppos=%d old_pos=%d len=%d\n",
decr, live, ppos, old_pos, len);
return 0;
}
}
if (!decr) {
return 0;
}
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
fmd->old_pos, decr,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hw->info.shift;
len2 = blen2 >> hw->info.shift;
ldebug ("%p %p %d %d %d %d\n", p1, p2, len1, len2, blen1, blen2);
decr = len1 + len2;
if (p1 && len1) {
fmod_write_sample (hw, p1, len1);
}
if (p2 && len2) {
fmod_write_sample (hw, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
fmd->old_pos = (fmd->old_pos + decr) % hw->samples;
return decr;
}
static int aud_to_fmodfmt (audfmt_e fmt, int stereo)
{
int mode = FSOUND_LOOP_NORMAL;
switch (fmt) {
case AUD_FMT_S8:
mode |= FSOUND_SIGNED | FSOUND_8BITS;
break;
case AUD_FMT_U8:
mode |= FSOUND_UNSIGNED | FSOUND_8BITS;
break;
case AUD_FMT_S16:
mode |= FSOUND_SIGNED | FSOUND_16BITS;
break;
case AUD_FMT_U16:
mode |= FSOUND_UNSIGNED | FSOUND_16BITS;
break;
default:
dolog ("Internal logic error: Bad audio format %d\n", fmt);
#ifdef DEBUG_FMOD
abort ();
#endif
mode |= FSOUND_8BITS;
}
mode |= stereo ? FSOUND_STEREO : FSOUND_MONO;
return mode;
}
static void fmod_fini_out (HWVoiceOut *hw)
{
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
if (fmd->fmod_sample) {
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
if (fmd->channel >= 0) {
FSOUND_StopSound (fmd->channel);
}
}
}
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("DAC", "Failed to allocate FMOD sample\n");
return -1;
}
channel = FSOUND_PlaySoundEx (FSOUND_FREE, fmd->fmod_sample, 0, 1);
if (channel < 0) {
fmod_logerr2 ("DAC", "Failed to start playing sound\n");
FSOUND_Sample_Free (fmd->fmod_sample);
return -1;
}
fmd->channel = channel;
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
int status;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
fmod_clear_sample (fmd);
status = FSOUND_SetPaused (fmd->channel, 0);
if (!status) {
fmod_logerr ("Failed to resume channel %d\n", fmd->channel);
}
break;
case VOICE_DISABLE:
status = FSOUND_SetPaused (fmd->channel, 1);
if (!status) {
fmod_logerr ("Failed to pause channel %d\n", fmd->channel);
}
break;
}
return 0;
}
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
if (conf.broken_adc) {
return -1;
}
mode = aud_to_fmodfmt (as->fmt, as->nchannels == 2 ? 1 : 0);
fmd->fmod_sample = FSOUND_Sample_Alloc (
FSOUND_FREE, /* index */
conf.nb_samples, /* length */
mode, /* mode */
as->freq, /* freq */
255, /* volume */
128, /* pan */
255 /* priority */
);
if (!fmd->fmod_sample) {
fmod_logerr2 ("ADC", "Failed to allocate FMOD sample\n");
return -1;
}
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = conf.nb_samples;
return 0;
}
static void fmod_fini_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
if (fmd->fmod_sample) {
FSOUND_Record_Stop ();
FSOUND_Sample_Free (fmd->fmod_sample);
fmd->fmod_sample = 0;
}
}
static int fmod_run_in (HWVoiceIn *hw)
{
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
int hwshift = hw->info.shift;
int live, dead, new_pos, len;
unsigned int blen1 = 0, blen2 = 0;
unsigned int len1, len2;
unsigned int decr;
void *p1, *p2;
live = audio_pcm_hw_get_live_in (hw);
dead = hw->samples - live;
if (!dead) {
return 0;
}
new_pos = FSOUND_Record_GetPosition ();
if (new_pos < 0) {
fmod_logerr ("Could not get recording position\n");
return 0;
}
len = audio_ring_dist (new_pos, hw->wpos, hw->samples);
if (!len) {
return 0;
}
len = audio_MIN (len, dead);
if (fmod_lock_sample (fmd->fmod_sample, &fmd->hw.info,
hw->wpos, len,
&p1, &p2,
&blen1, &blen2)) {
return 0;
}
len1 = blen1 >> hwshift;
len2 = blen2 >> hwshift;
decr = len1 + len2;
if (p1 && blen1) {
hw->conv (hw->conv_buf + hw->wpos, p1, len1);
}
if (p2 && len2) {
hw->conv (hw->conv_buf, p2, len2);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);
hw->wpos = (hw->wpos + decr) % hw->samples;
return decr;
}
static struct {
const char *name;
int type;
} drvtab[] = {
{ .name = "none", .type = FSOUND_OUTPUT_NOSOUND },
#ifdef _WIN32
{ .name = "winmm", .type = FSOUND_OUTPUT_WINMM },
{ .name = "dsound", .type = FSOUND_OUTPUT_DSOUND },
{ .name = "a3d", .type = FSOUND_OUTPUT_A3D },
{ .name = "asio", .type = FSOUND_OUTPUT_ASIO },
#endif
#ifdef __linux__
{ .name = "oss", .type = FSOUND_OUTPUT_OSS },
{ .name = "alsa", .type = FSOUND_OUTPUT_ALSA },
{ .name = "esd", .type = FSOUND_OUTPUT_ESD },
#endif
#ifdef __APPLE__
{ .name = "mac", .type = FSOUND_OUTPUT_MAC },
#endif
#if 0
{ .name = "xbox", .type = FSOUND_OUTPUT_XBOX },
{ .name = "ps2", .type = FSOUND_OUTPUT_PS2 },
{ .name = "gcube", .type = FSOUND_OUTPUT_GC },
#endif
{ .name = "none-realtime", .type = FSOUND_OUTPUT_NOSOUND_NONREALTIME }
};
static void *fmod_audio_init (void)
{
size_t i;
double ver;
int status;
int output_type = -1;
const char *drv = conf.drvname;
ver = FSOUND_GetVersion ();
if (ver < FMOD_VERSION) {
dolog ("Wrong FMOD version %f, need at least %f\n", ver, FMOD_VERSION);
return NULL;
}
#ifdef __linux__
if (ver < 3.75) {
dolog ("FMOD before 3.75 has bug preventing ADC from working\n"
"ADC will be disabled.\n");
conf.broken_adc = 1;
}
#endif
if (drv) {
int found = 0;
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
if (!strcmp (drv, drvtab[i].name)) {
output_type = drvtab[i].type;
found = 1;
break;
}
}
if (!found) {
dolog ("Unknown FMOD driver `%s'\n", drv);
dolog ("Valid drivers:\n");
for (i = 0; i < ARRAY_SIZE (drvtab); i++) {
dolog (" %s\n", drvtab[i].name);
}
}
}
if (output_type != -1) {
status = FSOUND_SetOutput (output_type);
if (!status) {
fmod_logerr ("FSOUND_SetOutput(%d) failed\n", output_type);
return NULL;
}
}
if (conf.bufsize) {
status = FSOUND_SetBufferSize (conf.bufsize);
if (!status) {
fmod_logerr ("FSOUND_SetBufferSize (%d) failed\n", conf.bufsize);
}
}
status = FSOUND_Init (conf.freq, conf.nb_channels, 0);
if (!status) {
fmod_logerr ("FSOUND_Init failed\n");
return NULL;
}
return &conf;
}
static int fmod_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int fmod_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
int status;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
status = FSOUND_Record_StartSample (fmd->fmod_sample, 1);
if (!status) {
fmod_logerr ("Failed to start recording\n");
}
break;
case VOICE_DISABLE:
status = FSOUND_Record_Stop ();
if (!status) {
fmod_logerr ("Failed to stop recording\n");
}
break;
}
return 0;
}
static void fmod_audio_fini (void *opaque)
{
(void) opaque;
FSOUND_Close ();
}
static struct audio_option fmod_options[] = {
{
.name = "DRV",
.tag = AUD_OPT_STR,
.valp = &conf.drvname,
.descr = "FMOD driver"
},
{
.name = "FREQ",
.tag = AUD_OPT_INT,
.valp = &conf.freq,
.descr = "Default frequency"
},
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.nb_samples,
.descr = "Buffer size in samples"
},
{
.name = "CHANNELS",
.tag = AUD_OPT_INT,
.valp = &conf.nb_channels,
.descr = "Number of default channels (1 - mono, 2 - stereo)"
},
{
.name = "BUFSIZE",
.tag = AUD_OPT_INT,
.valp = &conf.bufsize,
.descr = "(undocumented)"
},
{ /* End of list */ }
};
static struct audio_pcm_ops fmod_pcm_ops = {
.init_out = fmod_init_out,
.fini_out = fmod_fini_out,
.run_out = fmod_run_out,
.write = fmod_write,
.ctl_out = fmod_ctl_out,
.init_in = fmod_init_in,
.fini_in = fmod_fini_in,
.run_in = fmod_run_in,
.read = fmod_read,
.ctl_in = fmod_ctl_in
};
struct audio_driver fmod_audio_driver = {
.name = "fmod",
.descr = "FMOD 3.xx http://www.fmod.org",
.options = fmod_options,
.init = fmod_audio_init,
.fini = fmod_audio_fini,
.pcm_ops = &fmod_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (FMODVoiceOut),
.voice_size_in = sizeof (FMODVoiceIn)
};

audio/mixeng.c

@@ -22,9 +22,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/bswap.h"
#include "audio.h"
#define AUDIO_CAP "mixeng"
@@ -271,7 +269,7 @@ f_sample *mixeng_clip[2][2][2][3] = {
* August 21, 1998
* Copyright 1998 Fabrice Bellard.
*
* [Rewrote completely the code of Lance Norskog And Sundry
* [Rewrote completly the code of Lance Norskog And Sundry
* Contributors with a more efficient algorithm.]
*
* This source code is freely redistributable and may be used for

audio/noaudio.c

@@ -21,9 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "qemu/timer.h"
@@ -50,8 +48,8 @@ static int no_run_out (HWVoiceOut *hw, int live)
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - no->old_ticks;
bytes = muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
bytes = audio_MIN(bytes, INT_MAX);
bytes = muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
bytes = audio_MIN (bytes, INT_MAX);
samples = bytes >> hw->info.shift;
no->old_ticks = now;
@@ -62,10 +60,10 @@ static int no_run_out (HWVoiceOut *hw, int live)
static int no_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write(sw, buf, len);
return audio_pcm_sw_write (sw, buf, len);
}
static int no_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
static int no_init_out (HWVoiceOut *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -84,7 +82,7 @@ static int no_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int no_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int no_init_in (HWVoiceIn *hw, struct audsettings *as)
{
audio_pcm_init_info (&hw->info, as);
hw->samples = 1024;
@@ -107,7 +105,7 @@ static int no_run_in (HWVoiceIn *hw)
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t ticks = now - no->old_ticks;
int64_t bytes =
muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
no->old_ticks = now;
bytes = audio_MIN (bytes, INT_MAX);

audio/ossaudio.c

@@ -21,15 +21,15 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu/host-utils.h"
#include "audio.h"
#include "trace.h"
#define AUDIO_CAP "oss"
#include "audio_int.h"
@@ -38,16 +38,6 @@
#define USE_DSP_POLICY
#endif
typedef struct OSSConf {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int exclusive;
int policy;
} OSSConf;
typedef struct OSSVoiceOut {
HWVoiceOut hw;
void *pcm_buf;
@@ -57,7 +47,6 @@ typedef struct OSSVoiceOut {
int fragsize;
int mmapped;
int pending;
OSSConf *conf;
} OSSVoiceOut;
typedef struct OSSVoiceIn {
@@ -66,9 +55,28 @@ typedef struct OSSVoiceIn {
int fd;
int nfrags;
int fragsize;
OSSConf *conf;
} OSSVoiceIn;
static struct {
int try_mmap;
int nfrags;
int fragsize;
const char *devpath_out;
const char *devpath_in;
int debug;
int exclusive;
int policy;
} conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.debug = 0,
.exclusive = 0,
.policy = 5
};
struct oss_params {
int freq;
audfmt_e fmt;
@@ -130,18 +138,18 @@ static void oss_helper_poll_in (void *opaque)
audio_run ("oss_poll_in");
}
static void oss_poll_out (HWVoiceOut *hw)
static int oss_poll_out (HWVoiceOut *hw)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
return qemu_set_fd_handler (oss->fd, NULL, oss_helper_poll_out, NULL);
}
static void oss_poll_in (HWVoiceIn *hw)
static int oss_poll_in (HWVoiceIn *hw)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
return qemu_set_fd_handler (oss->fd, oss_helper_poll_in, NULL, NULL);
}
static int oss_write (SWVoiceOut *sw, void *buf, int len)
@@ -264,18 +272,18 @@ static int oss_get_version (int fd, int *version, const char *typ)
#endif
static int oss_open (int in, struct oss_params *req,
struct oss_params *obt, int *pfd, OSSConf* conf)
struct oss_params *obt, int *pfd)
{
int fd;
int oflags = conf->exclusive ? O_EXCL : 0;
int oflags = conf.exclusive ? O_EXCL : 0;
audio_buf_info abinfo;
int fmt, freq, nchannels;
int setfragment = 1;
const char *dspname = in ? conf->devpath_in : conf->devpath_out;
const char *dspname = in ? conf.devpath_in : conf.devpath_out;
const char *typ = in ? "ADC" : "DAC";
/* Kludge needed to have working mmap on Linux */
oflags |= conf->try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
oflags |= conf.try_mmap ? O_RDWR : (in ? O_RDONLY : O_WRONLY);
fd = open (dspname, oflags | O_NONBLOCK);
if (-1 == fd) {
@@ -309,18 +317,20 @@ static int oss_open (int in, struct oss_params *req,
}
#ifdef USE_DSP_POLICY
if (conf->policy >= 0) {
if (conf.policy >= 0) {
int version;
if (!oss_get_version (fd, &version, typ)) {
trace_oss_version(version);
if (conf.debug) {
dolog ("OSS version = %#x\n", version);
}
if (version >= 0x040000) {
int policy = conf->policy;
int policy = conf.policy;
if (ioctl (fd, SNDCTL_DSP_POLICY, &policy)) {
oss_logerr2 (errno, typ,
"Failed to set timing policy to %d\n",
conf->policy);
conf.policy);
goto err;
}
setfragment = 0;
@@ -448,12 +458,19 @@ static int oss_run_out (HWVoiceOut *hw, int live)
}
if (abinfo.bytes > bufsize) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n"
"please report your OS/audio hw to av1474@comtv.ru\n",
abinfo.bytes, bufsize);
}
abinfo.bytes = bufsize;
}
if (abinfo.bytes < 0) {
trace_oss_invalid_available_size(abinfo.bytes, bufsize);
if (conf.debug) {
dolog ("warning: Invalid available size, size=%d bufsize=%d\n",
abinfo.bytes, bufsize);
}
return 0;
}
@@ -493,8 +510,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int oss_init_out (HWVoiceOut *hw, struct audsettings *as)
{
OSSVoiceOut *oss = (OSSVoiceOut *) hw;
struct oss_params req, obt;
@@ -503,17 +519,16 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (0, &req, &obt, &fd, conf)) {
if (oss_open (0, &req, &obt, &fd)) {
return -1;
}
@@ -540,7 +555,7 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
hw->samples = (obt.nfrags * obt.fragsize) >> hw->info.shift;
oss->mmapped = 0;
if (conf->try_mmap) {
if (conf.try_mmap) {
oss->pcm_buf = mmap (
NULL,
hw->samples << hw->info.shift,
@@ -600,7 +615,6 @@ static int oss_init_out(HWVoiceOut *hw, struct audsettings *as,
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -620,8 +634,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
va_end (ap);
ldebug ("enabling voice\n");
if (poll_mode) {
oss_poll_out (hw);
if (poll_mode && oss_poll_out (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -663,7 +676,7 @@ static int oss_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int oss_init_in (HWVoiceIn *hw, struct audsettings *as)
{
OSSVoiceIn *oss = (OSSVoiceIn *) hw;
struct oss_params req, obt;
@@ -672,16 +685,15 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
int fd;
audfmt_e effective_fmt;
struct audsettings obt_as;
OSSConf *conf = drv_opaque;
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf->fragsize;
req.nfrags = conf->nfrags;
if (oss_open (1, &req, &obt, &fd, conf)) {
req.fragsize = conf.fragsize;
req.nfrags = conf.nfrags;
if (oss_open (1, &req, &obt, &fd)) {
return -1;
}
@@ -715,7 +727,6 @@ static int oss_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
oss->fd = fd;
oss->conf = conf;
return 0;
}
@@ -817,8 +828,7 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode) {
oss_poll_in (hw);
if (poll_mode && oss_poll_in (hw)) {
poll_mode = 0;
}
hw->poll_mode = poll_mode;
@@ -835,79 +845,71 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
return 0;
}
static OSSConf glob_conf = {
.try_mmap = 0,
.nfrags = 4,
.fragsize = 4096,
.devpath_out = "/dev/dsp",
.devpath_in = "/dev/dsp",
.exclusive = 0,
.policy = 5
};
static void *oss_audio_init (void)
{
OSSConf *conf = g_malloc(sizeof(OSSConf));
*conf = glob_conf;
if (access(conf->devpath_in, R_OK | W_OK) < 0 ||
access(conf->devpath_out, R_OK | W_OK) < 0) {
g_free(conf);
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return conf;
return &conf;
}
static void oss_audio_fini (void *opaque)
{
g_free(opaque);
(void) opaque;
}
static struct audio_option oss_options[] = {
{
.name = "FRAGSIZE",
.tag = AUD_OPT_INT,
.valp = &glob_conf.fragsize,
.valp = &conf.fragsize,
.descr = "Fragment size in bytes"
},
{
.name = "NFRAGS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.nfrags,
.valp = &conf.nfrags,
.descr = "Number of fragments"
},
{
.name = "MMAP",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.try_mmap,
.valp = &conf.try_mmap,
.descr = "Try using memory mapped access"
},
{
.name = "DAC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_out,
.valp = &conf.devpath_out,
.descr = "Path to DAC device"
},
{
.name = "ADC_DEV",
.tag = AUD_OPT_STR,
.valp = &glob_conf.devpath_in,
.valp = &conf.devpath_in,
.descr = "Path to ADC device"
},
{
.name = "EXCLUSIVE",
.tag = AUD_OPT_BOOL,
.valp = &glob_conf.exclusive,
.descr = "Open device in exclusive mode (vmix won't work)"
.valp = &conf.exclusive,
.descr = "Open device in exclusive mode (vmix wont work)"
},
#ifdef USE_DSP_POLICY
{
.name = "POLICY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.policy,
.valp = &conf.policy,
.descr = "Set the timing policy of the device, -1 to use fragment mode",
},
#endif
{
.name = "DEBUG",
.tag = AUD_OPT_BOOL,
.valp = &conf.debug,
.descr = "Turn on some debugging messages"
},
{ /* End of list */ }
};

audio/paaudio.c

@@ -1,5 +1,4 @@
/* public domain */
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "audio.h"
@@ -9,19 +8,6 @@
#include "audio_int.h"
#include "audio_pt_int.h"
typedef struct {
int samples;
char *server;
char *sink;
char *source;
} PAConf;
typedef struct {
PAConf conf;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
typedef struct {
HWVoiceOut hw;
int done;
@@ -31,7 +17,6 @@ typedef struct {
pa_stream *stream;
void *pcm_buf;
struct audio_pt pt;
paaudio *g;
} PAVoiceOut;
typedef struct {
@@ -45,10 +30,20 @@ typedef struct {
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
paaudio *g;
} PAVoiceIn;
static void qpa_audio_fini(void *opaque);
typedef struct {
int samples;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
.samples = 4096,
};
static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
{
@@ -111,7 +106,7 @@ static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -165,7 +160,7 @@ unlock_and_fail:
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = p->g;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
@@ -227,7 +222,7 @@ static void *qpa_thread_out (void *arg)
}
}
decr = to_mix = audio_MIN (pa->live, pa->g->conf.samples >> 2);
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -319,7 +314,7 @@ static void *qpa_thread_in (void *arg)
}
}
incr = to_grab = audio_MIN (pa->dead, pa->g->conf.samples >> 2);
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -435,7 +430,7 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
@@ -454,7 +449,7 @@ static void context_state_cb (pa_context *c, void *userdata)
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
@@ -472,21 +467,23 @@ static void stream_state_cb (pa_stream *s, void * userdata)
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = userdata;
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
paaudio *g,
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
@@ -537,15 +534,13 @@ fail:
return NULL;
}
static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
pa_buffer_attr ba;
static pa_sample_spec ss;
static pa_buffer_attr ba;
struct audsettings obt_as = *as;
PAVoiceOut *pa = (PAVoiceOut *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -563,10 +558,11 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_PLAYBACK,
g->conf.sink,
glob_paaudio.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
@@ -578,7 +574,7 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
@@ -605,13 +601,12 @@ static int qpa_init_out(HWVoiceOut *hw, struct audsettings *as,
return -1;
}
static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int error;
pa_sample_spec ss;
static pa_sample_spec ss;
struct audsettings obt_as = *as;
PAVoiceIn *pa = (PAVoiceIn *) hw;
paaudio *g = pa->g = drv_opaque;
ss.format = audfmt_to_pa (as->fmt, as->endianness);
ss.channels = as->nchannels;
@@ -620,10 +615,11 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
g,
glob_paaudio.server,
"qemu",
PA_STREAM_RECORD,
g->conf.source,
glob_paaudio.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
@@ -635,7 +631,7 @@ static int qpa_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = g->conf.samples;
hw->samples = glob_paaudio.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
@@ -707,7 +703,7 @@ static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
@@ -759,7 +755,7 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = pa->g;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
@@ -809,31 +805,23 @@ static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
}
/* common */
static PAConf glob_conf = {
.samples = 4096,
};
static void *qpa_audio_init (void)
{
paaudio *g = g_malloc(sizeof(paaudio));
g->conf = glob_conf;
g->mainloop = NULL;
g->context = NULL;
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop),
g->conf.server);
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, g->conf.server, 0, NULL) < 0) {
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
@@ -866,13 +854,12 @@ static void *qpa_audio_init (void)
pa_threaded_mainloop_unlock (g->mainloop);
return g;
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
qpa_audio_fini(g);
return NULL;
}
@@ -887,38 +874,39 @@ static void qpa_audio_fini (void *opaque)
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g_free(g);
g->mainloop = NULL;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_conf.samples,
.valp = &glob_paaudio.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_conf.server,
.valp = &glob_paaudio.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_conf.sink,
.valp = &glob_paaudio.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_conf.source,
.valp = &glob_paaudio.source,
.descr = "source device name"
},
{ /* End of list */ }

audio/sdlaudio.c

@@ -21,7 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <SDL.h>
#include <SDL_thread.h>
#include "qemu-common.h"
@@ -56,7 +55,6 @@ static struct SDLAudioState {
SDL_mutex *mutex;
SDL_sem *sem;
int initialized;
bool driver_created;
} glob_sdl;
typedef struct SDLAudioState SDLAudioState;
@@ -334,8 +332,7 @@ static void sdl_fini_out (HWVoiceOut *hw)
sdl_close (&glob_sdl);
}
static int sdl_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
{
SDLVoiceOut *sdl = (SDLVoiceOut *) hw;
SDLAudioState *s = &glob_sdl;
@@ -395,10 +392,6 @@ static int sdl_ctl_out (HWVoiceOut *hw, int cmd, ...)
static void *sdl_audio_init (void)
{
SDLAudioState *s = &glob_sdl;
if (s->driver_created) {
sdl_logerr("Can't create multiple sdl backends\n");
return NULL;
}
if (SDL_InitSubSystem (SDL_INIT_AUDIO)) {
sdl_logerr ("SDL failed to initialize audio subsystem\n");
@@ -420,7 +413,6 @@ static void *sdl_audio_init (void)
return NULL;
}
s->driver_created = true;
return s;
}
@@ -431,7 +423,6 @@ static void sdl_audio_fini (void *opaque)
SDL_DestroySemaphore (s->sem);
SDL_DestroyMutex (s->mutex);
SDL_QuitSubSystem (SDL_INIT_AUDIO);
s->driver_created = false;
}
static struct audio_option sdl_options[] = {

audio/spiceaudio.c

@@ -17,10 +17,7 @@
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "qemu/host-utils.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
#include "ui/qemu-spice.h"
@@ -105,11 +102,11 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - rate->start_ticks;
bytes = muldiv64(ticks, info->bytes_per_second, NANOSECONDS_PER_SECOND);
bytes = muldiv64 (ticks, info->bytes_per_second, get_ticks_per_sec ());
samples = (bytes - rate->bytes_sent) >> info->shift;
if (samples < 0 || samples > 65536) {
error_report("Resetting rate control (%" PRId64 " samples)", samples);
rate_start(rate);
rate_start (rate);
samples = 0;
}
rate->bytes_sent += samples << info->shift;
@@ -118,8 +115,7 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
/* playback */
static int line_out_init(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
@@ -247,7 +243,7 @@ static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
/* record */
static int line_in_init(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;

audio/wavaudio.c

@@ -21,8 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "hw/hw.h"
#include "qemu/timer.h"
#include "audio.h"
@@ -37,10 +36,15 @@ typedef struct WAVVoiceOut {
int total_samples;
} WAVVoiceOut;
typedef struct {
static struct {
struct audsettings settings;
const char *wav_path;
} WAVConf;
} conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static int wav_run_out (HWVoiceOut *hw, int live)
{
@@ -51,7 +55,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t ticks = now - wav->old_ticks;
int64_t bytes =
muldiv64(ticks, hw->info.bytes_per_second, NANOSECONDS_PER_SECOND);
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
if (bytes > INT_MAX) {
samples = INT_MAX >> hw->info.shift;
@@ -101,8 +105,7 @@ static void le_store (uint8_t *buf, uint32_t val, int len)
}
}
static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
void *drv_opaque)
static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
{
WAVVoiceOut *wav = (WAVVoiceOut *) hw;
int bits16 = 0, stereo = 0;
@@ -112,8 +115,9 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
0x02, 0x00, 0x44, 0xac, 0x00, 0x00, 0x10, 0xb1, 0x02, 0x00, 0x04,
0x00, 0x10, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x00, 0x00, 0x00
};
WAVConf *conf = drv_opaque;
struct audsettings wav_as = conf->settings;
struct audsettings wav_as = conf.settings;
(void) as;
stereo = wav_as.nchannels == 2;
switch (wav_as.fmt) {
@@ -151,10 +155,10 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf->wav_path, "wb");
wav->f = fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf->wav_path, strerror (errno));
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
@@ -222,49 +226,40 @@ static int wav_ctl_out (HWVoiceOut *hw, int cmd, ...)
return 0;
}
static WAVConf glob_conf = {
.settings.freq = 44100,
.settings.nchannels = 2,
.settings.fmt = AUD_FMT_S16,
.wav_path = "qemu.wav"
};
static void *wav_audio_init (void)
{
WAVConf *conf = g_malloc(sizeof(WAVConf));
*conf = glob_conf;
return conf;
return &conf;
}
static void wav_audio_fini (void *opaque)
{
(void) opaque;
ldebug ("wav_fini");
g_free(opaque);
}
static struct audio_option wav_options[] = {
{
.name = "FREQUENCY",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.freq,
.valp = &conf.settings.freq,
.descr = "Frequency"
},
{
.name = "FORMAT",
.tag = AUD_OPT_FMT,
.valp = &glob_conf.settings.fmt,
.valp = &conf.settings.fmt,
.descr = "Format"
},
{
.name = "DAC_FIXED_CHANNELS",
.tag = AUD_OPT_INT,
.valp = &glob_conf.settings.nchannels,
.valp = &conf.settings.nchannels,
.descr = "Number of channels (1 - mono, 2 - stereo)"
},
{
.name = "PATH",
.tag = AUD_OPT_STR,
.valp = &glob_conf.wav_path,
.valp = &conf.wav_path,
.descr = "Path to wave file"
},
{ /* End of list */ }

audio/wavcapture.c

@@ -1,7 +1,5 @@
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "qemu/error-report.h"
#include "audio.h"
typedef struct {

audio/winwaveaudio.c (new file, 717 lines)

@@ -0,0 +1,717 @@
/* public domain */
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "audio.h"
#define AUDIO_CAP "winwave"
#include "audio_int.h"
#include <windows.h>
#include <mmsystem.h>
#include "audio_win_int.h"
static struct {
int dac_headers;
int dac_samples;
int adc_headers;
int adc_samples;
} conf = {
.dac_headers = 4,
.dac_samples = 1024,
.adc_headers = 4,
.adc_samples = 1024
};
typedef struct {
HWVoiceOut hw;
HWAVEOUT hwo;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int avail;
int pending;
int curhdr;
int paused;
CRITICAL_SECTION crit_sect;
} WaveVoiceOut;
typedef struct {
HWVoiceIn hw;
HWAVEIN hwi;
WAVEHDR *hdrs;
HANDLE event;
void *pcm_buf;
int curhdr;
int paused;
int rpos;
int avail;
CRITICAL_SECTION crit_sect;
} WaveVoiceIn;
static void winwave_log_mmresult (MMRESULT mr)
{
const char *str = "BUG";
switch (mr) {
case MMSYSERR_NOERROR:
str = "Success";
break;
case MMSYSERR_INVALHANDLE:
str = "Specified device handle is invalid";
break;
case MMSYSERR_BADDEVICEID:
str = "Specified device id is out of range";
break;
case MMSYSERR_NODRIVER:
str = "No device driver is present";
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
break;
case WAVERR_SYNC:
str = "Device is synchronous but waveOutOpen was called "
"without using the WINWAVE_ALLOWSYNC flag";
break;
case WAVERR_UNPREPARED:
str = "The data block pointed to by the pwh parameter "
"hasn't been prepared";
break;
case WAVERR_STILLPLAYING:
str = "There are still buffers in the queue";
break;
default:
dolog ("Reason: Unknown (MMRESULT %#x)\n", mr);
return;
}
dolog ("Reason: %s\n", str);
}
static void GCC_FMT_ATTR (2, 3) winwave_logerr (
MMRESULT mr,
const char *fmt,
...
)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
AUD_log (NULL, " failed\n");
winwave_log_mmresult (mr);
}
static void winwave_anal_close_out (WaveVoiceOut *wave)
{
MMRESULT mr;
mr = waveOutClose (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutClose");
}
wave->hwo = NULL;
}
static void CALLBACK winwave_callback_out (
HWAVEOUT hwo,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceOut *wave = (WaveVoiceOut *) dwInstance;
switch (msg) {
case WOM_DONE:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.dac_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("DAC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WOM_CLOSE:
case WOM_OPEN:
break;
default:
dolog ("unknown wave out callback msg %x\n", msg);
}
}
static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceOut *wave;
wave = (WaveVoiceOut *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveOutOpen (&wave->hwo, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_out,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.dac_samples * conf.dac_headers;
wave->avail = hw->samples;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.dac_samples,
conf.dac_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.dac_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.dac_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveOutPrepareHeader (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutPrepareHeader(%d)", i);
goto err4;
}
}
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
err0:
return -1;
}
static int winwave_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
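/* The DAC side keeps conf.dac_headers WAVEHDRs over one contiguous pcm_buf:
 * run_out clips as much mixed audio as fits into the free space and submits
 * a header with waveOutWrite whenever a full header's worth is pending. */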
static int winwave_run_out (HWVoiceOut *hw, int live)
{
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
int decr;
int doreset;
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (live, wave->avail);
decr = audio_pcm_hw_clip_out (hw, wave->pcm_buf, decr, wave->pending);
wave->pending += decr;
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
doreset = hw->poll_mode && (wave->pending >= conf.dac_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("DAC ResetEvent failed %lx\n", GetLastError ());
}
while (wave->pending >= conf.dac_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveOutWrite (wave->hwo, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutWrite(%d)", wave->curhdr);
break;
}
wave->pending -= conf.dac_samples;
wave->curhdr = (wave->curhdr + 1) % conf.dac_headers;
}
return decr;
}
static void winwave_poll (void *opaque)
{
(void) opaque;
audio_run ("winwave_poll");
}
static void winwave_fini_out (HWVoiceOut *hw)
{
int i;
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
for (i = 0; i < conf.dac_headers; ++i) {
mr = waveOutUnprepareHeader (wave->hwo, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutUnprepareHeader(%d)", i);
}
}
winwave_anal_close_out (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("DAC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceOut *wave = (WaveVoiceOut *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("DAC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
wave->paused = 0;
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return -1;
}
static void winwave_anal_close_in (WaveVoiceIn *wave)
{
MMRESULT mr;
mr = waveInClose (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInClose");
}
wave->hwi = NULL;
}
static void CALLBACK winwave_callback_in (
HWAVEIN *hwi,
UINT msg,
DWORD_PTR dwInstance,
DWORD_PTR dwParam1,
DWORD_PTR dwParam2
)
{
WaveVoiceIn *wave = (WaveVoiceIn *) dwInstance;
switch (msg) {
case WIM_DATA:
{
WAVEHDR *h = (WAVEHDR *) dwParam1;
if (!h->dwUser) {
h->dwUser = 1;
EnterCriticalSection (&wave->crit_sect);
{
wave->avail += conf.adc_samples;
}
LeaveCriticalSection (&wave->crit_sect);
if (wave->hw.poll_mode) {
if (!SetEvent (wave->event)) {
dolog ("ADC SetEvent failed %lx\n", GetLastError ());
}
}
}
}
break;
case WIM_CLOSE:
case WIM_OPEN:
break;
default:
dolog ("unknown wave in callback msg %x\n", msg);
}
}
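/* Re-queues one WAVEHDR per conf.adc_samples of drained space so the device
 * always has capture buffers outstanding. */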
static void winwave_add_buffers (WaveVoiceIn *wave, int samples)
{
int doreset;
doreset = wave->hw.poll_mode && (samples >= conf.adc_samples);
if (doreset && !ResetEvent (wave->event)) {
dolog ("ADC ResetEvent failed %lx\n", GetLastError ());
}
while (samples >= conf.adc_samples) {
MMRESULT mr;
WAVEHDR *h = &wave->hdrs[wave->curhdr];
h->dwUser = 0;
mr = waveInAddBuffer (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInAddBuffer(%d)", wave->curhdr);
}
wave->curhdr = (wave->curhdr + 1) % conf.adc_headers;
samples -= conf.adc_samples;
}
}
static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int i;
int err;
MMRESULT mr;
WAVEFORMATEX wfx;
WaveVoiceIn *wave;
wave = (WaveVoiceIn *) hw;
InitializeCriticalSection (&wave->crit_sect);
err = waveformat_from_audio_settings (&wfx, as);
if (err) {
goto err0;
}
mr = waveInOpen (&wave->hwi, WAVE_MAPPER, &wfx,
(DWORD_PTR) winwave_callback_in,
(DWORD_PTR) wave, CALLBACK_FUNCTION);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInOpen");
goto err1;
}
wave->hdrs = audio_calloc (AUDIO_FUNC, conf.dac_headers,
sizeof (*wave->hdrs));
if (!wave->hdrs) {
goto err2;
}
audio_pcm_init_info (&hw->info, as);
hw->samples = conf.adc_samples * conf.adc_headers;
wave->avail = 0;
wave->pcm_buf = audio_calloc (AUDIO_FUNC, conf.adc_samples,
conf.adc_headers << hw->info.shift);
if (!wave->pcm_buf) {
goto err3;
}
for (i = 0; i < conf.adc_headers; ++i) {
WAVEHDR *h = &wave->hdrs[i];
h->dwUser = 0;
h->dwBufferLength = conf.adc_samples << hw->info.shift;
h->lpData = advance (wave->pcm_buf, i * h->dwBufferLength);
h->dwFlags = 0;
mr = waveInPrepareHeader (wave->hwi, h, sizeof (*h));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInPrepareHeader(%d)", i);
goto err4;
}
}
wave->paused = 1;
winwave_add_buffers (wave, hw->samples);
return 0;
err4:
g_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
err0:
return -1;
}
static void winwave_fini_in (HWVoiceIn *hw)
{
int i;
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
mr = waveInReset (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInReset");
}
for (i = 0; i < conf.adc_headers; ++i) {
mr = waveInUnprepareHeader (wave->hwi, &wave->hdrs[i],
sizeof (wave->hdrs[i]));
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInUnprepareHeader(%d)", i);
}
}
winwave_anal_close_in (wave);
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
if (!CloseHandle (wave->event)) {
dolog ("ADC CloseHandle failed %lx\n", GetLastError ());
}
wave->event = NULL;
}
g_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
wave->hdrs = NULL;
}
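/* Capture mirrors playback: completed WAVEHDRs raise wave->avail in the
 * callback, run_in converts that many samples from pcm_buf into conv_buf
 * and immediately re-queues the drained headers. */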
static int winwave_run_in (HWVoiceIn *hw)
{
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
int live = audio_pcm_hw_get_live_in (hw);
int dead = hw->samples - live;
int decr, ret;
if (!dead) {
return 0;
}
EnterCriticalSection (&wave->crit_sect);
{
decr = audio_MIN (dead, wave->avail);
wave->avail -= decr;
}
LeaveCriticalSection (&wave->crit_sect);
ret = decr;
while (decr) {
int left = hw->samples - hw->wpos;
int conv = audio_MIN (left, decr);
hw->conv (hw->conv_buf + hw->wpos,
advance (wave->pcm_buf, wave->rpos << hw->info.shift),
conv);
wave->rpos = (wave->rpos + conv) % hw->samples;
hw->wpos = (hw->wpos + conv) % hw->samples;
decr -= conv;
}
winwave_add_buffers (wave, ret);
return ret;
}
static int winwave_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int winwave_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
MMRESULT mr;
WaveVoiceIn *wave = (WaveVoiceIn *) hw;
switch (cmd) {
case VOICE_ENABLE:
{
va_list ap;
int poll_mode;
va_start (ap, cmd);
poll_mode = va_arg (ap, int);
va_end (ap);
if (poll_mode && !wave->event) {
wave->event = CreateEvent (NULL, TRUE, TRUE, NULL);
if (!wave->event) {
dolog ("ADC CreateEvent: %lx, poll mode will be disabled\n",
GetLastError ());
}
}
if (wave->event) {
int ret;
ret = qemu_add_wait_object (wave->event, winwave_poll, wave);
hw->poll_mode = (ret == 0);
}
else {
hw->poll_mode = 0;
}
if (wave->paused) {
mr = waveInStart (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveInStop (wave->hwi);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveInStop");
}
else {
wave->paused = 1;
}
}
if (wave->event) {
qemu_del_wait_object (wave->event, winwave_poll, wave);
}
return 0;
}
return 0;
}
static void *winwave_audio_init (void)
{
return &conf;
}
static void winwave_audio_fini (void *opaque)
{
(void) opaque;
}
static struct audio_option winwave_options[] = {
{
.name = "DAC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.dac_headers,
.descr = "DAC number of headers",
},
{
.name = "DAC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.dac_samples,
.descr = "DAC number of samples per header",
},
{
.name = "ADC_HEADERS",
.tag = AUD_OPT_INT,
.valp = &conf.adc_headers,
.descr = "ADC number of headers",
},
{
.name = "ADC_SAMPLES",
.tag = AUD_OPT_INT,
.valp = &conf.adc_samples,
.descr = "ADC number of samples per header",
},
{ /* End of list */ }
};
static struct audio_pcm_ops winwave_pcm_ops = {
.init_out = winwave_init_out,
.fini_out = winwave_fini_out,
.run_out = winwave_run_out,
.write = winwave_write,
.ctl_out = winwave_ctl_out,
.init_in = winwave_init_in,
.fini_in = winwave_fini_in,
.run_in = winwave_run_in,
.read = winwave_read,
.ctl_in = winwave_ctl_in
};
struct audio_driver winwave_audio_driver = {
.name = "winwave",
.descr = "Windows Waveform Audio http://msdn.microsoft.com",
.options = winwave_options,
.init = winwave_audio_init,
.fini = winwave_audio_fini,
.pcm_ops = &winwave_pcm_ops,
.can_be_default = 1,
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (WaveVoiceOut),
.voice_size_in = sizeof (WaveVoiceIn)
};

backends/baum.c

@@ -21,8 +21,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "sysemu/char.h"
#include "qemu/timer.h"
@@ -305,7 +303,7 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
return 0;
cur++;
}
DPRINTF("Dropped %td bytes!\n", cur - buf);
DPRINTF("Dropped %d bytes!\n", cur - buf);
}
#define EAT(c) do {\
@@ -337,7 +335,7 @@ static int baum_eat_packet(BaumDriverState *baum, const uint8_t *buf, int len)
/* Allow 100ms to complete the DisplayData packet */
timer_mod(baum->cellCount_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND / 10);
get_ticks_per_sec() / 10);
for (i = 0; i < baum->x * baum->y ; i++) {
EAT(c);
cells[i] = c;
@@ -563,12 +561,8 @@ static void baum_close(struct CharDriverState *chr)
g_free(baum);
}
static CharDriverState *chr_baum_init(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
CharDriverState *chr_baum_init(void)
{
ChardevCommon *common = backend->u.braille.data;
BaumDriverState *baum;
CharDriverState *chr;
brlapi_handle_t *handle;
@@ -579,12 +573,8 @@ static CharDriverState *chr_baum_init(const char *id,
#endif
int tty;
chr = qemu_chr_alloc(common, errp);
if (!chr) {
return NULL;
}
baum = g_malloc0(sizeof(BaumDriverState));
baum->chr = chr;
baum->chr = chr = qemu_chr_alloc();
chr->opaque = baum;
chr->chr_write = baum_write;
@@ -596,16 +586,14 @@ static CharDriverState *chr_baum_init(const char *id,
baum->brlapi_fd = brlapi__openConnection(handle, NULL, NULL);
if (baum->brlapi_fd == -1) {
error_setg(errp, "brlapi__openConnection: %s",
brlapi_strerror(brlapi_error_location()));
brlapi_perror("baum_init: brlapi_openConnection");
goto fail_handle;
}
baum->cellCount_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, baum_cellCount_timer_cb, baum);
if (brlapi__getDisplaySize(handle, &baum->x, &baum->y) == -1) {
error_setg(errp, "brlapi__getDisplaySize: %s",
brlapi_strerror(brlapi_error_location()));
brlapi_perror("baum_init: brlapi_getDisplaySize");
goto fail;
}
@@ -621,8 +609,7 @@ static CharDriverState *chr_baum_init(const char *id,
tty = BRLAPI_TTY_DEFAULT;
if (brlapi__enterTtyMode(handle, tty, NULL) == -1) {
error_setg(errp, "brlapi__enterTtyMode: %s",
brlapi_strerror(brlapi_error_location()));
brlapi_perror("baum_init: brlapi_enterTtyMode");
goto fail;
}
@@ -642,8 +629,7 @@ fail_handle:
static void register_types(void)
{
register_char_driver("braille", CHARDEV_BACKEND_KIND_BRAILLE, NULL,
chr_baum_init);
register_char_driver("braille", CHARDEV_BACKEND_KIND_BRAILLE, NULL);
}
type_init(register_types);

backends/hostmem-file.c

@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
@@ -45,21 +43,18 @@ file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
return;
}
if (!fb->mem_path) {
error_setg(errp, "mem-path property not set");
error_setg(errp, "mem_path property not set");
return;
}
#ifndef CONFIG_LINUX
error_setg(errp, "-mem-path not supported on this host");
#else
if (!memory_region_size(&backend->mr)) {
gchar *path;
backend->force_prealloc = mem_prealloc;
path = object_get_canonical_path(OBJECT(backend));
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend),
path,
object_get_canonical_path(OBJECT(backend)),
backend->size, fb->share,
fb->mem_path, errp);
g_free(path);
}
#endif
}
@@ -88,7 +83,9 @@ static void set_mem_path(Object *o, const char *str, Error **errp)
error_setg(errp, "cannot change property value");
return;
}
g_free(fb->mem_path);
if (fb->mem_path) {
g_free(fb->mem_path);
}
fb->mem_path = g_strdup(str);
}
@@ -121,19 +118,11 @@ file_backend_instance_init(Object *o)
set_mem_path, NULL);
}
static void file_backend_instance_finalize(Object *o)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
g_free(fb->mem_path);
}
static const TypeInfo file_backend_info = {
.name = TYPE_MEMORY_BACKEND_FILE,
.parent = TYPE_MEMORY_BACKEND,
.class_init = file_backend_class_init,
.instance_init = file_backend_instance_init,
.instance_finalize = file_backend_instance_finalize,
.instance_size = sizeof(HostMemoryBackendFile),
};

backends/hostmem-ram.c

@@ -9,9 +9,7 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "qapi/error.h"
#include "qom/object_interfaces.h"
#define TYPE_MEMORY_BACKEND_RAM "memory-backend-ram"


@@ -9,13 +9,11 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/hostmem.h"
#include "hw/boards.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "qapi-types.h"
#include "qapi-visit.h"
#include "qapi/qmp/qerror.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
@@ -28,18 +26,18 @@ QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_INTERLEAVE != MPOL_INTERLEAVE);
#endif
static void
host_memory_backend_get_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_get_size(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint64_t value = backend->size;
visit_type_size(v, name, &value, errp);
visit_type_size(v, &value, name, errp);
}
static void
host_memory_backend_set_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_set_size(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
Error *local_err = NULL;
@@ -50,7 +48,7 @@ host_memory_backend_set_size(Object *obj, Visitor *v, const char *name,
goto out;
}
visit_type_size(v, name, &value, &local_err);
visit_type_size(v, &value, name, &local_err);
if (local_err) {
goto out;
}
@@ -65,8 +63,8 @@ out:
}
static void
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *host_nodes = NULL;
@@ -93,18 +91,18 @@ host_memory_backend_get_host_nodes(Object *obj, Visitor *v, const char *name,
node = &(*node)->next;
} while (true);
visit_type_uint16List(v, name, &host_nodes, errp);
visit_type_uint16List(v, &host_nodes, name, errp);
}
static void
host_memory_backend_set_host_nodes(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
host_memory_backend_set_host_nodes(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
#ifdef CONFIG_NUMA
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *l = NULL;
visit_type_uint16List(v, name, &l, errp);
visit_type_uint16List(v, &l, name, errp);
while (l) {
bitmap_set(backend->host_nodes, l->value, 1);
@@ -115,17 +113,24 @@ host_memory_backend_set_host_nodes(Object *obj, Visitor *v, const char *name,
#endif
}
static int
host_memory_backend_get_policy(Object *obj, Error **errp G_GNUC_UNUSED)
static void
host_memory_backend_get_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->policy;
int policy = backend->policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
}
static void
host_memory_backend_set_policy(Object *obj, int policy, Error **errp)
host_memory_backend_set_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
int policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
backend->policy = policy;
#ifndef CONFIG_NUMA
@@ -225,10 +230,11 @@ static void host_memory_backend_set_prealloc(Object *obj, bool value,
static void host_memory_backend_init(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
MachineState *machine = MACHINE(qdev_get_machine());
backend->merge = machine_mem_merge(machine);
backend->dump = machine_dump_guest_core(machine);
backend->merge = qemu_opt_get_bool(qemu_get_machine_opts(),
"mem-merge", true);
backend->dump = qemu_opt_get_bool(qemu_get_machine_opts(),
"dump-guest-core", true);
backend->prealloc = mem_prealloc;
object_property_add_bool(obj, "merge",
@@ -246,10 +252,9 @@ static void host_memory_backend_init(Object *obj)
object_property_add(obj, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes, NULL, NULL, NULL);
object_property_add_enum(obj, "policy", "HostMemPolicy",
HostMemPolicy_lookup,
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL);
object_property_add(obj, "policy", "str",
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL, NULL, NULL);
}
MemoryRegion *
@@ -315,11 +320,9 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
assert(maxnode <= MAX_NODES);
if (mbind(ptr, sz, backend->policy,
maxnode ? backend->host_nodes : NULL, maxnode + 1, flags)) {
if (backend->policy != MPOL_DEFAULT || errno != ENOSYS) {
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
#endif
/* Preallocate memory after the NUMA policy has been instantiated.
@@ -332,26 +335,12 @@ host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
}
}
static bool
host_memory_backend_can_be_deleted(UserCreatable *uc, Error **errp)
{
MemoryRegion *mr;
mr = host_memory_backend_get_memory(MEMORY_BACKEND(uc), errp);
if (memory_region_is_mapped(mr)) {
return false;
} else {
return true;
}
}
static void
host_memory_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = host_memory_backend_memory_complete;
ucc->can_be_deleted = host_memory_backend_can_be_deleted;
}
static const TypeInfo host_memory_backend_info = {
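
The host_memory_backend_memory_complete() hunk applies the configured NUMA policy with mbind(2) and, on one side of the diff, tolerates ENOSYS when the policy is the default one (a kernel built without NUMA support). A hedged standalone sketch of that call pattern on Linux; it binds an anonymous mapping to host node 0 and assumes libnuma's <numaif.h> is available:

```c
/* Linux-only; needs the libnuma headers: compile with -lnuma. */
#include <errno.h>
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t sz = 16 * 1024 * 1024;
    void *ptr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    unsigned long nodemask = 1UL << 0;   /* bind to host node 0 */
    unsigned long maxnode = 2;           /* bits the kernel should read from the mask */
    int policy = MPOL_BIND;

    if (mbind(ptr, sz, policy, &nodemask, maxnode, 0)) {
        /* Like the backend code: ENOSYS with the default policy is harmless
         * (kernel without NUMA support); anything else is a real error. */
        if (policy != MPOL_DEFAULT || errno != ENOSYS) {
            fprintf(stderr, "mbind: %s\n", strerror(errno));
            munmap(ptr, sz);
            return 1;
        }
    }

    memset(ptr, 0, sz);                  /* touch the pages under the policy */
    munmap(ptr, sz);
    return 0;
}
```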


@@ -21,7 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <stdlib.h>
#include "qemu-common.h"
#include "sysemu/char.h"
#include "ui/console.h"
@@ -63,18 +63,11 @@ static void msmouse_chr_close (struct CharDriverState *chr)
g_free (chr);
}
static CharDriverState *qemu_chr_open_msmouse(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
CharDriverState *qemu_chr_open_msmouse(void)
{
ChardevCommon *common = backend->u.msmouse.data;
CharDriverState *chr;
chr = qemu_chr_alloc(common, errp);
if (!chr) {
return NULL;
}
chr = qemu_chr_alloc();
chr->chr_write = msmouse_chr_write;
chr->chr_close = msmouse_chr_close;
chr->explicit_be_open = true;
@@ -86,8 +79,7 @@ static CharDriverState *qemu_chr_open_msmouse(const char *id,
static void register_types(void)
{
register_char_driver("msmouse", CHARDEV_BACKEND_KIND_MSMOUSE, NULL,
qemu_chr_open_msmouse);
register_char_driver("msmouse", CHARDEV_BACKEND_KIND_MSMOUSE, NULL);
}
type_init(register_types);


@@ -10,10 +10,8 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "sysemu/char.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "hw/qdev.h" /* just for DEFINE_PROP_CHR */
@@ -26,12 +24,33 @@ typedef struct RngEgd
CharDriverState *chr;
char *chr_name;
GSList *requests;
} RngEgd;
static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
typedef struct RngRequest
{
EntropyReceiveFunc *receive_entropy;
uint8_t *data;
void *opaque;
size_t offset;
size_t size;
} RngRequest;
static void rng_egd_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngEgd *s = RNG_EGD(b);
size_t size = req->size;
RngRequest *req;
req = g_malloc(sizeof(*req));
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
while (size > 0) {
uint8_t header[2];
@@ -45,15 +64,24 @@ static void rng_egd_request_entropy(RngBackend *b, RngRequest *req)
size -= len;
}
s->requests = g_slist_append(s->requests, req);
}
static void rng_egd_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static int rng_egd_chr_can_read(void *opaque)
{
RngEgd *s = RNG_EGD(opaque);
RngRequest *req;
GSList *i;
int size = 0;
QSIMPLEQ_FOREACH(req, &s->parent.requests, next) {
for (i = s->requests; i; i = i->next) {
RngRequest *req = i->data;
size += req->size - req->offset;
}
@@ -65,8 +93,8 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
RngEgd *s = RNG_EGD(opaque);
size_t buf_offset = 0;
while (size > 0 && !QSIMPLEQ_EMPTY(&s->parent.requests)) {
RngRequest *req = QSIMPLEQ_FIRST(&s->parent.requests);
while (size > 0 && s->requests) {
RngRequest *req = s->requests->data;
int len = MIN(size, req->size - req->offset);
memcpy(req->data + req->offset, buf + buf_offset, len);
@@ -75,32 +103,56 @@ static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
size -= len;
if (req->offset == req->size) {
s->requests = g_slist_remove_link(s->requests, s->requests);
req->receive_entropy(req->opaque, req->data, req->size);
rng_backend_finalize_request(&s->parent, req);
rng_egd_free_request(req);
}
}
}
static void rng_egd_free_requests(RngEgd *s)
{
GSList *i;
for (i = s->requests; i; i = i->next) {
rng_egd_free_request(i->data);
}
g_slist_free(s->requests);
s->requests = NULL;
}
static void rng_egd_cancel_requests(RngBackend *b)
{
RngEgd *s = RNG_EGD(b);
/* We simply delete the list of pending requests. If there is data in the
* queue waiting to be read, this is okay, because there will always be
* more data than we requested originally
*/
rng_egd_free_requests(s);
}
static void rng_egd_opened(RngBackend *b, Error **errp)
{
RngEgd *s = RNG_EGD(b);
if (s->chr_name == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return;
}
s->chr = qemu_chr_find(s->chr_name);
if (s->chr == NULL) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
"Device '%s' not found", s->chr_name);
error_set(errp, QERR_DEVICE_NOT_FOUND, s->chr_name);
return;
}
if (qemu_chr_fe_claim(s->chr) != 0) {
error_setg(errp, QERR_DEVICE_IN_USE, s->chr_name);
error_set(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
@@ -115,7 +167,7 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
RngEgd *s = RNG_EGD(b);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
@@ -150,6 +202,8 @@ static void rng_egd_finalize(Object *obj)
}
g_free(s->chr_name);
rng_egd_free_requests(s);
}
static void rng_egd_class_init(ObjectClass *klass, void *data)
@@ -157,6 +211,7 @@ static void rng_egd_class_init(ObjectClass *klass, void *data)
RngBackendClass *rbc = RNG_BACKEND_CLASS(klass);
rbc->request_entropy = rng_egd_request_entropy;
rbc->cancel_requests = rng_egd_cancel_requests;
rbc->opened = rng_egd_opened;
}
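
Both sides of the rng-egd diff implement the same flow: entropy requests are queued, the character device's "can read" callback advertises how many bytes are still wanted, and the read callback fills the head request and retires it once complete. A self-contained GSList model of that bookkeeping, close in shape to the GSList-based side of the diff (structure and function names are illustrative):

```c
/* Standalone model of the egd-style request queue; compile with
 *   gcc demo.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    guint8 *data;
    gsize   size;
    gsize   offset;
} RngRequest;

static GSList *requests;

static void request_entropy(gsize size)
{
    RngRequest *req = g_new0(RngRequest, 1);
    req->size = size;
    req->data = g_malloc(size);
    requests = g_slist_append(requests, req);
}

/* How many bytes the backend still wants (the chr "can read" callback). */
static gsize can_read(void)
{
    gsize total = 0;
    for (GSList *i = requests; i; i = i->next) {
        RngRequest *req = i->data;
        total += req->size - req->offset;
    }
    return total;
}

/* Feed bytes into the head request; retire it when it is full. */
static void chr_read(const guint8 *buf, gsize len)
{
    gsize buf_offset = 0;
    while (len > 0 && requests) {
        RngRequest *req = requests->data;
        gsize n = MIN(len, req->size - req->offset);
        memcpy(req->data + req->offset, buf + buf_offset, n);
        req->offset += n;
        buf_offset += n;
        len -= n;
        if (req->offset == req->size) {
            printf("request of %lu bytes complete\n", (unsigned long)req->size);
            requests = g_slist_delete_link(requests, requests); /* frees the link */
            g_free(req->data);
            g_free(req);
        }
    }
}

int main(void)
{
    request_entropy(4);
    request_entropy(8);
    printf("still wanted: %lu bytes\n", (unsigned long)can_read());
    const guint8 junk[] = "abcdef";
    chr_read(junk, sizeof(junk) - 1);   /* completes the 4-byte request */
    printf("still wanted: %lu bytes\n", (unsigned long)can_read());
    return 0;
}
```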


@@ -10,19 +10,21 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng-random.h"
#include "sysemu/rng.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/main-loop.h"
struct RngRandom
struct RndRandom
{
RngBackend parent;
int fd;
char *filename;
EntropyReceiveFunc *receive_func;
void *opaque;
size_t size;
};
/**
@@ -34,45 +36,46 @@ struct RngRandom
static void entropy_available(void *opaque)
{
RngRandom *s = RNG_RANDOM(opaque);
RndRandom *s = RNG_RANDOM(opaque);
uint8_t buffer[s->size];
ssize_t len;
while (!QSIMPLEQ_EMPTY(&s->parent.requests)) {
RngRequest *req = QSIMPLEQ_FIRST(&s->parent.requests);
ssize_t len;
len = read(s->fd, req->data, req->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
req->receive_entropy(req->opaque, req->data, len);
rng_backend_finalize_request(&s->parent, req);
len = read(s->fd, buffer, s->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
s->receive_func(s->opaque, buffer, len);
s->receive_func = NULL;
/* We've drained all requests, the fd handler can be reset. */
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
}
static void rng_random_request_entropy(RngBackend *b, RngRequest *req)
static void rng_random_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngRandom *s = RNG_RANDOM(b);
RndRandom *s = RNG_RANDOM(b);
if (QSIMPLEQ_EMPTY(&s->parent.requests)) {
/* If there are no pending requests yet, we need to
* install our fd handler. */
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
if (s->receive_func) {
s->receive_func(s->opaque, NULL, 0);
}
s->receive_func = receive_entropy;
s->opaque = opaque;
s->size = size;
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
}
static void rng_random_opened(RngBackend *b, Error **errp)
{
RngRandom *s = RNG_RANDOM(b);
RndRandom *s = RNG_RANDOM(b);
if (s->filename == NULL) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
@@ -83,19 +86,23 @@ static void rng_random_opened(RngBackend *b, Error **errp)
static char *rng_random_get_filename(Object *obj, Error **errp)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
return g_strdup(s->filename);
if (s->filename) {
return g_strdup(s->filename);
}
return NULL;
}
static void rng_random_set_filename(Object *obj, const char *filename,
Error **errp)
{
RngBackend *b = RNG_BACKEND(obj);
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
if (b->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
@@ -105,7 +112,7 @@ static void rng_random_set_filename(Object *obj, const char *filename,
static void rng_random_init(Object *obj)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
object_property_add_str(obj, "filename",
rng_random_get_filename,
@@ -118,7 +125,7 @@ static void rng_random_init(Object *obj)
static void rng_random_finalize(Object *obj)
{
RngRandom *s = RNG_RANDOM(obj);
RndRandom *s = RNG_RANDOM(obj);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
@@ -139,7 +146,7 @@ static void rng_random_class_init(ObjectClass *klass, void *data)
static const TypeInfo rng_random_info = {
.name = TYPE_RNG_RANDOM,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RngRandom),
.instance_size = sizeof(RndRandom),
.class_init = rng_random_class_init,
.instance_init = rng_random_init,
.instance_finalize = rng_random_finalize,
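
The rng-random hunks differ in who tracks pending work (a single stashed receive function versus the parent's request queue), but the underlying pattern is unchanged: open the randomness source non-blocking, watch the fd only while a request is outstanding, retry on EAGAIN, and stop watching once everything is drained. A minimal poll(2) sketch of that loop against /dev/urandom:

```c
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/urandom", O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned char buf[16];
    size_t wanted = sizeof(buf), got = 0;

    /* "fd handler installed": poll only while a request is outstanding. */
    while (got < wanted) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, -1) < 0) {
            perror("poll");
            break;
        }
        ssize_t len = read(fd, buf + got, wanted - got);
        if (len < 0 && errno == EAGAIN) {
            continue;               /* not ready yet, keep the handler */
        }
        if (len < 0) {
            perror("read");
            break;
        }
        got += (size_t)len;
    }
    /* "requests drained": the real backend resets the fd handler here. */

    printf("collected %zu random bytes\n", got);
    close(fd);
    return 0;
}
```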


@@ -10,9 +10,7 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "sysemu/rng.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
@@ -21,20 +19,18 @@ void rng_backend_request_entropy(RngBackend *s, size_t size,
void *opaque)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
RngRequest *req;
if (k->request_entropy) {
req = g_malloc(sizeof(*req));
k->request_entropy(s, size, receive_entropy, opaque);
}
}
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
void rng_backend_cancel_requests(RngBackend *s)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
k->request_entropy(s, req);
QSIMPLEQ_INSERT_TAIL(&s->requests, req, next);
if (k->cancel_requests) {
k->cancel_requests(s);
}
}
@@ -61,7 +57,7 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
@@ -76,48 +72,14 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
s->opened = true;
}
static void rng_backend_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static void rng_backend_free_requests(RngBackend *s)
{
RngRequest *req, *next;
QSIMPLEQ_FOREACH_SAFE(req, &s->requests, next, next) {
rng_backend_free_request(req);
}
QSIMPLEQ_INIT(&s->requests);
}
void rng_backend_finalize_request(RngBackend *s, RngRequest *req)
{
QSIMPLEQ_REMOVE(&s->requests, req, RngRequest, next);
rng_backend_free_request(req);
}
static void rng_backend_init(Object *obj)
{
RngBackend *s = RNG_BACKEND(obj);
QSIMPLEQ_INIT(&s->requests);
object_property_add_bool(obj, "opened",
rng_backend_prop_get_opened,
rng_backend_prop_set_opened,
NULL);
}
static void rng_backend_finalize(Object *obj)
{
RngBackend *s = RNG_BACKEND(obj);
rng_backend_free_requests(s);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
@@ -130,7 +92,6 @@ static const TypeInfo rng_backend_info = {
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.instance_finalize = rng_backend_finalize,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,
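
The rng.c hunks move request bookkeeping into the common backend: rng_backend_request_entropy() allocates an RngRequest and appends it to the backend's tail queue, and rng_backend_finalize_request() unlinks and frees it once a child backend has filled it. A standalone sketch of that owner-side bookkeeping using a hand-rolled tail queue (QEMU itself uses the QSIMPLEQ macros from qemu/queue.h; all names below are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Request {
    size_t size;
    unsigned char *data;
    struct Request *next;
} Request;

typedef struct Backend {
    Request *head, **tail;      /* simple singly linked tail queue */
} Backend;

static void backend_init(Backend *b)
{
    b->head = NULL;
    b->tail = &b->head;
}

/* Base class: allocate and queue; the child backend only fills the data in. */
static Request *backend_request_entropy(Backend *b, size_t size)
{
    Request *req = calloc(1, sizeof(*req));
    req->size = size;
    req->data = malloc(size);
    *b->tail = req;
    b->tail = &req->next;
    return req;
}

/* Base class: unlink a finished request and release its memory. */
static void backend_finalize_request(Backend *b, Request *req)
{
    for (Request **p = &b->head; *p; p = &(*p)->next) {
        if (*p == req) {
            *p = req->next;
            if (b->tail == &req->next) {
                b->tail = p;
            }
            free(req->data);
            free(req);
            return;
        }
    }
}

/* Base class teardown: drop anything still pending. */
static void backend_finalize(Backend *b)
{
    while (b->head) {
        backend_finalize_request(b, b->head);
    }
}

int main(void)
{
    Backend b;
    backend_init(&b);
    Request *r1 = backend_request_entropy(&b, 16);
    backend_request_entropy(&b, 32);
    memset(r1->data, 0xaa, r1->size);       /* child backend "fills" it */
    backend_finalize_request(&b, r1);       /* consumer is done with it */
    backend_finalize(&b);                   /* object teardown frees the rest */
    printf("all requests released\n");
    return 0;
}
```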


@@ -23,7 +23,6 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "sysemu/char.h"
@@ -109,16 +108,13 @@ static void testdev_close(struct CharDriverState *chr)
g_free(testdev);
}
static CharDriverState *chr_testdev_init(const char *id,
ChardevBackend *backend,
ChardevReturn *ret,
Error **errp)
CharDriverState *chr_testdev_init(void)
{
TestdevCharState *testdev;
CharDriverState *chr;
testdev = g_new0(TestdevCharState, 1);
testdev->chr = chr = g_new0(CharDriverState, 1);
testdev = g_malloc0(sizeof(TestdevCharState));
testdev->chr = chr = g_malloc0(sizeof(CharDriverState));
chr->opaque = testdev;
chr->chr_write = testdev_write;
@@ -129,8 +125,7 @@ static CharDriverState *chr_testdev_init(const char *id,
static void register_types(void)
{
register_char_driver("testdev", CHARDEV_BACKEND_KIND_TESTDEV, NULL,
chr_testdev_init);
register_char_driver("testdev", CHARDEV_BACKEND_KIND_TESTDEV, NULL);
}
type_init(register_types);
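
The testdev hunk swaps g_malloc0(sizeof(TestdevCharState)) for g_new0(TestdevCharState, 1). Both return the same zero-filled block; g_new0 simply keeps the element type in the expression, so a mismatched sizeof cannot creep in. A tiny illustration:

```c
/* Compile with: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

typedef struct {
    char *name;
    int   count;
} TestState;

int main(void)
{
    /* Equivalent zero-initialised allocations ... */
    TestState *a = g_malloc0(sizeof(TestState));
    /* ... but g_new0 ties the size to the pointer's type for us. */
    TestState *b = g_new0(TestState, 1);

    printf("a->count=%d b->count=%d\n", a->count, b->count);
    g_free(a);
    g_free(b);
    return 0;
}
```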


@@ -12,9 +12,7 @@
* Based on backends/rng.c by Anthony Liguori
*/
#include "qemu/osdep.h"
#include "sysemu/tpm_backend.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/tpm.h"
#include "qemu/thread.h"
@@ -38,7 +36,7 @@ void tpm_backend_destroy(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->destroy(s);
return k->ops->destroy(s);
}
int tpm_backend_init(TPMBackend *s, TPMState *state,
@@ -98,20 +96,6 @@ bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
return k->ops->get_tpm_established_flag(s);
}
int tpm_backend_reset_tpm_established_flag(TPMBackend *s, uint8_t locty)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->reset_tpm_established_flag(s, locty);
}
TPMVersion tpm_backend_get_tpm_version(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->get_tpm_version(s);
}
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
@@ -135,7 +119,7 @@ static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (!value && s->opened) {
error_setg(errp, QERR_PERMISSION_DENIED);
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
@@ -181,6 +165,17 @@ void tpm_backend_thread_end(TPMBackendThread *tbt)
}
}
void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tpm_backend_thread_create(tbt, func, user_data);
} else {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_TPM_RESET,
NULL);
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,
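
One side of the tpm_backend hunk adds tpm_backend_thread_tpm_reset(), which either creates the worker thread pool on first use or pushes a reset command onto it. A minimal GThreadPool sketch of that lazily created, command-queue style worker; the command names and helper below are illustrative, not QEMU's:

```c
/* Compile with: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

enum { CMD_PROCESS = 1, CMD_RESET = 2 };   /* nonzero: pool data must not be NULL */

static void worker(gpointer data, gpointer user_data)
{
    printf("worker got command %d\n", GPOINTER_TO_INT(data));
}

static GThreadPool *pool;

/* Lazily create a single-thread pool, then just push commands onto it. */
static void backend_thread_cmd(int cmd)
{
    if (!pool) {
        pool = g_thread_pool_new(worker, NULL, 1, TRUE, NULL);
    }
    g_thread_pool_push(pool, GINT_TO_POINTER(cmd), NULL);
}

int main(void)
{
    backend_thread_cmd(CMD_PROCESS);
    backend_thread_cmd(CMD_RESET);
    g_thread_pool_free(pool, FALSE, TRUE);   /* drain queued work, then free */
    return 0;
}
```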


@@ -24,45 +24,17 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "monitor/monitor.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
static void *balloon_opaque;
static bool balloon_inhibited;
bool qemu_balloon_is_inhibited(void)
{
return balloon_inhibited;
}
void qemu_balloon_inhibit(bool state)
{
balloon_inhibited = state;
}
static bool have_balloon(Error **errp)
{
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, ERROR_CLASS_KVM_MISSING_CAP,
"Using KVM without synchronous MMU, balloon unavailable");
return false;
}
if (!balloon_event_fn) {
error_set(errp, ERROR_CLASS_DEVICE_NOT_ACTIVE,
"No balloon device has been activated");
return false;
}
return true;
}
int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque)
@@ -71,6 +43,7 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
/* We're already registered one balloon handler. How many can
* a guest really have?
*/
error_report("Another balloon device already registered");
return -1;
}
balloon_event_fn = event_func;
@@ -89,30 +62,58 @@ void qemu_remove_balloon_handler(void *opaque)
balloon_opaque = NULL;
}
static int qemu_balloon(ram_addr_t target)
{
if (!balloon_event_fn) {
return 0;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
return 1;
}
static int qemu_balloon_status(BalloonInfo *info)
{
if (!balloon_stat_fn) {
return 0;
}
balloon_stat_fn(balloon_opaque, info);
return 1;
}
BalloonInfo *qmp_query_balloon(Error **errp)
{
BalloonInfo *info;
if (!have_balloon(errp)) {
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return NULL;
}
info = g_malloc0(sizeof(*info));
balloon_stat_fn(balloon_opaque, info);
if (qemu_balloon_status(info) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
qapi_free_BalloonInfo(info);
return NULL;
}
return info;
}
void qmp_balloon(int64_t target, Error **errp)
void qmp_balloon(int64_t value, Error **errp)
{
if (!have_balloon(errp)) {
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return;
}
if (target <= 0) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
if (value <= 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
if (qemu_balloon(value) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
}
}
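
The balloon hunks differ mainly in where the preconditions live: one side repeats the KVM synchronous-MMU and "no balloon registered" checks in each QMP handler, the other folds them into a have_balloon() helper that sets the error once and lets qmp_balloon()/qmp_query_balloon() stay short. A standalone sketch of that validate-once-then-act shape, using a plain string in place of QEMU's Error:

```c
#include <stdbool.h>
#include <stdio.h>

typedef void BalloonEvent(void *opaque, long long target);

static BalloonEvent *balloon_event_fn;   /* set when a balloon device registers */
static void *balloon_opaque;

static bool have_balloon(const char **err)
{
    if (!balloon_event_fn) {
        *err = "No balloon device has been activated";
        return false;
    }
    return true;
}

static void qmp_balloon(long long target, const char **err)
{
    if (!have_balloon(err)) {
        return;
    }
    if (target <= 0) {
        *err = "Parameter 'target' expects a size";
        return;
    }
    balloon_event_fn(balloon_opaque, target);
}

static void my_balloon_device(void *opaque, long long target)
{
    printf("ballooning guest to %lld bytes\n", target);
}

int main(void)
{
    const char *err = NULL;
    qmp_balloon(1 << 30, &err);           /* fails: nothing registered yet */
    if (err) {
        printf("error: %s\n", err);
    }

    balloon_event_fn = my_balloon_device; /* a device registers its handler */
    err = NULL;
    qmp_balloon(1 << 30, &err);           /* now reaches the device */
    return 0;
}
```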


@@ -13,20 +13,17 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "hw/hw.h"
#include "qemu/cutils.h"
#include "qemu/queue.h"
#include "qemu/timer.h"
#include "migration/block.h"
#include "migration/migration.h"
#include "sysemu/blockdev.h"
#include "sysemu/block-backend.h"
#include <assert.h>
#define BLOCK_SIZE (1 << 20)
#define BDRV_SECTORS_PER_DIRTY_CHUNK (BLOCK_SIZE >> BDRV_SECTOR_BITS)
@@ -38,8 +35,6 @@
#define MAX_IS_ALLOCATED_SEARCH 65536
#define MAX_INFLIGHT_IO 512
//#define DEBUG_BLK_MIGRATION
#ifdef DEBUG_BLK_MIGRATION
@@ -56,25 +51,17 @@ typedef struct BlkMigDevState {
int shared_base;
int64_t total_sectors;
QSIMPLEQ_ENTRY(BlkMigDevState) entry;
Error *blocker;
/* Only used by migration thread. Does not need a lock. */
int bulk_completed;
int64_t cur_sector;
int64_t cur_dirty;
/* Data in the aio_bitmap is protected by block migration lock.
* Allocation and free happen during setup and cleanup respectively.
*/
unsigned long *aio_bitmap;
/* Protected by block migration lock. */
unsigned long *aio_bitmap;
int64_t completed_sectors;
/* During migration this is protected by iothread lock / AioContext.
* Allocation and free happen during setup and cleanup respectively.
*/
BdrvDirtyBitmap *dirty_bitmap;
Error *blocker;
} BlkMigDevState;
typedef struct BlkMigBlock {
@@ -110,7 +97,7 @@ typedef struct BlkMigState {
int prev_progress;
int bulk_completed;
/* Lock must be taken _inside_ the iothread lock and any AioContexts. */
/* Lock must be taken _inside_ the iothread lock. */
QemuMutex lock;
} BlkMigState;
@@ -274,13 +261,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
if (bmds->shared_base) {
qemu_mutex_lock_iothread();
aio_context_acquire(bdrv_get_aio_context(bs));
while (cur_sector < total_sectors &&
!bdrv_is_allocated(bs, cur_sector, MAX_IS_ALLOCATED_SEARCH,
&nr_sectors)) {
cur_sector += nr_sectors;
}
aio_context_release(bdrv_get_aio_context(bs));
qemu_mutex_unlock_iothread();
}
@@ -314,21 +299,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
block_mig_state.submitted++;
blk_mig_unlock();
/* We do not know if bs is under the main thread (and thus does
* not acquire the AioContext when doing AIO) or rather under
* dataplane. Thus acquire both the iothread mutex and the
* AioContext.
*
* This is ugly and will disappear when we make bdrv_* thread-safe,
* without the need to acquire the AioContext.
*/
qemu_mutex_lock_iothread();
aio_context_acquire(bdrv_get_aio_context(bmds->bs));
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, cur_sector, nr_sectors);
aio_context_release(bdrv_get_aio_context(bmds->bs));
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
qemu_mutex_unlock_iothread();
bmds->cur_sector = cur_sector + nr_sectors;
@@ -343,10 +318,8 @@ static int set_dirty_tracking(void)
int ret;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(bdrv_get_aio_context(bmds->bs));
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE,
NULL, NULL);
aio_context_release(bdrv_get_aio_context(bmds->bs));
NULL);
if (!bmds->dirty_bitmap) {
ret = -errno;
goto fail;
@@ -357,24 +330,18 @@ static int set_dirty_tracking(void)
fail:
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->dirty_bitmap) {
aio_context_acquire(bdrv_get_aio_context(bmds->bs));
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
aio_context_release(bdrv_get_aio_context(bmds->bs));
}
}
return ret;
}
/* Called with iothread lock taken. */
static void unset_dirty_tracking(void)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(bdrv_get_aio_context(bmds->bs));
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
aio_context_release(bdrv_get_aio_context(bmds->bs));
}
}
@@ -383,7 +350,6 @@ static void init_blk_migration(QEMUFile *f)
BlockDriverState *bs;
BlkMigDevState *bmds;
int64_t sectors;
BdrvNextIterator it;
block_mig_state.submitted = 0;
block_mig_state.read_done = 0;
@@ -393,8 +359,7 @@ static void init_blk_migration(QEMUFile *f)
block_mig_state.bulk_completed = 0;
block_mig_state.zero_blocks = migrate_zero_blocks();
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
for (bs = bdrv_next(NULL); bs; bs = bdrv_next(bs)) {
if (bdrv_is_read_only(bs)) {
continue;
}
@@ -476,7 +441,7 @@ static void blk_mig_reset_dirty_cursor(void)
}
}
/* Called with iothread lock and AioContext taken. */
/* Called with iothread lock taken. */
static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int is_async)
@@ -491,7 +456,7 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
blk_mig_lock();
if (bmds_aio_inflight(bmds, sector)) {
blk_mig_unlock();
bdrv_drain(bmds->bs);
bdrv_drain_all();
} else {
blk_mig_unlock();
}
@@ -531,7 +496,7 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
g_free(blk);
}
bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, sector, nr_sectors);
bdrv_reset_dirty(bmds->bs, sector, nr_sectors);
break;
}
sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
@@ -559,9 +524,7 @@ static int blk_mig_save_dirty_block(QEMUFile *f, int is_async)
int ret = 1;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(bdrv_get_aio_context(bmds->bs));
ret = mig_save_device_dirty(f, bmds, is_async);
aio_context_release(bdrv_get_aio_context(bmds->bs));
if (ret <= 0) {
break;
}
@@ -619,9 +582,7 @@ static int64_t get_remaining_dirty(void)
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
aio_context_acquire(bdrv_get_aio_context(bmds->bs));
dirty += bdrv_get_dirty_count(bmds->dirty_bitmap);
aio_context_release(bdrv_get_aio_context(bmds->bs));
dirty += bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap);
}
return dirty << BDRV_SECTOR_BITS;
@@ -629,32 +590,25 @@ static int64_t get_remaining_dirty(void)
/* Called with iothread lock taken. */
static void block_migration_cleanup(void *opaque)
static void blk_mig_cleanup(void)
{
BlkMigDevState *bmds;
BlkMigBlock *blk;
AioContext *ctx;
bdrv_drain_all();
unset_dirty_tracking();
blk_mig_lock();
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_op_unblock_all(bmds->bs, bmds->blocker);
error_free(bmds->blocker);
/* Save ctx, because bmds->bs can disappear during bdrv_unref. */
ctx = bdrv_get_aio_context(bmds->bs);
aio_context_acquire(ctx);
bdrv_unref(bmds->bs);
aio_context_release(ctx);
g_free(bmds->aio_bitmap);
g_free(bmds);
}
blk_mig_lock();
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
@@ -663,6 +617,11 @@ static void block_migration_cleanup(void *opaque)
blk_mig_unlock();
}
static void block_migration_cancel(void *opaque)
{
blk_mig_cleanup();
}
static int block_save_setup(QEMUFile *f, void *opaque)
{
int ret;
@@ -676,12 +635,13 @@ static int block_save_setup(QEMUFile *f, void *opaque)
/* start track dirty blocks */
ret = set_dirty_tracking();
qemu_mutex_unlock_iothread();
if (ret) {
qemu_mutex_unlock_iothread();
return ret;
}
qemu_mutex_unlock_iothread();
ret = flush_blks(f);
blk_mig_reset_dirty_cursor();
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
@@ -709,10 +669,7 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
blk_mig_lock();
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f) &&
(block_mig_state.submitted +
block_mig_state.read_done) <
MAX_INFLIGHT_IO) {
qemu_file_get_rate_limit(f)) {
blk_mig_unlock();
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
@@ -792,33 +749,30 @@ static int block_save_complete(QEMUFile *f, void *opaque)
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
blk_mig_cleanup();
return 0;
}
static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
uint64_t *non_postcopiable_pending,
uint64_t *postcopiable_pending)
static uint64_t block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
{
/* Estimate pending number of bytes to send */
uint64_t pending;
qemu_mutex_lock_iothread();
pending = get_remaining_dirty();
qemu_mutex_unlock_iothread();
blk_mig_lock();
pending += block_mig_state.submitted * BLOCK_SIZE +
block_mig_state.read_done * BLOCK_SIZE;
blk_mig_unlock();
pending = get_remaining_dirty() +
block_mig_state.submitted * BLOCK_SIZE +
block_mig_state.read_done * BLOCK_SIZE;
/* Report at least one block pending during bulk phase */
if (pending <= max_size && !block_mig_state.bulk_completed) {
pending = max_size + BLOCK_SIZE;
}
blk_mig_unlock();
qemu_mutex_unlock_iothread();
DPRINTF("Enter save live pending %" PRIu64 "\n", pending);
/* We don't do postcopy */
*non_postcopiable_pending += pending;
return pending;
}
static int block_load(QEMUFile *f, void *opaque, int version_id)
@@ -828,8 +782,6 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
char device_name[256];
int64_t addr;
BlockDriverState *bs, *bs_prev = NULL;
BlockBackend *blk;
Error *local_err = NULL;
uint8_t *buf;
int64_t total_sectors = 0;
int nr_sectors;
@@ -847,15 +799,9 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
qemu_get_buffer(f, (uint8_t *)device_name, len);
device_name[len] = '\0';
blk = blk_by_name(device_name);
if (!blk) {
fprintf(stderr, "Error unknown block device %s\n",
device_name);
return -EINVAL;
}
bs = blk_bs(blk);
bs = bdrv_find(device_name);
if (!bs) {
fprintf(stderr, "Block device %s has no medium\n",
fprintf(stderr, "Error unknown block device %s\n",
device_name);
return -EINVAL;
}
@@ -868,12 +814,6 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
device_name);
return -EINVAL;
}
bdrv_invalidate_cache(bs, &local_err);
if (local_err) {
error_report_err(local_err);
return -EINVAL;
}
}
if (total_sectors - addr < BDRV_SECTORS_PER_DIRTY_CHUNK) {
@@ -934,10 +874,10 @@ static SaveVMHandlers savevm_block_handlers = {
.set_params = block_set_params,
.save_live_setup = block_save_setup,
.save_live_iterate = block_save_iterate,
.save_live_complete_precopy = block_save_complete,
.save_live_complete = block_save_complete,
.save_live_pending = block_save_pending,
.load_state = block_load,
.cleanup = block_migration_cleanup,
.cancel = block_migration_cancel,
.is_active = block_is_active,
};
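
Several of the block-migration hunks revolve around the same arithmetic: dirty tracking happens in BLOCK_SIZE (1 MiB) chunks, so BDRV_SECTORS_PER_DIRTY_CHUNK is BLOCK_SIZE >> BDRV_SECTOR_BITS = 2048 sectors of 512 bytes; the pending estimate is the remaining dirty bytes plus one BLOCK_SIZE buffer per submitted or read-but-unsent request; and at least one block is reported while the bulk phase is still running so the migration loop keeps calling back. A compile-and-run sketch of that estimate (inputs are passed in explicitly rather than read from global state):

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#define BDRV_SECTOR_BITS 9
#define BLOCK_SIZE (1 << 20)                                        /* 1 MiB chunk */
#define SECTORS_PER_DIRTY_CHUNK (BLOCK_SIZE >> BDRV_SECTOR_BITS)    /* 2048 */

/* Rough mirror of the save_live_pending estimate, with inputs made explicit. */
static uint64_t save_pending(uint64_t dirty_bytes,
                             uint64_t submitted, uint64_t read_done,
                             uint64_t max_size, bool bulk_completed)
{
    uint64_t pending = dirty_bytes + (submitted + read_done) * BLOCK_SIZE;

    /* Report at least one block while the bulk phase is unfinished, so the
     * migration loop keeps giving the block code a chance to run. */
    if (pending <= max_size && !bulk_completed) {
        pending = max_size + BLOCK_SIZE;
    }
    return pending;
}

int main(void)
{
    printf("sectors per dirty chunk: %d\n", SECTORS_PER_DIRTY_CHUNK);
    printf("pending, 10 MiB dirty + 3 in flight: %" PRIu64 " bytes\n",
           save_pending(10ULL * BLOCK_SIZE, 2, 1, 32ULL * BLOCK_SIZE, true));
    printf("pending, nothing dirty but bulk unfinished: %" PRIu64 " bytes\n",
           save_pending(0, 0, 0, 32ULL * BLOCK_SIZE, false));
    return 0;
}
```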

block.c (4594 changes): file diff suppressed because it is too large.


@@ -1,16 +1,15 @@
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o bochs.o vpc.o vvfat.o
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-y += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o blkreplay.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += block-backend.o snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += null.o mirror.o io.o
block-obj-y += throttle-groups.o
block-obj-y += null.o mirror.o
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
@@ -20,10 +19,7 @@ block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
block-obj-$(CONFIG_ARCHIPELAGO) += archipelago.o
block-obj-$(CONFIG_LIBSSH2) += ssh.o
block-obj-y += accounting.o dirty-bitmap.o
block-obj-y += write-threshold.o
block-obj-y += crypto.o
block-obj-y += accounting.o
common-obj-y += stream.o
common-obj-y += commit.o
@@ -40,7 +36,5 @@ gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
archipelago.o-libs := $(ARCHIPELAGO_LIBS)
block-obj-m += dmg.o
dmg.o-libs := $(BZIP2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio


@@ -2,7 +2,6 @@
* QEMU System Emulator block accounting
*
* Copyright (c) 2011 Christoph Hellwig
* Copyright (c) 2015 Igalia, S.L.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -23,58 +22,8 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/accounting.h"
#include "block/block_int.h"
#include "qemu/timer.h"
#include "sysemu/qtest.h"
static QEMUClockType clock_type = QEMU_CLOCK_REALTIME;
static const int qtest_latency_ns = NANOSECONDS_PER_SECOND / 1000;
void block_acct_init(BlockAcctStats *stats, bool account_invalid,
bool account_failed)
{
stats->account_invalid = account_invalid;
stats->account_failed = account_failed;
if (qtest_enabled()) {
clock_type = QEMU_CLOCK_VIRTUAL;
}
}
void block_acct_cleanup(BlockAcctStats *stats)
{
BlockAcctTimedStats *s, *next;
QSLIST_FOREACH_SAFE(s, &stats->intervals, entries, next) {
g_free(s);
}
}
void block_acct_add_interval(BlockAcctStats *stats, unsigned interval_length)
{
BlockAcctTimedStats *s;
unsigned i;
s = g_new0(BlockAcctTimedStats, 1);
s->interval_length = interval_length;
QSLIST_INSERT_HEAD(&stats->intervals, s, entries);
for (i = 0; i < BLOCK_MAX_IOTYPE; i++) {
timed_average_init(&s->latency[i], clock_type,
(uint64_t) interval_length * NANOSECONDS_PER_SECOND);
}
}
BlockAcctTimedStats *block_acct_interval_next(BlockAcctStats *stats,
BlockAcctTimedStats *s)
{
if (s == NULL) {
return QSLIST_FIRST(&stats->intervals);
} else {
return QSLIST_NEXT(s, entries);
}
}
void block_acct_start(BlockAcctStats *stats, BlockAcctCookie *cookie,
int64_t bytes, enum BlockAcctType type)
@@ -82,92 +31,24 @@ void block_acct_start(BlockAcctStats *stats, BlockAcctCookie *cookie,
assert(type < BLOCK_MAX_IOTYPE);
cookie->bytes = bytes;
cookie->start_time_ns = qemu_clock_get_ns(clock_type);
cookie->start_time_ns = get_clock();
cookie->type = type;
}
void block_acct_done(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
int64_t latency_ns = time_ns - cookie->start_time_ns;
if (qtest_enabled()) {
latency_ns = qtest_latency_ns;
}
assert(cookie->type < BLOCK_MAX_IOTYPE);
stats->nr_bytes[cookie->type] += cookie->bytes;
stats->nr_ops[cookie->type]++;
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
stats->total_time_ns[cookie->type] += get_clock() - cookie->start_time_ns;
}
QSLIST_FOREACH(s, &stats->intervals, entries) {
timed_average_account(&s->latency[cookie->type], latency_ns);
void block_acct_highest_sector(BlockAcctStats *stats, int64_t sector_num,
unsigned int nb_sectors)
{
if (stats->wr_highest_sector < sector_num + nb_sectors - 1) {
stats->wr_highest_sector = sector_num + nb_sectors - 1;
}
}
void block_acct_failed(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
assert(cookie->type < BLOCK_MAX_IOTYPE);
stats->failed_ops[cookie->type]++;
if (stats->account_failed) {
BlockAcctTimedStats *s;
int64_t time_ns = qemu_clock_get_ns(clock_type);
int64_t latency_ns = time_ns - cookie->start_time_ns;
if (qtest_enabled()) {
latency_ns = qtest_latency_ns;
}
stats->total_time_ns[cookie->type] += latency_ns;
stats->last_access_time_ns = time_ns;
QSLIST_FOREACH(s, &stats->intervals, entries) {
timed_average_account(&s->latency[cookie->type], latency_ns);
}
}
}
void block_acct_invalid(BlockAcctStats *stats, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
/* block_acct_done() and block_acct_failed() update
* total_time_ns[], but this one does not. The reason is that
* invalid requests are accounted during their submission,
* therefore there's no actual I/O involved. */
stats->invalid_ops[type]++;
if (stats->account_invalid) {
stats->last_access_time_ns = qemu_clock_get_ns(clock_type);
}
}
void block_acct_merge_done(BlockAcctStats *stats, enum BlockAcctType type,
int num_requests)
{
assert(type < BLOCK_MAX_IOTYPE);
stats->merged[type] += num_requests;
}
int64_t block_acct_idle_time_ns(BlockAcctStats *stats)
{
return qemu_clock_get_ns(clock_type) - stats->last_access_time_ns;
}
double block_acct_queue_depth(BlockAcctTimedStats *stats,
enum BlockAcctType type)
{
uint64_t sum, elapsed;
assert(type < BLOCK_MAX_IOTYPE);
sum = timed_average_sum(&stats->latency[type], &elapsed);
return (double) sum / elapsed;
}
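
The accounting hunks boil down to a start/done cookie pattern: block_acct_start() records the request's byte count and a start timestamp, block_acct_done() adds the bytes, bumps the op count and accumulates the latency, and queue depth is total busy time divided by elapsed wall-clock time. A self-contained sketch of that bookkeeping with a monotonic clock; it averages over the whole run instead of the sliding intervals the real code keeps, and all names are illustrative:

```c
#include <inttypes.h>
#include <stdio.h>
#include <time.h>

typedef struct {
    uint64_t nr_bytes, nr_ops, total_time_ns;
    uint64_t first_access_ns, last_access_ns;
} AcctStats;

typedef struct {
    uint64_t bytes;
    uint64_t start_ns;
} AcctCookie;

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void acct_start(AcctCookie *c, uint64_t bytes)
{
    c->bytes = bytes;
    c->start_ns = now_ns();
}

static void acct_done(AcctStats *s, const AcctCookie *c)
{
    uint64_t t = now_ns();
    s->nr_bytes += c->bytes;
    s->nr_ops++;
    s->total_time_ns += t - c->start_ns;      /* per-request latency */
    if (!s->first_access_ns) {
        s->first_access_ns = c->start_ns;
    }
    s->last_access_ns = t;
}

/* Average number of requests in flight = busy time / elapsed time. */
static double queue_depth(const AcctStats *s)
{
    uint64_t elapsed = s->last_access_ns - s->first_access_ns;
    return elapsed ? (double)s->total_time_ns / elapsed : 0.0;
}

int main(void)
{
    AcctStats stats = {0};
    for (int i = 0; i < 3; i++) {
        AcctCookie c;
        acct_start(&c, 4096);
        /* pretend to do 1 ms of I/O */
        nanosleep(&(struct timespec){ .tv_nsec = 1000000 }, NULL);
        acct_done(&stats, &c);
    }
    printf("%" PRIu64 " ops, %" PRIu64 " bytes, avg latency %.2f ms, depth %.2f\n",
           stats.nr_ops, stats.nr_bytes,
           stats.total_time_ns / 1e6 / stats.nr_ops, queue_depth(&stats));
    return 0;
}
```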


@@ -50,8 +50,7 @@
*
*/
#include "qemu/osdep.h"
#include "qemu/cutils.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/error-report.h"
#include "qemu/thread.h"
@@ -60,6 +59,7 @@
#include "qapi/qmp/qjson.h"
#include "qemu/atomic.h"
#include <inttypes.h>
#include <xseg/xseg.h>
#include <xseg/protocol.h>
@@ -291,7 +291,7 @@ static int qemu_archipelago_init(BDRVArchipelagoState *s)
ret = qemu_archipelago_xseg_init(s);
if (ret < 0) {
error_report("Cannot initialize XSEG. Aborting...");
error_report("Cannot initialize XSEG. Aborting...\n");
goto err_exit;
}
@@ -645,7 +645,7 @@ static int qemu_archipelago_create_volume(Error **errp, const char *volname,
target = xseg_get_target(xseg, req);
if (!target) {
error_setg(errp, "Cannot get XSEG target.");
error_setg(errp, "Cannot get XSEG target.\n");
goto err_exit;
}
memcpy(target, volname, targetlen);
@@ -889,7 +889,7 @@ static BlockAIOCB *qemu_archipelago_aio_rw(BlockDriverState *bs,
return &aio_cb->common;
err_exit:
error_report("qemu_archipelago_aio_rw(): I/O Error");
error_report("qemu_archipelago_aio_rw(): I/O Error\n");
qemu_aio_unref(aio_cb);
return NULL;
}


@@ -11,20 +11,20 @@
*
*/
#include "qemu/osdep.h"
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include "trace.h"
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#include "qemu/cutils.h"
#include "sysemu/block-backend.h"
#include "qemu/bitmap.h"
#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)
#define BACKUP_CLUSTER_BITS 16
#define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)
#define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CowRequest {
@@ -36,27 +36,17 @@ typedef struct CowRequest {
typedef struct BackupBlockJob {
BlockJob common;
BlockBackend *target;
/* bitmap for sync=incremental */
BdrvDirtyBitmap *sync_bitmap;
BlockDriverState *target;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
BlockdevOnError on_target_error;
CoRwlock flush_rwlock;
uint64_t sectors_read;
unsigned long *done_bitmap;
int64_t cluster_size;
NotifierWithReturn before_write;
HBitmap *bitmap;
QLIST_HEAD(, CowRequest) inflight_reqs;
} BackupBlockJob;
/* Size of a cluster in sectors, instead of bytes. */
static inline int64_t cluster_size_sectors(BackupBlockJob *job)
{
return job->cluster_size / BDRV_SECTOR_SIZE;
}
/* See if in-flight requests overlap and wait for them to complete */
static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
int64_t start,
@@ -94,25 +84,23 @@ static void cow_request_end(CowRequest *req)
qemu_co_queue_restart_all(&req->wait_queue);
}
static int coroutine_fn backup_do_cow(BackupBlockJob *job,
static int coroutine_fn backup_do_cow(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
bool *error_is_read,
bool is_write_notifier)
bool *error_is_read)
{
BlockBackend *blk = job->common.blk;
BackupBlockJob *job = (BackupBlockJob *)bs->job;
CowRequest cow_request;
struct iovec iov;
QEMUIOVector bounce_qiov;
void *bounce_buffer = NULL;
int ret = 0;
int64_t sectors_per_cluster = cluster_size_sectors(job);
int64_t start, end;
int n;
qemu_co_rwlock_rdlock(&job->flush_rwlock);
start = sector_num / sectors_per_cluster;
end = DIV_ROUND_UP(sector_num + nb_sectors, sectors_per_cluster);
start = sector_num / BACKUP_SECTORS_PER_CLUSTER;
end = DIV_ROUND_UP(sector_num + nb_sectors, BACKUP_SECTORS_PER_CLUSTER);
trace_backup_do_cow_enter(job, start, sector_num, nb_sectors);
@@ -120,27 +108,26 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
cow_request_begin(&cow_request, job, start, end);
for (; start < end; start++) {
if (test_bit(start, job->done_bitmap)) {
if (hbitmap_get(job->bitmap, start)) {
trace_backup_do_cow_skip(job, start);
continue; /* already copied */
}
trace_backup_do_cow_process(job, start);
n = MIN(sectors_per_cluster,
n = MIN(BACKUP_SECTORS_PER_CLUSTER,
job->common.len / BDRV_SECTOR_SIZE -
start * sectors_per_cluster);
start * BACKUP_SECTORS_PER_CLUSTER);
if (!bounce_buffer) {
bounce_buffer = blk_blockalign(blk, job->cluster_size);
bounce_buffer = qemu_blockalign(bs, BACKUP_CLUSTER_SIZE);
}
iov.iov_base = bounce_buffer;
iov.iov_len = n * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&bounce_qiov, &iov, 1);
ret = blk_co_preadv(blk, start * job->cluster_size,
bounce_qiov.size, &bounce_qiov,
is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0);
ret = bdrv_co_readv(bs, start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
if (ret < 0) {
trace_backup_do_cow_read_fail(job, start, ret);
if (error_is_read) {
@@ -150,11 +137,13 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
}
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = blk_co_pwrite_zeroes(job->target, start * job->cluster_size,
bounce_qiov.size, BDRV_REQ_MAY_UNMAP);
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
} else {
ret = blk_co_pwritev(job->target, start * job->cluster_size,
bounce_qiov.size, &bounce_qiov, 0);
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
}
if (ret < 0) {
trace_backup_do_cow_write_fail(job, start, ret);
@@ -164,7 +153,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
goto out;
}
set_bit(start, job->done_bitmap);
hbitmap_set(job->bitmap, start, 1);
/* Publish progress, guest I/O counts as progress too. Note that the
* offset field is an opaque progress value, it is not a disk offset.
@@ -191,16 +180,14 @@ static int coroutine_fn backup_before_write_notify(
NotifierWithReturn *notifier,
void *opaque)
{
BackupBlockJob *job = container_of(notifier, BackupBlockJob, before_write);
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert(req->bs == blk_bs(job->common.blk));
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(job, sector_num, nb_sectors, NULL, true);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
@@ -208,61 +195,35 @@ static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void backup_cleanup_sync_bitmap(BackupBlockJob *job, int ret)
{
BdrvDirtyBitmap *bm;
BlockDriverState *bs = blk_bs(job->common.blk);
if (ret < 0 || block_job_is_cancelled(&job->common)) {
/* Merge the successor back into the parent, delete nothing. */
bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
assert(bm);
} else {
/* Everything is fine, delete this bitmap and install the backup. */
bm = bdrv_dirty_bitmap_abdicate(bs, job->sync_bitmap, NULL);
assert(bm);
}
}
static void backup_commit(BlockJob *job)
static void backup_iostatus_reset(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (s->sync_bitmap) {
backup_cleanup_sync_bitmap(s, 0);
}
}
static void backup_abort(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (s->sync_bitmap) {
backup_cleanup_sync_bitmap(s, -1);
}
bdrv_iostatus_reset(s->target);
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.commit = backup_commit,
.abort = backup_abort,
.iostatus_reset = backup_iostatus_reset,
};
static BlockErrorAction backup_error_action(BackupBlockJob *job,
bool read, int error)
{
if (read) {
return block_job_error_action(&job->common, job->on_source_error,
true, error);
return block_job_error_action(&job->common, job->common.bs,
job->on_source_error, true, error);
} else {
return block_job_error_action(&job->common, job->on_target_error,
false, error);
return block_job_error_action(&job->common, job->target,
job->on_target_error, false, error);
}
}
@@ -275,118 +236,39 @@ static void backup_complete(BlockJob *job, void *opaque)
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
BackupCompleteData *data = opaque;
blk_unref(s->target);
bdrv_unref(s->target);
block_job_completed(job, data->ret);
g_free(data);
}
static bool coroutine_fn yield_and_check(BackupBlockJob *job)
{
if (block_job_is_cancelled(&job->common)) {
return true;
}
/* we need to yield so that bdrv_drain_all() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(&job->limit,
job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
return true;
}
return false;
}
static int coroutine_fn backup_run_incremental(BackupBlockJob *job)
{
bool error_is_read;
int ret = 0;
int clusters_per_iter;
uint32_t granularity;
int64_t sector;
int64_t cluster;
int64_t end;
int64_t last_cluster = -1;
int64_t sectors_per_cluster = cluster_size_sectors(job);
HBitmapIter hbi;
granularity = bdrv_dirty_bitmap_granularity(job->sync_bitmap);
clusters_per_iter = MAX((granularity / job->cluster_size), 1);
bdrv_dirty_iter_init(job->sync_bitmap, &hbi);
/* Find the next dirty sector(s) */
while ((sector = hbitmap_iter_next(&hbi)) != -1) {
cluster = sector / sectors_per_cluster;
/* Fake progress updates for any clusters we skipped */
if (cluster != last_cluster + 1) {
job->common.offset += ((cluster - last_cluster - 1) *
job->cluster_size);
}
for (end = cluster + clusters_per_iter; cluster < end; cluster++) {
do {
if (yield_and_check(job)) {
return ret;
}
ret = backup_do_cow(job, cluster * sectors_per_cluster,
sectors_per_cluster, &error_is_read,
false);
if ((ret < 0) &&
backup_error_action(job, error_is_read, -ret) ==
BLOCK_ERROR_ACTION_REPORT) {
return ret;
}
} while (ret < 0);
}
/* If the bitmap granularity is smaller than the backup granularity,
* we need to advance the iterator pointer to the next cluster. */
if (granularity < job->cluster_size) {
bdrv_set_dirty_iter(&hbi, cluster * sectors_per_cluster);
}
last_cluster = cluster - 1;
}
/* Play some final catchup with the progress meter */
end = DIV_ROUND_UP(job->common.len, job->cluster_size);
if (last_cluster + 1 < end) {
job->common.offset += ((end - last_cluster - 1) * job->cluster_size);
}
return ret;
}
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
BackupCompleteData *data;
BlockDriverState *bs = blk_bs(job->common.blk);
BlockBackend *target = job->target;
BlockDriverState *bs = job->common.bs;
BlockDriverState *target = job->target;
BlockdevOnError on_target_error = job->on_target_error;
NotifierWithReturn before_write = {
.notify = backup_before_write_notify,
};
int64_t start, end;
int64_t sectors_per_cluster = cluster_size_sectors(job);
int ret = 0;
QLIST_INIT(&job->inflight_reqs);
qemu_co_rwlock_init(&job->flush_rwlock);
start = 0;
end = DIV_ROUND_UP(job->common.len, job->cluster_size);
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->done_bitmap = bitmap_new(end);
job->bitmap = hbitmap_alloc(end, 0);
job->before_write.notify = backup_before_write_notify;
bdrv_add_before_write_notifier(bs, &job->before_write);
bdrv_set_enable_write_cache(target, true);
bdrv_set_on_error(target, on_target_error, on_target_error);
bdrv_iostatus_enable(target);
bdrv_add_before_write_notifier(bs, &before_write);
if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
while (!block_job_is_cancelled(&job->common)) {
@@ -396,13 +278,28 @@ static void coroutine_fn backup_run(void *opaque)
qemu_coroutine_yield();
job->common.busy = true;
}
} else if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
ret = backup_run_incremental(job);
} else {
/* Both FULL and TOP SYNC_MODE's require copying.. */
for (; start < end; start++) {
bool error_is_read;
if (yield_and_check(job)) {
if (block_job_is_cancelled(&job->common)) {
break;
}
/* we need to yield so that qemu_aio_flush() returns.
* (without, VM does not reboot)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
@@ -413,7 +310,7 @@ static void coroutine_fn backup_run(void *opaque)
/* Check to see if these blocks are already in the
* backing file. */
for (i = 0; i < sectors_per_cluster;) {
for (i = 0; i < BACKUP_SECTORS_PER_CLUSTER;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are all in the same state.
@@ -422,8 +319,8 @@ static void coroutine_fn backup_run(void *opaque)
* needed but at some point that is always the case. */
alloced =
bdrv_is_allocated(bs,
start * sectors_per_cluster + i,
sectors_per_cluster - i, &n);
start * BACKUP_SECTORS_PER_CLUSTER + i,
BACKUP_SECTORS_PER_CLUSTER - i, &n);
i += n;
if (alloced == 1 || n == 0) {
@@ -438,8 +335,8 @@ static void coroutine_fn backup_run(void *opaque)
}
}
/* FULL sync mode we copy the whole drive. */
ret = backup_do_cow(job, start * sectors_per_cluster,
sectors_per_cluster, &error_is_read, false);
ret = backup_do_cow(bs, start * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
@@ -454,14 +351,15 @@ static void coroutine_fn backup_run(void *opaque)
}
}
notifier_with_return_remove(&job->before_write);
notifier_with_return_remove(&before_write);
/* wait until pending backup_do_cow() calls have completed */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
g_free(job->done_bitmap);
bdrv_op_unblock_all(blk_bs(target), job->common.blocker);
hbitmap_free(job->bitmap);
bdrv_iostatus_disable(target);
data = g_malloc(sizeof(*data));
data->ret = ret;
@@ -470,62 +368,21 @@ static void coroutine_fn backup_run(void *opaque)
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BdrvDirtyBitmap *sync_bitmap,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockCompletionFunc *cb, void *opaque,
BlockJobTxn *txn, Error **errp)
Error **errp)
{
int64_t len;
BlockDriverInfo bdi;
BackupBlockJob *job = NULL;
int ret;
assert(bs);
assert(target);
assert(cb);
if (bs == target) {
error_setg(errp, "Source and target cannot be the same");
return;
}
if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs));
return;
}
if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target));
return;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
return;
}
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
return;
}
if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
if (!sync_bitmap) {
error_setg(errp, "must provide a valid bitmap name for "
"\"incremental\" sync mode");
return;
}
/* Create a new bitmap, and freeze/disable this one. */
if (bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp) < 0) {
return;
}
} else if (sync_bitmap) {
error_setg(errp,
"a sync_bitmap was provided to backup_run, "
"but received an incompatible sync_mode (%s)",
MirrorSyncMode_lookup[sync_mode]);
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
@@ -533,54 +390,20 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
goto error;
return;
}
job = block_job_create(&backup_job_driver, bs, speed, cb, opaque, errp);
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
goto error;
return;
}
job->target = blk_new();
blk_insert_bs(job->target, target);
job->on_source_error = on_source_error;
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
sync_bitmap : NULL;
/* If there is no backing file on the target, we cannot rely on COW if our
* backup cluster size is smaller than the target cluster size. Even for
* targets with a backing file, try to avoid COW if possible. */
ret = bdrv_get_info(target, &bdi);
if (ret < 0 && !target->backing) {
error_setg_errno(errp, -ret,
"Couldn't determine the cluster size of the target image, "
"which has no backing file");
error_append_hint(errp,
"Aborting, since this may create an unusable destination image\n");
goto error;
} else if (ret < 0 && target->backing) {
/* Not fatal; just trudge on ahead. */
job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
} else {
job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
}
bdrv_op_block_all(target, job->common.blocker);
job->common.len = len;
job->common.co = qemu_coroutine_create(backup_run);
block_job_txn_add_job(txn, &job->common);
qemu_coroutine_enter(job->common.co, job);
return;
error:
if (sync_bitmap) {
bdrv_reclaim_dirty_bitmap(bs, sync_bitmap, NULL);
}
if (job) {
blk_unref(job->target);
block_job_unref(&job->common);
}
}
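
The heart of the backup diff, on both sides, is backup_do_cow(): a guest write (or the bulk loop) names a sector range, the range is widened to whole clusters, clusters already copied are skipped via a "done" bitmap, and the remaining clusters are read from the source and written to the target before the new data is allowed through. A standalone in-memory model of that copy-before-write logic; the cluster size, the bitmap and the two "disks" are plain arrays here:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE         512
#define SECTORS_PER_CLUSTER 4          /* tiny cluster for the demo */
#define TOTAL_SECTORS       32
#define TOTAL_CLUSTERS      (TOTAL_SECTORS / SECTORS_PER_CLUSTER)

static unsigned char source[TOTAL_SECTORS][SECTOR_SIZE];
static unsigned char target[TOTAL_SECTORS][SECTOR_SIZE];
static bool done[TOTAL_CLUSTERS];      /* cluster already copied? */

/* Copy-before-write: make sure every cluster touched by [sector, sector+n)
 * has its *old* contents in the target before the new data lands. */
static void backup_do_cow(long sector, int nb_sectors)
{
    long start = sector / SECTORS_PER_CLUSTER;
    long end = (sector + nb_sectors + SECTORS_PER_CLUSTER - 1)
               / SECTORS_PER_CLUSTER;

    for (long c = start; c < end; c++) {
        if (done[c]) {
            continue;                  /* already backed up, skip */
        }
        long first = c * SECTORS_PER_CLUSTER;
        memcpy(target[first], source[first],
               SECTORS_PER_CLUSTER * SECTOR_SIZE);
        done[c] = true;
    }
}

static void guest_write(long sector, const unsigned char *buf)
{
    backup_do_cow(sector, 1);          /* the notifier fires before the write */
    memcpy(source[sector], buf, SECTOR_SIZE);
}

int main(void)
{
    memset(source, 0xAA, sizeof(source));     /* "old" disk contents */
    unsigned char new_data[SECTOR_SIZE];
    memset(new_data, 0x55, sizeof(new_data));

    guest_write(5, new_data);                 /* dirties cluster 1 */
    printf("cluster 1 copied: %s, target sector 5 holds old byte 0x%02x\n",
           done[1] ? "yes" : "no", target[5][0]);
    return 0;
}
```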


@@ -22,9 +22,7 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/cutils.h"
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "block/block_int.h"
#include "qemu/module.h"
@@ -32,13 +30,12 @@
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
#include "sysemu/qtest.h"
typedef struct BDRVBlkdebugState {
int state;
int new_state;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG__MAX];
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
QLIST_HEAD(, BlkdebugSuspendedReq) suspended_reqs;
} BDRVBlkdebugState;
@@ -66,7 +63,7 @@ enum {
};
typedef struct BlkdebugRule {
BlkdebugEvent event;
BlkDebugEvent event;
int action;
int state;
union {
@@ -145,12 +142,69 @@ static QemuOptsList *config_groups[] = {
NULL
};
static int get_event_by_name(const char *name, BlkdebugEvent *event)
static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_L1_UPDATE] = "l1_update",
[BLKDBG_L1_GROW_ALLOC_TABLE] = "l1_grow.alloc_table",
[BLKDBG_L1_GROW_WRITE_TABLE] = "l1_grow.write_table",
[BLKDBG_L1_GROW_ACTIVATE_TABLE] = "l1_grow.activate_table",
[BLKDBG_L2_LOAD] = "l2_load",
[BLKDBG_L2_UPDATE] = "l2_update",
[BLKDBG_L2_UPDATE_COMPRESSED] = "l2_update_compressed",
[BLKDBG_L2_ALLOC_COW_READ] = "l2_alloc.cow_read",
[BLKDBG_L2_ALLOC_WRITE] = "l2_alloc.write",
[BLKDBG_READ_AIO] = "read_aio",
[BLKDBG_READ_BACKING_AIO] = "read_backing_aio",
[BLKDBG_READ_COMPRESSED] = "read_compressed",
[BLKDBG_WRITE_AIO] = "write_aio",
[BLKDBG_WRITE_COMPRESSED] = "write_compressed",
[BLKDBG_VMSTATE_LOAD] = "vmstate_load",
[BLKDBG_VMSTATE_SAVE] = "vmstate_save",
[BLKDBG_COW_READ] = "cow_read",
[BLKDBG_COW_WRITE] = "cow_write",
[BLKDBG_REFTABLE_LOAD] = "reftable_load",
[BLKDBG_REFTABLE_GROW] = "reftable_grow",
[BLKDBG_REFTABLE_UPDATE] = "reftable_update",
[BLKDBG_REFBLOCK_LOAD] = "refblock_load",
[BLKDBG_REFBLOCK_UPDATE] = "refblock_update",
[BLKDBG_REFBLOCK_UPDATE_PART] = "refblock_update_part",
[BLKDBG_REFBLOCK_ALLOC] = "refblock_alloc",
[BLKDBG_REFBLOCK_ALLOC_HOOKUP] = "refblock_alloc.hookup",
[BLKDBG_REFBLOCK_ALLOC_WRITE] = "refblock_alloc.write",
[BLKDBG_REFBLOCK_ALLOC_WRITE_BLOCKS] = "refblock_alloc.write_blocks",
[BLKDBG_REFBLOCK_ALLOC_WRITE_TABLE] = "refblock_alloc.write_table",
[BLKDBG_REFBLOCK_ALLOC_SWITCH_TABLE] = "refblock_alloc.switch_table",
[BLKDBG_CLUSTER_ALLOC] = "cluster_alloc",
[BLKDBG_CLUSTER_ALLOC_BYTES] = "cluster_alloc_bytes",
[BLKDBG_CLUSTER_FREE] = "cluster_free",
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
[BLKDBG_EMPTY_IMAGE_PREPARE] = "empty_image_prepare",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
{
int i;
for (i = 0; i < BLKDBG__MAX; i++) {
if (!strcmp(BlkdebugEvent_lookup[i], name)) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
if (!strcmp(event_names[i], name)) {
*event = i;
return 0;
}
@@ -162,23 +216,24 @@ static int get_event_by_name(const char *name, BlkdebugEvent *event)
struct add_rule_data {
BDRVBlkdebugState *s;
int action;
Error **errp;
};
static int add_rule(void *opaque, QemuOpts *opts, Error **errp)
static int add_rule(QemuOpts *opts, void *opaque)
{
struct add_rule_data *d = opaque;
BDRVBlkdebugState *s = d->s;
const char* event_name;
BlkdebugEvent event;
BlkDebugEvent event;
struct BlkdebugRule *rule;
/* Find the right event for the rule */
event_name = qemu_opt_get(opts, "event");
if (!event_name) {
error_setg(errp, "Missing event name for rule");
error_setg(d->errp, "Missing event name for rule");
return -1;
} else if (get_event_by_name(event_name, &event) < 0) {
error_setg(errp, "Invalid event name \"%s\"", event_name);
error_setg(d->errp, "Invalid event name \"%s\"", event_name);
return -1;
}
@@ -264,7 +319,8 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
d.s = s;
d.action = ACTION_INJECT_ERROR;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, &local_err);
d.errp = &local_err;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, 1);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
@@ -272,7 +328,7 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
}
d.action = ACTION_SET_STATE;
qemu_opts_foreach(&set_state_opts, add_rule, &d, &local_err);
qemu_opts_foreach(&set_state_opts, add_rule, &d, 1);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
@@ -372,11 +428,11 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
/* Set initial state */
s->state = 1;
/* Open the image file */
bs->file = bdrv_open_child(qemu_opt_get(opts, "x-image"), options, "image",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
}
@@ -395,7 +451,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
goto out;
fail_unref:
bdrv_unref_child(bs, bs->file);
bdrv_unref(bs->file);
out:
qemu_opts_del(opts);
return ret;
@@ -416,14 +472,12 @@ static BlockAIOCB *inject_error(BlockDriverState *bs,
int error = rule->options.inject.error;
struct BlkdebugAIOCB *acb;
QEMUBH *bh;
bool immediately = rule->options.inject.immediately;
if (rule->options.inject.once) {
QSIMPLEQ_REMOVE(&s->active_rules, rule, BlkdebugRule, active_next);
remove_rule(rule);
QSIMPLEQ_INIT(&s->active_rules);
}
if (immediately) {
if (rule->options.inject.immediately) {
return NULL;
}
@@ -456,8 +510,7 @@ static BlockAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_readv(bs->file->bs, sector_num, qiov, nb_sectors,
cb, opaque);
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static BlockAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
@@ -479,8 +532,7 @@ static BlockAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_writev(bs->file->bs, sector_num, qiov, nb_sectors,
cb, opaque);
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static BlockAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
@@ -499,7 +551,7 @@ static BlockAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_flush(bs->file->bs, cb, opaque);
return bdrv_aio_flush(bs->file, cb, opaque);
}
@@ -509,7 +561,7 @@ static void blkdebug_close(BlockDriverState *bs)
BlkdebugRule *rule, *next;
int i;
for (i = 0; i < BLKDBG__MAX; i++) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
remove_rule(rule);
}
@@ -529,13 +581,9 @@ static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
remove_rule(rule);
QLIST_INSERT_HEAD(&s->suspended_reqs, &r, next);
if (!qtest_enabled()) {
printf("blkdebug: Suspended request '%s'\n", r.tag);
}
printf("blkdebug: Suspended request '%s'\n", r.tag);
qemu_coroutine_yield();
if (!qtest_enabled()) {
printf("blkdebug: Resuming request '%s'\n", r.tag);
}
printf("blkdebug: Resuming request '%s'\n", r.tag);
QLIST_REMOVE(&r, next);
g_free(r.tag);
@@ -572,13 +620,13 @@ static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
return injected;
}
static void blkdebug_debug_event(BlockDriverState *bs, BlkdebugEvent event)
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule, *next;
bool injected;
assert((int)event >= 0 && event < BLKDBG__MAX);
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
s->new_state = s->state;
@@ -593,7 +641,7 @@ static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
BlkdebugEvent blkdebug_event;
BlkDebugEvent blkdebug_event;
if (get_event_by_name(event, &blkdebug_event) < 0) {
return -ENOENT;
@@ -635,7 +683,7 @@ static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG__MAX; i++) {
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
@@ -668,62 +716,99 @@ static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
return bdrv_getlength(bs->file);
}
static int blkdebug_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file->bs, offset);
}
static void blkdebug_refresh_filename(BlockDriverState *bs, QDict *options)
static void blkdebug_refresh_filename(BlockDriverState *bs)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
QDict *opts;
const QDictEntry *e;
bool force_json = false;
QList *inject_error_list = NULL, *set_state_list = NULL;
QList *suspend_list = NULL;
int event;
for (e = qdict_first(options); e; e = qdict_next(options, e)) {
if (strcmp(qdict_entry_key(e), "config") &&
strcmp(qdict_entry_key(e), "x-image"))
{
force_json = true;
break;
}
}
if (force_json && !bs->file->bs->full_open_options) {
if (!bs->file->full_open_options) {
/* The config file cannot be recreated, so creating a plain filename
* is impossible */
return;
}
if (!force_json && bs->file->bs->exact_filename[0]) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkdebug:%s:%s",
qdict_get_try_str(options, "config") ?: "",
bs->file->bs->exact_filename);
}
opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("blkdebug")));
QINCREF(bs->file->bs->full_open_options);
qdict_put_obj(opts, "image", QOBJECT(bs->file->bs->full_open_options));
QINCREF(bs->file->full_open_options);
qdict_put_obj(opts, "image", QOBJECT(bs->file->full_open_options));
for (e = qdict_first(options); e; e = qdict_next(options, e)) {
if (strcmp(qdict_entry_key(e), "x-image")) {
qobject_incref(qdict_entry_value(e));
qdict_put_obj(opts, qdict_entry_key(e), qdict_entry_value(e));
for (event = 0; event < BLKDBG_EVENT_MAX; event++) {
QLIST_FOREACH(rule, &s->rules[event], next) {
if (rule->action == ACTION_INJECT_ERROR) {
QDict *inject_error = qdict_new();
qdict_put_obj(inject_error, "event", QOBJECT(qstring_from_str(
BlkdebugEvent_lookup[rule->event])));
qdict_put_obj(inject_error, "state",
QOBJECT(qint_from_int(rule->state)));
qdict_put_obj(inject_error, "errno", QOBJECT(qint_from_int(
rule->options.inject.error)));
qdict_put_obj(inject_error, "sector", QOBJECT(qint_from_int(
rule->options.inject.sector)));
qdict_put_obj(inject_error, "once", QOBJECT(qbool_from_int(
rule->options.inject.once)));
qdict_put_obj(inject_error, "immediately",
QOBJECT(qbool_from_int(
rule->options.inject.immediately)));
if (!inject_error_list) {
inject_error_list = qlist_new();
}
qlist_append_obj(inject_error_list, QOBJECT(inject_error));
} else if (rule->action == ACTION_SET_STATE) {
QDict *set_state = qdict_new();
qdict_put_obj(set_state, "event", QOBJECT(qstring_from_str(
BlkdebugEvent_lookup[rule->event])));
qdict_put_obj(set_state, "state",
QOBJECT(qint_from_int(rule->state)));
qdict_put_obj(set_state, "new_state", QOBJECT(qint_from_int(
rule->options.set_state.new_state)));
if (!set_state_list) {
set_state_list = qlist_new();
}
qlist_append_obj(set_state_list, QOBJECT(set_state));
} else if (rule->action == ACTION_SUSPEND) {
QDict *suspend = qdict_new();
qdict_put_obj(suspend, "event", QOBJECT(qstring_from_str(
BlkdebugEvent_lookup[rule->event])));
qdict_put_obj(suspend, "state",
QOBJECT(qint_from_int(rule->state)));
qdict_put_obj(suspend, "tag", QOBJECT(qstring_from_str(
rule->options.suspend.tag)));
if (!suspend_list) {
suspend_list = qlist_new();
}
qlist_append_obj(suspend_list, QOBJECT(suspend));
}
}
}
bs->full_open_options = opts;
}
if (inject_error_list) {
qdict_put_obj(opts, "inject-error", QOBJECT(inject_error_list));
}
if (set_state_list) {
qdict_put_obj(opts, "set-state", QOBJECT(set_state_list));
}
if (suspend_list) {
qdict_put_obj(opts, "suspend", QOBJECT(suspend_list));
}
static int blkdebug_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
bs->full_open_options = opts;
}
static BlockDriver bdrv_blkdebug = {
@@ -734,9 +819,7 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_parse_filename = blkdebug_parse_filename,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_reopen_prepare = blkdebug_reopen_prepare,
.bdrv_getlength = blkdebug_getlength,
.bdrv_truncate = blkdebug_truncate,
.bdrv_refresh_filename = blkdebug_refresh_filename,
.bdrv_aio_readv = blkdebug_aio_readv,
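get_event_by_name() above resolves a rule's "event" option by scanning a name table indexed by the event enum. A self-contained sketch of the same lookup pattern (the three sample event names and the enum are invented for illustration):

/* Standalone sketch of the name -> enum lookup used by get_event_by_name(). */
#include <stdio.h>
#include <string.h>

enum { EV_L1_UPDATE, EV_READ_AIO, EV_WRITE_AIO, EV_MAX };

static const char *event_names[EV_MAX] = {
    [EV_L1_UPDATE] = "l1_update",
    [EV_READ_AIO]  = "read_aio",
    [EV_WRITE_AIO] = "write_aio",
};

static int get_event_by_name(const char *name, int *event)
{
    int i;
    for (i = 0; i < EV_MAX; i++) {
        if (!strcmp(event_names[i], name)) {
            *event = i;
            return 0;
        }
    }
    return -1;                     /* unknown event name */
}

int main(void)
{
    int ev;
    if (get_event_by_name("read_aio", &ev) == 0) {
        printf("read_aio -> %d\n", ev);   /* prints 1 */
    }
    return get_event_by_name("no_such_event", &ev) == 0;
}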

View File

@@ -1,160 +0,0 @@
/*
* Block protocol for record/replay
*
* Copyright (c) 2010-2016 Institute for System Programming
* of the Russian Academy of Sciences.
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "sysemu/replay.h"
#include "qapi/error.h"
typedef struct Request {
Coroutine *co;
QEMUBH *bh;
} Request;
/* Next request id.
This counter is global, because requests from different
block devices should not get overlapping ids. */
static uint64_t request_id;
static int blkreplay_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
Error *local_err = NULL;
int ret;
/* Open the image file */
bs->file = bdrv_open_child(NULL, options, "image",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
error_propagate(errp, local_err);
goto fail;
}
ret = 0;
fail:
if (ret < 0) {
bdrv_unref_child(bs, bs->file);
}
return ret;
}
static void blkreplay_close(BlockDriverState *bs)
{
}
static int64_t blkreplay_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
}
/* This BH is used to synchronize the return from coroutines.
   It resumes the yielded coroutine, which then finishes its execution.
   The BH is scheduled relative to a replay checkpoint, so record and
   replay always finish coroutines deterministically.
*/
static void blkreplay_bh_cb(void *opaque)
{
Request *req = opaque;
qemu_coroutine_enter(req->co, NULL);
qemu_bh_delete(req->bh);
g_free(req);
}
static void block_request_create(uint64_t reqid, BlockDriverState *bs,
Coroutine *co)
{
Request *req = g_new(Request, 1);
*req = (Request) {
.co = co,
.bh = aio_bh_new(bdrv_get_aio_context(bs), blkreplay_bh_cb, req),
};
replay_block_event(req->bh, reqid);
}
static int coroutine_fn blkreplay_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_writev(bs->file->bs, sector_num, nb_sectors, qiov);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_write_zeroes(bs->file->bs, sector_num, nb_sectors, flags);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_discard(bs->file->bs, sector_num, nb_sectors);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static int coroutine_fn blkreplay_co_flush(BlockDriverState *bs)
{
uint64_t reqid = request_id++;
int ret = bdrv_co_flush(bs->file->bs);
block_request_create(reqid, bs, qemu_coroutine_self());
qemu_coroutine_yield();
return ret;
}
static BlockDriver bdrv_blkreplay = {
.format_name = "blkreplay",
.protocol_name = "blkreplay",
.instance_size = 0,
.bdrv_file_open = blkreplay_open,
.bdrv_close = blkreplay_close,
.bdrv_getlength = blkreplay_getlength,
.bdrv_co_readv = blkreplay_co_readv,
.bdrv_co_writev = blkreplay_co_writev,
.bdrv_co_write_zeroes = blkreplay_co_write_zeroes,
.bdrv_co_discard = blkreplay_co_discard,
.bdrv_co_flush = blkreplay_co_flush,
};
static void bdrv_blkreplay_init(void)
{
bdrv_register(&bdrv_blkreplay);
}
block_init(bdrv_blkreplay_init);
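blkreplay above tags every request with a global, monotonically increasing id and yields until the replay layer fires the matching event, which keeps completion ordering reproducible between record and replay runs. The toy program below is not the replay machinery itself; it only illustrates delivering completions in a fixed id order even when the backend finishes them out of order (all numbers are made up):

/* Toy model: requests finish out of order, but are delivered in id order. */
#include <stdio.h>

#define NUM_REQS 5

int main(void)
{
    /* Order in which the simulated backend happens to finish the requests. */
    const int completion_order[NUM_REQS] = { 2, 0, 4, 1, 3 };
    int done[NUM_REQS] = { 0 };
    int next_to_deliver = 0;
    int i;

    for (i = 0; i < NUM_REQS; i++) {
        done[completion_order[i]] = 1;      /* backend finished this id */
        /* Deliver completions strictly in request-id order. */
        while (next_to_deliver < NUM_REQS && done[next_to_deliver]) {
            printf("deliver completion for request %d\n", next_to_deliver);
            next_to_deliver++;
        }
    }
    return 0;
}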

View File

@@ -7,16 +7,14 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include <stdarg.h>
#include "qemu/sockets.h" /* for EINPROGRESS on Windows */
#include "block/block_int.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qstring.h"
#include "qemu/cutils.h"
typedef struct {
BdrvChild *test_file;
BlockDriverState *test_file;
} BDRVBlkverifyState;
typedef struct BlkverifyAIOCB BlkverifyAIOCB;
@@ -125,29 +123,26 @@ static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Open the raw file */
bs->file = bdrv_open_child(qemu_opt_get(opts, "x-raw"), options, "raw",
bs, &child_file, false, &local_err);
if (local_err) {
ret = -EINVAL;
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
}
/* Open the test file */
s->test_file = bdrv_open_child(qemu_opt_get(opts, "x-image"), options,
"test", bs, &child_format, false,
&local_err);
if (local_err) {
ret = -EINVAL;
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", flags, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
s->test_file = NULL;
goto fail;
}
ret = 0;
fail:
if (ret < 0) {
bdrv_unref_child(bs, bs->file);
}
qemu_opts_del(opts);
return ret;
}
@@ -156,7 +151,7 @@ static void blkverify_close(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_unref_child(bs, s->test_file);
bdrv_unref(s->test_file);
s->test_file = NULL;
}
@@ -164,7 +159,7 @@ static int64_t blkverify_getlength(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
return bdrv_getlength(s->test_file->bs);
return bdrv_getlength(s->test_file);
}
static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
@@ -243,13 +238,13 @@ static BlockAIOCB *blkverify_aio_readv(BlockDriverState *bs,
nb_sectors, cb, opaque);
acb->verify = blkverify_verify_readv;
acb->buf = qemu_blockalign(bs->file->bs, qiov->size);
acb->buf = qemu_blockalign(bs->file, qiov->size);
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file->bs, sector_num, qiov, nb_sectors,
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_readv(bs->file->bs, sector_num, &acb->raw_qiov, nb_sectors,
bdrv_aio_readv(bs->file, sector_num, &acb->raw_qiov, nb_sectors,
blkverify_aio_cb, acb);
return &acb->common;
}
@@ -262,9 +257,9 @@ static BlockAIOCB *blkverify_aio_writev(BlockDriverState *bs,
BlkverifyAIOCB *acb = blkverify_aio_get(bs, true, sector_num, qiov,
nb_sectors, cb, opaque);
bdrv_aio_writev(s->test_file->bs, sector_num, qiov, nb_sectors,
bdrv_aio_writev(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_writev(bs->file->bs, sector_num, qiov, nb_sectors,
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
return &acb->common;
}
@@ -276,7 +271,7 @@ static BlockAIOCB *blkverify_aio_flush(BlockDriverState *bs,
BDRVBlkverifyState *s = bs->opaque;
/* Only flush test file, the raw file is not important */
return bdrv_aio_flush(s->test_file->bs, cb, opaque);
return bdrv_aio_flush(s->test_file, cb, opaque);
}
static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
@@ -284,44 +279,54 @@ static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
{
BDRVBlkverifyState *s = bs->opaque;
bool perm = bdrv_recurse_is_first_non_filter(bs->file->bs, candidate);
bool perm = bdrv_recurse_is_first_non_filter(bs->file, candidate);
if (perm) {
return true;
}
return bdrv_recurse_is_first_non_filter(s->test_file->bs, candidate);
return bdrv_recurse_is_first_non_filter(s->test_file, candidate);
}
static void blkverify_refresh_filename(BlockDriverState *bs, QDict *options)
/* Propagate AioContext changes to ->test_file */
static void blkverify_detach_aio_context(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
/* bs->file->bs has already been refreshed */
bdrv_refresh_filename(s->test_file->bs);
bdrv_detach_aio_context(s->test_file);
}
if (bs->file->bs->full_open_options
&& s->test_file->bs->full_open_options)
{
static void blkverify_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_attach_aio_context(s->test_file, new_context);
}
static void blkverify_refresh_filename(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
/* bs->file has already been refreshed */
bdrv_refresh_filename(s->test_file);
if (bs->file->full_open_options && s->test_file->full_open_options) {
QDict *opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("blkverify")));
QINCREF(bs->file->bs->full_open_options);
qdict_put_obj(opts, "raw", QOBJECT(bs->file->bs->full_open_options));
QINCREF(s->test_file->bs->full_open_options);
qdict_put_obj(opts, "test",
QOBJECT(s->test_file->bs->full_open_options));
QINCREF(bs->file->full_open_options);
qdict_put_obj(opts, "raw", QOBJECT(bs->file->full_open_options));
QINCREF(s->test_file->full_open_options);
qdict_put_obj(opts, "test", QOBJECT(s->test_file->full_open_options));
bs->full_open_options = opts;
}
if (bs->file->bs->exact_filename[0]
&& s->test_file->bs->exact_filename[0])
{
if (bs->file->exact_filename[0] && s->test_file->exact_filename[0]) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkverify:%s:%s",
bs->file->bs->exact_filename,
s->test_file->bs->exact_filename);
bs->file->exact_filename, s->test_file->exact_filename);
}
}
@@ -340,6 +345,9 @@ static BlockDriver bdrv_blkverify = {
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.bdrv_attach_aio_context = blkverify_attach_aio_context,
.bdrv_detach_aio_context = blkverify_detach_aio_context,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = blkverify_recurse_is_first_non_filter,
};
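blkverify mirrors every read to both the raw file and the test file and compares the results (the comparison itself lives in blkverify_verify_readv, which this hunk does not show). A rough standalone analogue of the idea, comparing two ordinary files block by block; the 512-byte block size is arbitrary:

/* Compare two files in 512-byte blocks and report the first mismatch. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512

int main(int argc, char **argv)
{
    FILE *raw, *test;
    unsigned char a[BLOCK_SIZE], b[BLOCK_SIZE];
    long block = 0;
    size_t na, nb;

    if (argc != 3) {
        fprintf(stderr, "usage: %s RAW_FILE TEST_FILE\n", argv[0]);
        return 1;
    }
    raw = fopen(argv[1], "rb");
    test = fopen(argv[2], "rb");
    if (!raw || !test) {
        perror("fopen");
        return 1;
    }
    for (;;) {
        na = fread(a, 1, BLOCK_SIZE, raw);
        nb = fread(b, 1, BLOCK_SIZE, test);
        if (na != nb || memcmp(a, b, na) != 0) {
            fprintf(stderr, "mismatch in block %ld\n", block);
            return 1;
        }
        if (na == 0) {
            break;                  /* both files ended at the same point */
        }
        block++;
    }
    printf("files are identical\n");
    return 0;
}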

File diff suppressed because it is too large

View File

@@ -22,12 +22,9 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
/**************************************************************/
@@ -105,9 +102,8 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
int ret;
bs->read_only = 1; // no write support yet
bs->request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O supported */
ret = bdrv_pread(bs->file->bs, 0, &bochs, sizeof(bochs));
ret = bdrv_pread(bs->file, 0, &bochs, sizeof(bochs));
if (ret < 0) {
return ret;
}
@@ -141,7 +137,7 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
return -ENOMEM;
}
ret = bdrv_pread(bs->file->bs, le32_to_cpu(bochs.header), s->catalog_bitmap,
ret = bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4);
if (ret < 0) {
goto fail;
@@ -210,7 +206,7 @@ static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
(s->extent_blocks + s->bitmap_blocks));
/* read in bitmap for current extent */
ret = bdrv_pread(bs->file->bs, bitmap_offset + (extent_offset / 8),
ret = bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1);
if (ret < 0) {
return ret;
@@ -223,52 +219,38 @@ static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
return bitmap_offset + (512 * (s->bitmap_blocks + extent_offset));
}
static int coroutine_fn
bochs_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static int bochs_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVBochsState *s = bs->opaque;
uint64_t sector_num = offset >> BDRV_SECTOR_BITS;
int nb_sectors = bytes >> BDRV_SECTOR_BITS;
uint64_t bytes_done = 0;
QEMUIOVector local_qiov;
int ret;
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
qemu_iovec_init(&local_qiov, qiov->niov);
qemu_co_mutex_lock(&s->lock);
while (nb_sectors > 0) {
int64_t block_offset = seek_to_sector(bs, sector_num);
if (block_offset < 0) {
ret = block_offset;
goto fail;
}
qemu_iovec_reset(&local_qiov);
qemu_iovec_concat(&local_qiov, qiov, bytes_done, 512);
if (block_offset > 0) {
ret = bdrv_co_preadv(bs->file->bs, block_offset, 512,
&local_qiov, 0);
return block_offset;
} else if (block_offset > 0) {
ret = bdrv_pread(bs->file, block_offset, buf, 512);
if (ret < 0) {
goto fail;
return ret;
}
} else {
qemu_iovec_memset(&local_qiov, 0, 0, 512);
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
bytes_done += 512;
buf += 512;
}
return 0;
}
ret = 0;
fail:
static coroutine_fn int bochs_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVBochsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = bochs_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
qemu_iovec_destroy(&local_qiov);
return ret;
}
@@ -283,7 +265,7 @@ static BlockDriver bdrv_bochs = {
.instance_size = sizeof(BDRVBochsState),
.bdrv_probe = bochs_probe,
.bdrv_open = bochs_open,
.bdrv_co_preadv = bochs_co_preadv,
.bdrv_read = bochs_co_read,
.bdrv_close = bochs_close,
};

View File

@@ -21,12 +21,9 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
#include <zlib.h>
/* Maximum compressed block size */
@@ -67,10 +64,9 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
int ret;
bs->read_only = 1;
bs->request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O supported */
/* read header */
ret = bdrv_pread(bs->file->bs, 128, &s->block_size, 4);
ret = bdrv_pread(bs->file, 128, &s->block_size, 4);
if (ret < 0) {
return ret;
}
@@ -96,7 +92,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
return -EINVAL;
}
ret = bdrv_pread(bs->file->bs, 128 + 4, &s->n_blocks, 4);
ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
if (ret < 0) {
return ret;
}
@@ -127,7 +123,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
return -ENOMEM;
}
ret = bdrv_pread(bs->file->bs, 128 + 4 + 4, s->offsets, offsets_size);
ret = bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size);
if (ret < 0) {
goto fail;
}
@@ -207,8 +203,8 @@ static inline int cloop_read_block(BlockDriverState *bs, int block_num)
int ret;
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
ret = bdrv_pread(bs->file->bs, s->offsets[block_num],
s->compressed_block, bytes);
ret = bdrv_pread(bs->file, s->offsets[block_num], s->compressed_block,
bytes);
if (ret != bytes) {
return -1;
}
@@ -231,38 +227,33 @@ static inline int cloop_read_block(BlockDriverState *bs, int block_num)
return 0;
}
static int coroutine_fn
cloop_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static int cloop_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVCloopState *s = bs->opaque;
uint64_t sector_num = offset >> BDRV_SECTOR_BITS;
int nb_sectors = bytes >> BDRV_SECTOR_BITS;
int ret, i;
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
qemu_co_mutex_lock(&s->lock);
int i;
for (i = 0; i < nb_sectors; i++) {
void *data;
uint32_t sector_offset_in_block =
((sector_num + i) % s->sectors_per_block),
block_num = (sector_num + i) / s->sectors_per_block;
if (cloop_read_block(bs, block_num) != 0) {
ret = -EIO;
goto fail;
return -1;
}
data = s->uncompressed_block + sector_offset_in_block * 512;
qemu_iovec_from_buf(qiov, i * 512, data, 512);
memcpy(buf + i * 512,
s->uncompressed_block + sector_offset_in_block * 512, 512);
}
return 0;
}
ret = 0;
fail:
static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCloopState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cloop_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
@@ -280,7 +271,7 @@ static BlockDriver bdrv_cloop = {
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_co_preadv = cloop_co_preadv,
.bdrv_read = cloop_co_read,
.bdrv_close = cloop_close,
};
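cloop_read_block() above reads a compressed block from the image; the part that inflates it into s->uncompressed_block is elided by the hunk. A minimal zlib round trip showing the decompression step in isolation (the payload is made up, and cloop itself uses the inflate() stream API rather than uncompress(); build with -lz):

/* zlib round trip: compress a buffer, then decompress it as cloop would. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    const char payload[] = "sector payload, would normally be 512-byte blocks";
    unsigned char compressed[256];
    unsigned char uncompressed[256];
    uLongf clen = sizeof(compressed);
    uLongf ulen = sizeof(uncompressed);

    if (compress(compressed, &clen, (const Bytef *)payload,
                 sizeof(payload)) != Z_OK) {
        return 1;
    }
    /* This is the per-block decompression step. */
    if (uncompress(uncompressed, &ulen, compressed, clen) != Z_OK) {
        return 1;
    }
    printf("decompressed %lu bytes: %s\n", (unsigned long)ulen, uncompressed);
    return 0;
}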

View File

@@ -12,14 +12,10 @@
*
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#include "sysemu/block-backend.h"
enum {
/*
@@ -36,36 +32,28 @@ typedef struct CommitBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *active;
BlockBackend *top;
BlockBackend *base;
BlockDriverState *top;
BlockDriverState *base;
BlockdevOnError on_error;
int base_flags;
int orig_overlay_flags;
char *backing_file_str;
} CommitBlockJob;
static int coroutine_fn commit_populate(BlockBackend *bs, BlockBackend *base,
static int coroutine_fn commit_populate(BlockDriverState *bs,
BlockDriverState *base,
int64_t sector_num, int nb_sectors,
void *buf)
{
int ret = 0;
QEMUIOVector qiov;
struct iovec iov = {
.iov_base = buf,
.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
};
qemu_iovec_init_external(&qiov, &iov, 1);
ret = blk_co_preadv(bs, sector_num * BDRV_SECTOR_SIZE,
qiov.size, &qiov, 0);
if (ret < 0) {
ret = bdrv_read(bs, sector_num, buf, nb_sectors);
if (ret) {
return ret;
}
ret = blk_co_pwritev(base, sector_num * BDRV_SECTOR_SIZE,
qiov.size, &qiov, 0);
if (ret < 0) {
ret = bdrv_write(base, sector_num, buf, nb_sectors);
if (ret) {
return ret;
}
@@ -81,8 +69,8 @@ static void commit_complete(BlockJob *job, void *opaque)
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
CommitCompleteData *data = opaque;
BlockDriverState *active = s->active;
BlockDriverState *top = blk_bs(s->top);
BlockDriverState *base = blk_bs(s->base);
BlockDriverState *top = s->top;
BlockDriverState *base = s->base;
BlockDriverState *overlay_bs;
int ret = data->ret;
@@ -102,8 +90,6 @@ static void commit_complete(BlockJob *job, void *opaque)
bdrv_reopen(overlay_bs, s->orig_overlay_flags, NULL);
}
g_free(s->backing_file_str);
blk_unref(s->top);
blk_unref(s->base);
block_job_completed(&s->common, ret);
g_free(data);
}
@@ -112,6 +98,8 @@ static void coroutine_fn commit_run(void *opaque)
{
CommitBlockJob *s = opaque;
CommitCompleteData *data;
BlockDriverState *top = s->top;
BlockDriverState *base = s->base;
int64_t sector_num, end;
int ret = 0;
int n = 0;
@@ -119,27 +107,27 @@ static void coroutine_fn commit_run(void *opaque)
int bytes_written = 0;
int64_t base_len;
ret = s->common.len = blk_getlength(s->top);
ret = s->common.len = bdrv_getlength(top);
if (s->common.len < 0) {
goto out;
}
ret = base_len = blk_getlength(s->base);
ret = base_len = bdrv_getlength(base);
if (base_len < 0) {
goto out;
}
if (base_len < s->common.len) {
ret = blk_truncate(s->base, s->common.len);
ret = bdrv_truncate(base, s->common.len);
if (ret) {
goto out;
}
}
end = s->common.len >> BDRV_SECTOR_BITS;
buf = blk_blockalign(s->top, COMMIT_BUFFER_SIZE);
buf = qemu_blockalign(top, COMMIT_BUFFER_SIZE);
for (sector_num = 0; sector_num < end; sector_num += n) {
uint64_t delay_ns = 0;
@@ -154,8 +142,7 @@ wait:
break;
}
/* Copy if allocated above the base */
ret = bdrv_is_allocated_above(blk_bs(s->top), blk_bs(s->base),
sector_num,
ret = bdrv_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
copy = (ret == 1);
@@ -167,7 +154,7 @@ wait:
goto wait;
}
}
ret = commit_populate(s->top, s->base, sector_num, n, buf);
ret = commit_populate(top, base, sector_num, n, buf);
bytes_written += n * BDRV_SECTOR_SIZE;
}
if (ret < 0) {
@@ -199,7 +186,7 @@ static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
@@ -223,6 +210,13 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
BlockDriverState *overlay_bs;
Error *local_err = NULL;
if ((on_error == BLOCKDEV_ON_ERROR_STOP ||
on_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, "Invalid parameter combination");
return;
}
assert(top != bs);
if (top == base) {
error_setg(errp, "Invalid files for merge: top and base are the same");
@@ -240,14 +234,14 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
orig_overlay_flags = bdrv_get_flags(overlay_bs);
/* convert base & overlay_bs to r/w, if necessary */
if (!(orig_overlay_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, overlay_bs, NULL,
orig_overlay_flags | BDRV_O_RDWR);
}
if (!(orig_base_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, base, NULL,
reopen_queue = bdrv_reopen_queue(reopen_queue, base,
orig_base_flags | BDRV_O_RDWR);
}
if (!(orig_overlay_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, overlay_bs,
orig_overlay_flags | BDRV_O_RDWR);
}
if (reopen_queue) {
bdrv_reopen_multiple(reopen_queue, &local_err);
if (local_err != NULL) {
@@ -262,12 +256,8 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
return;
}
s->base = blk_new();
blk_insert_bs(s->base, base);
s->top = blk_new();
blk_insert_bs(s->top, top);
s->base = base;
s->top = top;
s->active = bs;
s->base_flags = orig_base_flags;
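commit_populate() above copies one chunk by reading it from the top image and writing it to the base. A plain-file analogue of that read-then-write step using POSIX pread()/pwrite(); the file names, offset, and 4096-byte chunk size are arbitrary stand-ins:

/* Copy one chunk from 'top' to 'base' at the same offset, commit-style. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

static int commit_populate(int top_fd, int base_fd, off_t offset, size_t len)
{
    char *buf = malloc(len);
    ssize_t n;
    int ret = 0;

    if (!buf) {
        return -1;
    }
    n = pread(top_fd, buf, len, offset);                 /* read from the top */
    if (n != (ssize_t)len) {
        ret = -1;
    } else if (pwrite(base_fd, buf, n, offset) != n) {   /* write to the base */
        ret = -1;
    }
    free(buf);
    return ret;
}

int main(int argc, char **argv)
{
    int top, base;

    if (argc != 3) {
        fprintf(stderr, "usage: %s TOP BASE\n", argv[0]);
        return 1;
    }
    top = open(argv[1], O_RDONLY);
    base = open(argv[2], O_WRONLY);
    if (top < 0 || base < 0) {
        perror("open");
        return 1;
    }
    return commit_populate(top, base, 0, 4096) ? 1 : 0;
}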

View File

@@ -1,588 +0,0 @@
/*
* QEMU block full disk encryption
*
* Copyright (c) 2015-2016 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "crypto/block.h"
#include "qapi/opts-visitor.h"
#include "qapi-visit.h"
#include "qapi/error.h"
#define BLOCK_CRYPTO_OPT_LUKS_KEY_SECRET "key-secret"
#define BLOCK_CRYPTO_OPT_LUKS_CIPHER_ALG "cipher-alg"
#define BLOCK_CRYPTO_OPT_LUKS_CIPHER_MODE "cipher-mode"
#define BLOCK_CRYPTO_OPT_LUKS_IVGEN_ALG "ivgen-alg"
#define BLOCK_CRYPTO_OPT_LUKS_IVGEN_HASH_ALG "ivgen-hash-alg"
#define BLOCK_CRYPTO_OPT_LUKS_HASH_ALG "hash-alg"
typedef struct BlockCrypto BlockCrypto;
struct BlockCrypto {
QCryptoBlock *block;
};
static int block_crypto_probe_generic(QCryptoBlockFormat format,
const uint8_t *buf,
int buf_size,
const char *filename)
{
if (qcrypto_block_has_format(format, buf, buf_size)) {
return 100;
} else {
return 0;
}
}
static ssize_t block_crypto_read_func(QCryptoBlock *block,
size_t offset,
uint8_t *buf,
size_t buflen,
Error **errp,
void *opaque)
{
BlockDriverState *bs = opaque;
ssize_t ret;
ret = bdrv_pread(bs->file->bs, offset, buf, buflen);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not read encryption header");
return ret;
}
return ret;
}
struct BlockCryptoCreateData {
const char *filename;
QemuOpts *opts;
BlockBackend *blk;
uint64_t size;
};
static ssize_t block_crypto_write_func(QCryptoBlock *block,
size_t offset,
const uint8_t *buf,
size_t buflen,
Error **errp,
void *opaque)
{
struct BlockCryptoCreateData *data = opaque;
ssize_t ret;
ret = blk_pwrite(data->blk, offset, buf, buflen, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not write encryption header");
return ret;
}
return ret;
}
static ssize_t block_crypto_init_func(QCryptoBlock *block,
size_t headerlen,
Error **errp,
void *opaque)
{
struct BlockCryptoCreateData *data = opaque;
int ret;
/* The user-provided size should reflect the amount of space made
 * available to the guest, so we must add the space that will be
 * consumed by the crypto header
 */
data->size += headerlen;
qemu_opt_set_number(data->opts, BLOCK_OPT_SIZE, data->size, &error_abort);
ret = bdrv_create_file(data->filename, data->opts, errp);
if (ret < 0) {
return -1;
}
data->blk = blk_new_open(data->filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, errp);
if (!data->blk) {
return -1;
}
return 0;
}
static QemuOptsList block_crypto_runtime_opts_luks = {
.name = "crypto",
.head = QTAILQ_HEAD_INITIALIZER(block_crypto_runtime_opts_luks.head),
.desc = {
{
.name = BLOCK_CRYPTO_OPT_LUKS_KEY_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of the secret that provides the encryption key",
},
{ /* end of list */ }
},
};
static QemuOptsList block_crypto_create_opts_luks = {
.name = "crypto",
.head = QTAILQ_HEAD_INITIALIZER(block_crypto_create_opts_luks.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_KEY_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of the secret that provides the encryption key",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_CIPHER_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of encryption cipher algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_CIPHER_MODE,
.type = QEMU_OPT_STRING,
.help = "Name of encryption cipher mode",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_IVGEN_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of IV generator algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_IVGEN_HASH_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of IV generator hash algorithm",
},
{
.name = BLOCK_CRYPTO_OPT_LUKS_HASH_ALG,
.type = QEMU_OPT_STRING,
.help = "Name of encryption hash algorithm",
},
{ /* end of list */ }
},
};
static QCryptoBlockOpenOptions *
block_crypto_open_opts_init(QCryptoBlockFormat format,
QemuOpts *opts,
Error **errp)
{
OptsVisitor *ov;
QCryptoBlockOpenOptions *ret = NULL;
Error *local_err = NULL;
ret = g_new0(QCryptoBlockOpenOptions, 1);
ret->format = format;
ov = opts_visitor_new(opts);
visit_start_struct(opts_get_visitor(ov),
NULL, NULL, 0, &local_err);
if (local_err) {
goto out;
}
switch (format) {
case Q_CRYPTO_BLOCK_FORMAT_LUKS:
visit_type_QCryptoBlockOptionsLUKS_members(
opts_get_visitor(ov), &ret->u.luks, &local_err);
break;
default:
error_setg(&local_err, "Unsupported block format %d", format);
break;
}
if (!local_err) {
visit_check_struct(opts_get_visitor(ov), &local_err);
}
visit_end_struct(opts_get_visitor(ov));
out:
if (local_err) {
error_propagate(errp, local_err);
qapi_free_QCryptoBlockOpenOptions(ret);
ret = NULL;
}
opts_visitor_cleanup(ov);
return ret;
}
static QCryptoBlockCreateOptions *
block_crypto_create_opts_init(QCryptoBlockFormat format,
QemuOpts *opts,
Error **errp)
{
OptsVisitor *ov;
QCryptoBlockCreateOptions *ret = NULL;
Error *local_err = NULL;
ret = g_new0(QCryptoBlockCreateOptions, 1);
ret->format = format;
ov = opts_visitor_new(opts);
visit_start_struct(opts_get_visitor(ov),
NULL, NULL, 0, &local_err);
if (local_err) {
goto out;
}
switch (format) {
case Q_CRYPTO_BLOCK_FORMAT_LUKS:
visit_type_QCryptoBlockCreateOptionsLUKS_members(
opts_get_visitor(ov), &ret->u.luks, &local_err);
break;
default:
error_setg(&local_err, "Unsupported block format %d", format);
break;
}
if (!local_err) {
visit_check_struct(opts_get_visitor(ov), &local_err);
}
visit_end_struct(opts_get_visitor(ov));
out:
if (local_err) {
error_propagate(errp, local_err);
qapi_free_QCryptoBlockCreateOptions(ret);
ret = NULL;
}
opts_visitor_cleanup(ov);
return ret;
}
static int block_crypto_open_generic(QCryptoBlockFormat format,
QemuOptsList *opts_spec,
BlockDriverState *bs,
QDict *options,
int flags,
Error **errp)
{
BlockCrypto *crypto = bs->opaque;
QemuOpts *opts = NULL;
Error *local_err = NULL;
int ret = -EINVAL;
QCryptoBlockOpenOptions *open_opts = NULL;
unsigned int cflags = 0;
opts = qemu_opts_create(opts_spec, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto cleanup;
}
open_opts = block_crypto_open_opts_init(format, opts, errp);
if (!open_opts) {
goto cleanup;
}
if (flags & BDRV_O_NO_IO) {
cflags |= QCRYPTO_BLOCK_OPEN_NO_IO;
}
crypto->block = qcrypto_block_open(open_opts,
block_crypto_read_func,
bs,
cflags,
errp);
if (!crypto->block) {
ret = -EIO;
goto cleanup;
}
bs->encrypted = 1;
bs->valid_key = 1;
ret = 0;
cleanup:
qapi_free_QCryptoBlockOpenOptions(open_opts);
return ret;
}
static int block_crypto_create_generic(QCryptoBlockFormat format,
const char *filename,
QemuOpts *opts,
Error **errp)
{
int ret = -EINVAL;
QCryptoBlockCreateOptions *create_opts = NULL;
QCryptoBlock *crypto = NULL;
struct BlockCryptoCreateData data = {
.size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE),
.opts = opts,
.filename = filename,
};
create_opts = block_crypto_create_opts_init(format, opts, errp);
if (!create_opts) {
return -1;
}
crypto = qcrypto_block_create(create_opts,
block_crypto_init_func,
block_crypto_write_func,
&data,
errp);
if (!crypto) {
ret = -EIO;
goto cleanup;
}
ret = 0;
cleanup:
qcrypto_block_free(crypto);
blk_unref(data.blk);
qapi_free_QCryptoBlockCreateOptions(create_opts);
return ret;
}
static int block_crypto_truncate(BlockDriverState *bs, int64_t offset)
{
BlockCrypto *crypto = bs->opaque;
size_t payload_offset =
qcrypto_block_get_payload_offset(crypto->block);
offset += payload_offset;
return bdrv_truncate(bs->file->bs, offset);
}
static void block_crypto_close(BlockDriverState *bs)
{
BlockCrypto *crypto = bs->opaque;
qcrypto_block_free(crypto->block);
}
#define BLOCK_CRYPTO_MAX_SECTORS 32
static coroutine_fn int
block_crypto_co_readv(BlockDriverState *bs, int64_t sector_num,
int remaining_sectors, QEMUIOVector *qiov)
{
BlockCrypto *crypto = bs->opaque;
int cur_nr_sectors; /* number of sectors in current iteration */
uint64_t bytes_done = 0;
uint8_t *cipher_data = NULL;
QEMUIOVector hd_qiov;
int ret = 0;
size_t payload_offset =
qcrypto_block_get_payload_offset(crypto->block) / 512;
qemu_iovec_init(&hd_qiov, qiov->niov);
/* Use a bounce buffer so we have a linear memory region for the
 * whole chunk. XXX optimize to avoid the bounce buffer when
 * qiov->niov == 1
 */
cipher_data =
qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_SECTORS * 512,
qiov->size));
if (cipher_data == NULL) {
ret = -ENOMEM;
goto cleanup;
}
while (remaining_sectors) {
cur_nr_sectors = remaining_sectors;
if (cur_nr_sectors > BLOCK_CRYPTO_MAX_SECTORS) {
cur_nr_sectors = BLOCK_CRYPTO_MAX_SECTORS;
}
qemu_iovec_reset(&hd_qiov);
qemu_iovec_add(&hd_qiov, cipher_data, cur_nr_sectors * 512);
ret = bdrv_co_readv(bs->file->bs,
payload_offset + sector_num,
cur_nr_sectors, &hd_qiov);
if (ret < 0) {
goto cleanup;
}
if (qcrypto_block_decrypt(crypto->block,
sector_num,
cipher_data, cur_nr_sectors * 512,
NULL) < 0) {
ret = -EIO;
goto cleanup;
}
qemu_iovec_from_buf(qiov, bytes_done,
cipher_data, cur_nr_sectors * 512);
remaining_sectors -= cur_nr_sectors;
sector_num += cur_nr_sectors;
bytes_done += cur_nr_sectors * 512;
}
cleanup:
qemu_iovec_destroy(&hd_qiov);
qemu_vfree(cipher_data);
return ret;
}
static coroutine_fn int
block_crypto_co_writev(BlockDriverState *bs, int64_t sector_num,
int remaining_sectors, QEMUIOVector *qiov)
{
BlockCrypto *crypto = bs->opaque;
int cur_nr_sectors; /* number of sectors in current iteration */
uint64_t bytes_done = 0;
uint8_t *cipher_data = NULL;
QEMUIOVector hd_qiov;
int ret = 0;
size_t payload_offset =
qcrypto_block_get_payload_offset(crypto->block) / 512;
qemu_iovec_init(&hd_qiov, qiov->niov);
/* Use a bounce buffer so we have a linear memory region for the
 * whole chunk. XXX optimize to avoid the bounce buffer when
 * qiov->niov == 1
 */
cipher_data =
qemu_try_blockalign(bs->file->bs, MIN(BLOCK_CRYPTO_MAX_SECTORS * 512,
qiov->size));
if (cipher_data == NULL) {
ret = -ENOMEM;
goto cleanup;
}
while (remaining_sectors) {
cur_nr_sectors = remaining_sectors;
if (cur_nr_sectors > BLOCK_CRYPTO_MAX_SECTORS) {
cur_nr_sectors = BLOCK_CRYPTO_MAX_SECTORS;
}
qemu_iovec_to_buf(qiov, bytes_done,
cipher_data, cur_nr_sectors * 512);
if (qcrypto_block_encrypt(crypto->block,
sector_num,
cipher_data, cur_nr_sectors * 512,
NULL) < 0) {
ret = -EIO;
goto cleanup;
}
qemu_iovec_reset(&hd_qiov);
qemu_iovec_add(&hd_qiov, cipher_data, cur_nr_sectors * 512);
ret = bdrv_co_writev(bs->file->bs,
payload_offset + sector_num,
cur_nr_sectors, &hd_qiov);
if (ret < 0) {
goto cleanup;
}
remaining_sectors -= cur_nr_sectors;
sector_num += cur_nr_sectors;
bytes_done += cur_nr_sectors * 512;
}
cleanup:
qemu_iovec_destroy(&hd_qiov);
qemu_vfree(cipher_data);
return ret;
}
static int64_t block_crypto_getlength(BlockDriverState *bs)
{
BlockCrypto *crypto = bs->opaque;
int64_t len = bdrv_getlength(bs->file->bs);
ssize_t offset = qcrypto_block_get_payload_offset(crypto->block);
len -= offset;
return len;
}
static int block_crypto_probe_luks(const uint8_t *buf,
int buf_size,
const char *filename) {
return block_crypto_probe_generic(Q_CRYPTO_BLOCK_FORMAT_LUKS,
buf, buf_size, filename);
}
static int block_crypto_open_luks(BlockDriverState *bs,
QDict *options,
int flags,
Error **errp)
{
return block_crypto_open_generic(Q_CRYPTO_BLOCK_FORMAT_LUKS,
&block_crypto_runtime_opts_luks,
bs, options, flags, errp);
}
static int block_crypto_create_luks(const char *filename,
QemuOpts *opts,
Error **errp)
{
return block_crypto_create_generic(Q_CRYPTO_BLOCK_FORMAT_LUKS,
filename, opts, errp);
}
BlockDriver bdrv_crypto_luks = {
.format_name = "luks",
.instance_size = sizeof(BlockCrypto),
.bdrv_probe = block_crypto_probe_luks,
.bdrv_open = block_crypto_open_luks,
.bdrv_close = block_crypto_close,
.bdrv_create = block_crypto_create_luks,
.bdrv_truncate = block_crypto_truncate,
.create_opts = &block_crypto_create_opts_luks,
.bdrv_co_readv = block_crypto_co_readv,
.bdrv_co_writev = block_crypto_co_writev,
.bdrv_getlength = block_crypto_getlength,
};
static void block_crypto_init(void)
{
bdrv_register(&bdrv_crypto_luks);
}
block_init(block_crypto_init);
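block_crypto_co_readv()/..._co_writev() above process each request in bounced chunks of at most BLOCK_CRYPTO_MAX_SECTORS, transforming every chunk before passing it on. A sketch of that chunking loop with a stand-in transform; the XOR is a placeholder for readability only, not real encryption:

/* Chunked in-place transform over a buffer, bounce-buffer style. */
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE         512
#define MAX_SECTORS_PER_IO  32     /* mirrors BLOCK_CRYPTO_MAX_SECTORS */

/* Placeholder "cipher": XOR with a constant byte. Not real encryption. */
static void transform(unsigned char *buf, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++) {
        buf[i] ^= 0xA5;
    }
}

static void process(unsigned char *data, size_t nr_sectors)
{
    unsigned char bounce[MAX_SECTORS_PER_IO * SECTOR_SIZE];
    size_t done = 0;

    while (done < nr_sectors) {
        size_t cur = nr_sectors - done;
        if (cur > MAX_SECTORS_PER_IO) {
            cur = MAX_SECTORS_PER_IO;
        }
        memcpy(bounce, data + done * SECTOR_SIZE, cur * SECTOR_SIZE);
        transform(bounce, cur * SECTOR_SIZE);   /* en/decrypt the chunk */
        memcpy(data + done * SECTOR_SIZE, bounce, cur * SECTOR_SIZE);
        done += cur;
    }
}

int main(void)
{
    static unsigned char data[100 * SECTOR_SIZE];   /* 100 sectors of zeros */

    process(data, 100);                             /* "encrypt" */
    process(data, 100);                             /* "decrypt" back */
    printf("first byte after round trip: %u\n", data[0]);   /* prints 0 */
    return 0;
}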

View File

@@ -21,31 +21,19 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qstring.h"
#include "crypto/secret.h"
#include <curl/curl.h>
#include "qemu/cutils.h"
// #define DEBUG_CURL
// #define DEBUG_VERBOSE
#ifdef DEBUG_CURL
#define DEBUG_CURL_PRINT 1
#define DPRINTF(fmt, ...) do { printf(fmt, ## __VA_ARGS__); } while (0)
#else
#define DEBUG_CURL_PRINT 0
#define DPRINTF(fmt, ...) do { } while (0)
#endif
#define DPRINTF(fmt, ...) \
do { \
if (DEBUG_CURL_PRINT) { \
fprintf(stderr, fmt, ## __VA_ARGS__); \
} \
} while (0)
#if LIBCURL_VERSION_NUM >= 0x071000
/* The multi interface timer callback was introduced in 7.16.0 */
@@ -87,10 +75,6 @@ static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
#define CURL_BLOCK_OPT_SSLVERIFY "sslverify"
#define CURL_BLOCK_OPT_TIMEOUT "timeout"
#define CURL_BLOCK_OPT_COOKIE "cookie"
#define CURL_BLOCK_OPT_USERNAME "username"
#define CURL_BLOCK_OPT_PASSWORD_SECRET "password-secret"
#define CURL_BLOCK_OPT_PROXY_USERNAME "proxy-username"
#define CURL_BLOCK_OPT_PROXY_PASSWORD_SECRET "proxy-password-secret"
struct BDRVCURLState;
@@ -133,10 +117,6 @@ typedef struct BDRVCURLState {
char *cookie;
bool accept_range;
AioContext *aio_context;
char *username;
char *password;
char *proxyusername;
char *proxypassword;
} BDRVCURLState;
static void curl_clean_state(CURLState *s);
@@ -172,20 +152,18 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
aio_set_fd_handler(s->aio_context, fd, false,
curl_multi_read, NULL, state);
aio_set_fd_handler(s->aio_context, fd, curl_multi_read,
NULL, state);
break;
case CURL_POLL_OUT:
aio_set_fd_handler(s->aio_context, fd, false,
NULL, curl_multi_do, state);
aio_set_fd_handler(s->aio_context, fd, NULL, curl_multi_do, state);
break;
case CURL_POLL_INOUT:
aio_set_fd_handler(s->aio_context, fd, false,
curl_multi_read, curl_multi_do, state);
aio_set_fd_handler(s->aio_context, fd, curl_multi_read,
curl_multi_do, state);
break;
case CURL_POLL_REMOVE:
aio_set_fd_handler(s->aio_context, fd, false,
NULL, NULL, NULL);
aio_set_fd_handler(s->aio_context, fd, NULL, NULL, NULL);
break;
}
@@ -319,18 +297,6 @@ static void curl_multi_check_completion(BDRVCURLState *s)
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
static int errcount = 100;
/* Don't lose the original error message from curl, since
* it contains extra data.
*/
if (errcount > 0) {
error_report("curl: %s", state->errmsg);
if (--errcount == 0) {
error_report("curl: further errors suppressed");
}
}
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
@@ -338,7 +304,7 @@ static void curl_multi_check_completion(BDRVCURLState *s)
continue;
}
acb->common.cb(acb->common.opaque, -EPROTO);
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_unref(acb);
state->acb[i] = NULL;
}
@@ -436,21 +402,6 @@ static CURLState *curl_init_state(BlockDriverState *bs, BDRVCURLState *s)
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
if (s->username) {
curl_easy_setopt(state->curl, CURLOPT_USERNAME, s->username);
}
if (s->password) {
curl_easy_setopt(state->curl, CURLOPT_PASSWORD, s->password);
}
if (s->proxyusername) {
curl_easy_setopt(state->curl,
CURLOPT_PROXYUSERNAME, s->proxyusername);
}
if (s->proxypassword) {
curl_easy_setopt(state->curl,
CURLOPT_PROXYPASSWORD, s->proxypassword);
}
/* Restrict supported protocols to avoid security issues in the more
 * obscure protocols. For example, do not allow POP3/SMTP/IMAP; see
 * CVE-2013-0249.
@@ -557,31 +508,10 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_STRING,
.help = "Pass the cookie or list of cookies with each request"
},
{
.name = CURL_BLOCK_OPT_USERNAME,
.type = QEMU_OPT_STRING,
.help = "Username for HTTP auth"
},
{
.name = CURL_BLOCK_OPT_PASSWORD_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of secret used as password for HTTP auth",
},
{
.name = CURL_BLOCK_OPT_PROXY_USERNAME,
.type = QEMU_OPT_STRING,
.help = "Username for HTTP proxy auth"
},
{
.name = CURL_BLOCK_OPT_PROXY_PASSWORD_SECRET,
.type = QEMU_OPT_STRING,
.help = "ID of secret used as password for HTTP proxy auth",
},
{ /* end of list */ }
},
};
static int curl_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
@@ -592,7 +522,6 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
const char *file;
const char *cookie;
double d;
const char *secretid;
static int inited = 0;
@@ -634,26 +563,6 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
goto out_noclean;
}
s->username = g_strdup(qemu_opt_get(opts, CURL_BLOCK_OPT_USERNAME));
secretid = qemu_opt_get(opts, CURL_BLOCK_OPT_PASSWORD_SECRET);
if (secretid) {
s->password = qcrypto_secret_lookup_as_utf8(secretid, errp);
if (!s->password) {
goto out_noclean;
}
}
s->proxyusername = g_strdup(
qemu_opt_get(opts, CURL_BLOCK_OPT_PROXY_USERNAME));
secretid = qemu_opt_get(opts, CURL_BLOCK_OPT_PROXY_PASSWORD_SECRET);
if (secretid) {
s->proxypassword = qcrypto_secret_lookup_as_utf8(secretid, errp);
if (!s->proxypassword) {
goto out_noclean;
}
}
if (!inited) {
curl_global_init(CURL_GLOBAL_ALL);
inited = 1;

View File

@@ -1,387 +0,0 @@
/*
* Block Dirty Bitmap
*
* Copyright (c) 2016 Red Hat. Inc
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
/**
* A BdrvDirtyBitmap can be in three possible states:
* (1) successor is NULL and disabled is false: full r/w mode
* (2) successor is NULL and disabled is true: read only mode ("disabled")
* (3) successor is set: frozen mode.
* A frozen bitmap cannot be renamed, deleted, anonymized, cleared, set,
* or enabled. A frozen bitmap can only abdicate() or reclaim().
*/
struct BdrvDirtyBitmap {
HBitmap *bitmap; /* Dirty sector bitmap implementation */
BdrvDirtyBitmap *successor; /* Anonymous child; implies frozen status */
char *name; /* Optional non-empty unique ID */
int64_t size; /* Size of the bitmap (Number of sectors) */
bool disabled; /* Bitmap is read-only */
QLIST_ENTRY(BdrvDirtyBitmap) list;
};
BdrvDirtyBitmap *bdrv_find_dirty_bitmap(BlockDriverState *bs, const char *name)
{
BdrvDirtyBitmap *bm;
assert(name);
QLIST_FOREACH(bm, &bs->dirty_bitmaps, list) {
if (bm->name && !strcmp(name, bm->name)) {
return bm;
}
}
return NULL;
}
void bdrv_dirty_bitmap_make_anon(BdrvDirtyBitmap *bitmap)
{
assert(!bdrv_dirty_bitmap_frozen(bitmap));
g_free(bitmap->name);
bitmap->name = NULL;
}
BdrvDirtyBitmap *bdrv_create_dirty_bitmap(BlockDriverState *bs,
uint32_t granularity,
const char *name,
Error **errp)
{
int64_t bitmap_size;
BdrvDirtyBitmap *bitmap;
uint32_t sector_granularity;
assert((granularity & (granularity - 1)) == 0);
if (name && bdrv_find_dirty_bitmap(bs, name)) {
error_setg(errp, "Bitmap already exists: %s", name);
return NULL;
}
sector_granularity = granularity >> BDRV_SECTOR_BITS;
assert(sector_granularity);
bitmap_size = bdrv_nb_sectors(bs);
if (bitmap_size < 0) {
error_setg_errno(errp, -bitmap_size, "could not get length of device");
errno = -bitmap_size;
return NULL;
}
bitmap = g_new0(BdrvDirtyBitmap, 1);
bitmap->bitmap = hbitmap_alloc(bitmap_size, ctz32(sector_granularity));
bitmap->size = bitmap_size;
bitmap->name = g_strdup(name);
bitmap->disabled = false;
QLIST_INSERT_HEAD(&bs->dirty_bitmaps, bitmap, list);
return bitmap;
}
bool bdrv_dirty_bitmap_frozen(BdrvDirtyBitmap *bitmap)
{
return bitmap->successor;
}
bool bdrv_dirty_bitmap_enabled(BdrvDirtyBitmap *bitmap)
{
return !(bitmap->disabled || bitmap->successor);
}
DirtyBitmapStatus bdrv_dirty_bitmap_status(BdrvDirtyBitmap *bitmap)
{
if (bdrv_dirty_bitmap_frozen(bitmap)) {
return DIRTY_BITMAP_STATUS_FROZEN;
} else if (!bdrv_dirty_bitmap_enabled(bitmap)) {
return DIRTY_BITMAP_STATUS_DISABLED;
} else {
return DIRTY_BITMAP_STATUS_ACTIVE;
}
}
/**
* Create a successor bitmap destined to replace this bitmap after an operation.
* Requires that the bitmap is not frozen and has no successor.
*/
int bdrv_dirty_bitmap_create_successor(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap, Error **errp)
{
uint64_t granularity;
BdrvDirtyBitmap *child;
if (bdrv_dirty_bitmap_frozen(bitmap)) {
error_setg(errp, "Cannot create a successor for a bitmap that is "
"currently frozen");
return -1;
}
assert(!bitmap->successor);
/* Create an anonymous successor */
granularity = bdrv_dirty_bitmap_granularity(bitmap);
child = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
if (!child) {
return -1;
}
/* Successor will be on or off based on our current state. */
child->disabled = bitmap->disabled;
/* Install the successor and freeze the parent */
bitmap->successor = child;
return 0;
}
/**
* For a bitmap with a successor, yield our name to the successor,
* delete the old bitmap, and return a handle to the new bitmap.
*/
BdrvDirtyBitmap *bdrv_dirty_bitmap_abdicate(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap,
Error **errp)
{
char *name;
BdrvDirtyBitmap *successor = bitmap->successor;
if (successor == NULL) {
error_setg(errp, "Cannot relinquish control if "
"there's no successor present");
return NULL;
}
name = bitmap->name;
bitmap->name = NULL;
successor->name = name;
bitmap->successor = NULL;
bdrv_release_dirty_bitmap(bs, bitmap);
return successor;
}
/**
* In cases of failure where we can no longer safely delete the parent,
* we may wish to re-join the parent and child/successor.
* The merged parent will be un-frozen, but not explicitly re-enabled.
*/
BdrvDirtyBitmap *bdrv_reclaim_dirty_bitmap(BlockDriverState *bs,
BdrvDirtyBitmap *parent,
Error **errp)
{
BdrvDirtyBitmap *successor = parent->successor;
if (!successor) {
error_setg(errp, "Cannot reclaim a successor when none is present");
return NULL;
}
if (!hbitmap_merge(parent->bitmap, successor->bitmap)) {
error_setg(errp, "Merging of parent and successor bitmap failed");
return NULL;
}
bdrv_release_dirty_bitmap(bs, successor);
parent->successor = NULL;
return parent;
}
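/*
 * Minimal caller sketch (illustrative only, not from the original file):
 * the intended freeze/abdicate/reclaim transaction built on the three
 * functions above.  backup_with_bitmap() and do_backup() are hypothetical
 * names; everything else is the API defined in this file.
 */
static int backup_with_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
                              Error **errp)
{
    int ret;
    /* Freeze the bitmap: writes from now on are tracked in an anonymous
     * successor instead of the (now read-only) parent. */
    if (bdrv_dirty_bitmap_create_successor(bs, bitmap, errp) < 0) {
        return -1;
    }
    ret = do_backup(bs, bitmap); /* hypothetical long-running operation */
    if (ret == 0) {
        /* Success: the successor inherits the name and the stale parent
         * bits are released. */
        bdrv_dirty_bitmap_abdicate(bs, bitmap, errp);
    } else {
        /* Failure: merge the successor back so no dirty information is
         * lost; the parent is un-frozen but not explicitly re-enabled. */
        bdrv_reclaim_dirty_bitmap(bs, bitmap, errp);
    }
    return ret;
}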
/**
* Truncates _all_ bitmaps attached to a BDS.
*/
void bdrv_dirty_bitmap_truncate(BlockDriverState *bs)
{
BdrvDirtyBitmap *bitmap;
uint64_t size = bdrv_nb_sectors(bs);
QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
assert(!bdrv_dirty_bitmap_frozen(bitmap));
hbitmap_truncate(bitmap->bitmap, size);
bitmap->size = size;
}
}
static void bdrv_do_release_matching_dirty_bitmap(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap,
bool only_named)
{
BdrvDirtyBitmap *bm, *next;
QLIST_FOREACH_SAFE(bm, &bs->dirty_bitmaps, list, next) {
if ((!bitmap || bm == bitmap) && (!only_named || bm->name)) {
assert(!bdrv_dirty_bitmap_frozen(bm));
QLIST_REMOVE(bm, list);
hbitmap_free(bm->bitmap);
g_free(bm->name);
g_free(bm);
if (bitmap) {
return;
}
}
}
}
void bdrv_release_dirty_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap)
{
bdrv_do_release_matching_dirty_bitmap(bs, bitmap, false);
}
/**
* Release all named dirty bitmaps attached to a BDS (for use in bdrv_close()).
* There must not be any frozen bitmaps attached.
*/
void bdrv_release_named_dirty_bitmaps(BlockDriverState *bs)
{
bdrv_do_release_matching_dirty_bitmap(bs, NULL, true);
}
void bdrv_disable_dirty_bitmap(BdrvDirtyBitmap *bitmap)
{
assert(!bdrv_dirty_bitmap_frozen(bitmap));
bitmap->disabled = true;
}
void bdrv_enable_dirty_bitmap(BdrvDirtyBitmap *bitmap)
{
assert(!bdrv_dirty_bitmap_frozen(bitmap));
bitmap->disabled = false;
}
BlockDirtyInfoList *bdrv_query_dirty_bitmaps(BlockDriverState *bs)
{
BdrvDirtyBitmap *bm;
BlockDirtyInfoList *list = NULL;
BlockDirtyInfoList **plist = &list;
QLIST_FOREACH(bm, &bs->dirty_bitmaps, list) {
BlockDirtyInfo *info = g_new0(BlockDirtyInfo, 1);
BlockDirtyInfoList *entry = g_new0(BlockDirtyInfoList, 1);
info->count = bdrv_get_dirty_count(bm);
info->granularity = bdrv_dirty_bitmap_granularity(bm);
info->has_name = !!bm->name;
info->name = g_strdup(bm->name);
info->status = bdrv_dirty_bitmap_status(bm);
entry->value = info;
*plist = entry;
plist = &entry->next;
}
return list;
}
int bdrv_get_dirty(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
int64_t sector)
{
if (bitmap) {
return hbitmap_get(bitmap->bitmap, sector);
} else {
return 0;
}
}
/**
* Chooses a default granularity based on the existing cluster size,
* but clamped between [4K, 64K]. Defaults to 64K in the case that there
* is no cluster size information available.
*/
uint32_t bdrv_get_default_bitmap_granularity(BlockDriverState *bs)
{
BlockDriverInfo bdi;
uint32_t granularity;
if (bdrv_get_info(bs, &bdi) >= 0 && bdi.cluster_size > 0) {
granularity = MAX(4096, bdi.cluster_size);
granularity = MIN(65536, granularity);
} else {
granularity = 65536;
}
return granularity;
}
uint32_t bdrv_dirty_bitmap_granularity(BdrvDirtyBitmap *bitmap)
{
return BDRV_SECTOR_SIZE << hbitmap_granularity(bitmap->bitmap);
}
void bdrv_dirty_iter_init(BdrvDirtyBitmap *bitmap, HBitmapIter *hbi)
{
hbitmap_iter_init(hbi, bitmap->bitmap, 0);
}
void bdrv_set_dirty_bitmap(BdrvDirtyBitmap *bitmap,
int64_t cur_sector, int nr_sectors)
{
assert(bdrv_dirty_bitmap_enabled(bitmap));
hbitmap_set(bitmap->bitmap, cur_sector, nr_sectors);
}
void bdrv_reset_dirty_bitmap(BdrvDirtyBitmap *bitmap,
int64_t cur_sector, int nr_sectors)
{
assert(bdrv_dirty_bitmap_enabled(bitmap));
hbitmap_reset(bitmap->bitmap, cur_sector, nr_sectors);
}
void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out)
{
assert(bdrv_dirty_bitmap_enabled(bitmap));
if (!out) {
hbitmap_reset_all(bitmap->bitmap);
} else {
HBitmap *backup = bitmap->bitmap;
bitmap->bitmap = hbitmap_alloc(bitmap->size,
hbitmap_granularity(backup));
*out = backup;
}
}
void bdrv_undo_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *in)
{
HBitmap *tmp = bitmap->bitmap;
assert(bdrv_dirty_bitmap_enabled(bitmap));
bitmap->bitmap = in;
hbitmap_free(tmp);
}
void bdrv_set_dirty(BlockDriverState *bs, int64_t cur_sector,
int nr_sectors)
{
BdrvDirtyBitmap *bitmap;
QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
if (!bdrv_dirty_bitmap_enabled(bitmap)) {
continue;
}
hbitmap_set(bitmap->bitmap, cur_sector, nr_sectors);
}
}
/**
* Advance an HBitmapIter to an arbitrary offset.
*/
void bdrv_set_dirty_iter(HBitmapIter *hbi, int64_t offset)
{
assert(hbi->hb);
hbitmap_iter_init(hbi, hbi->hb, offset);
}
int64_t bdrv_get_dirty_count(BdrvDirtyBitmap *bitmap)
{
return hbitmap_count(bitmap->bitmap);
}
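/*
 * Minimal sketch (illustrative only): walking every dirty sector recorded
 * in a bitmap with the iterator helpers above.  It assumes the
 * hbitmap_iter_next() helper from "qemu/hbitmap.h"; dump_dirty_sectors()
 * is a hypothetical name.
 */
static void dump_dirty_sectors(BdrvDirtyBitmap *bitmap)
{
    HBitmapIter hbi;
    int64_t sector;

    bdrv_dirty_iter_init(bitmap, &hbi);
    /* hbitmap_iter_next() returns the next dirty position, or a negative
     * value once the iterator is exhausted. */
    while ((sector = hbitmap_iter_next(&hbi)) >= 0) {
        printf("dirty sector %" PRId64 "\n", sector);
    }
}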

block/dmg.c

@@ -21,18 +21,11 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include <zlib.h>
#ifdef CONFIG_BZIP2
#include <bzlib.h>
#endif
#include <glib.h>
enum {
/* Limit chunk sizes to prevent unreasonable amounts of memory being used
@@ -62,9 +55,6 @@ typedef struct BDRVDMGState {
uint8_t *compressed_chunk;
uint8_t *uncompressed_chunk;
z_stream zstream;
#ifdef CONFIG_BZIP2
bz_stream bzstream;
#endif
} BDRVDMGState;
static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -87,7 +77,7 @@ static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
uint64_t buffer;
int ret;
ret = bdrv_pread(bs->file->bs, offset, &buffer, 8);
ret = bdrv_pread(bs->file, offset, &buffer, 8);
if (ret < 0) {
return ret;
}
@@ -101,7 +91,7 @@ static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
uint32_t buffer;
int ret;
ret = bdrv_pread(bs->file->bs, offset, &buffer, 4);
ret = bdrv_pread(bs->file, offset, &buffer, 4);
if (ret < 0) {
return ret;
}
@@ -110,16 +100,6 @@ static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
return 0;
}
static inline uint64_t buff_read_uint64(const uint8_t *buffer, int64_t offset)
{
return be64_to_cpu(*(uint64_t *)&buffer[offset]);
}
static inline uint32_t buff_read_uint32(const uint8_t *buffer, int64_t offset)
{
return be32_to_cpu(*(uint32_t *)&buffer[offset]);
}
/* Increase max chunk sizes, if necessary. This function is used to calculate
* the buffer sizes needed for compressed/uncompressed chunk I/O.
*/
@@ -132,7 +112,6 @@ static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
switch (s->types[chunk]) {
case 0x80000005: /* zlib compressed */
case 0x80000006: /* bzip2 compressed */
compressed_size = s->lengths[chunk];
uncompressed_sectors = s->sectorcounts[chunk];
break;
@@ -140,9 +119,7 @@ static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
uncompressed_sectors = (s->lengths[chunk] + 511) / 512;
break;
case 2: /* zero */
/* as the all-zeroes block may be large, it is treated specially: the
* sector is not copied from a large buffer, a simple memset is used
* instead. Therefore uncompressed_sectors does not need to be set. */
uncompressed_sectors = s->sectorcounts[chunk];
break;
}
@@ -154,374 +131,163 @@ static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
}
}
static int64_t dmg_find_koly_offset(BlockDriverState *file_bs, Error **errp)
{
int64_t length;
int64_t offset = 0;
uint8_t buffer[515];
int i, ret;
/* bdrv_getlength returns a multiple of block size (512), rounded up. Since
* dmg images can have odd sizes, try to look for the "koly" magic which
* marks the beginning of the UDIF trailer (512 bytes). This magic can be found
* in the last 511 bytes of the second-last sector or the first 4 bytes of
* the last sector (search space: 515 bytes) */
length = bdrv_getlength(file_bs);
if (length < 0) {
error_setg_errno(errp, -length,
"Failed to get file size while reading UDIF trailer");
return length;
} else if (length < 512) {
error_setg(errp, "dmg file must be at least 512 bytes long");
return -EINVAL;
}
if (length > 511 + 512) {
offset = length - 511 - 512;
}
length = length < 515 ? length : 515;
ret = bdrv_pread(file_bs, offset, buffer, length);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed while reading UDIF trailer");
return ret;
}
for (i = 0; i < length - 3; i++) {
if (buffer[i] == 'k' && buffer[i+1] == 'o' &&
buffer[i+2] == 'l' && buffer[i+3] == 'y') {
return offset + i;
}
}
error_setg(errp, "Could not locate UDIF trailer in dmg file");
return -EINVAL;
}
/* used when building the sector table */
typedef struct DmgHeaderState {
/* used internally by dmg_read_mish_block to remember offsets of blocks
* across calls */
uint64_t data_fork_offset;
/* exported for dmg_open */
uint32_t max_compressed_size;
uint32_t max_sectors_per_chunk;
} DmgHeaderState;
static bool dmg_is_known_block_type(uint32_t entry_type)
{
switch (entry_type) {
case 0x00000001: /* uncompressed */
case 0x00000002: /* zeroes */
case 0x80000005: /* zlib */
#ifdef CONFIG_BZIP2
case 0x80000006: /* bzip2 */
#endif
return true;
default:
return false;
}
}
static int dmg_read_mish_block(BDRVDMGState *s, DmgHeaderState *ds,
uint8_t *buffer, uint32_t count)
{
uint32_t type, i;
int ret;
size_t new_size;
uint32_t chunk_count;
int64_t offset = 0;
uint64_t data_offset;
uint64_t in_offset = ds->data_fork_offset;
uint64_t out_offset;
type = buff_read_uint32(buffer, offset);
/* skip data that is not a valid MISH block (invalid magic or too small) */
if (type != 0x6d697368 || count < 244) {
/* assume success for now */
return 0;
}
/* chunk offsets are relative to this sector number */
out_offset = buff_read_uint64(buffer, offset + 8);
/* location in data fork for (compressed) blob (in bytes) */
data_offset = buff_read_uint64(buffer, offset + 0x18);
in_offset += data_offset;
/* move to the beginning of the chunk entries */
offset += 204;
chunk_count = (count - 204) / 40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
s->types[i] = buff_read_uint32(buffer, offset);
if (!dmg_is_known_block_type(s->types[i])) {
chunk_count--;
i--;
offset += 40;
continue;
}
/* sector number */
s->sectors[i] = buff_read_uint64(buffer, offset + 8);
s->sectors[i] += out_offset;
/* sector count */
s->sectorcounts[i] = buff_read_uint64(buffer, offset + 0x10);
/* all-zeroes sector (type 2) does not need to be "uncompressed" and can
* therefore be unbounded. */
if (s->types[i] != 2 && s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
/* offset in (compressed) data fork */
s->offsets[i] = buff_read_uint64(buffer, offset + 0x18);
s->offsets[i] += in_offset;
/* length in (compressed) data fork */
s->lengths[i] = buff_read_uint64(buffer, offset + 0x20);
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
}
update_max_chunk_size(s, i, &ds->max_compressed_size,
&ds->max_sectors_per_chunk);
offset += 40;
}
s->n_chunks += chunk_count;
return 0;
fail:
return ret;
}
static int dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
uint64_t info_begin, uint64_t info_length)
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVDMGState *s = bs->opaque;
uint64_t info_begin, info_end, last_in_offset, last_out_offset;
uint32_t count, tmp;
uint32_t max_compressed_size = 1, max_sectors_per_chunk = 1, i;
int64_t offset;
int ret;
uint32_t count, rsrc_data_offset;
uint8_t *buffer = NULL;
uint64_t info_end;
uint64_t offset;
/* read offset from the beginning of the resource fork (info_begin) to the resource data */
ret = read_uint32(bs, info_begin, &rsrc_data_offset);
bs->read_only = 1;
s->n_chunks = 0;
s->offsets = s->lengths = s->sectors = s->sectorcounts = NULL;
/* read offset of info blocks */
offset = bdrv_getlength(bs->file);
if (offset < 0) {
ret = offset;
goto fail;
}
offset -= 0x1d8;
ret = read_uint64(bs, offset, &info_begin);
if (ret < 0) {
goto fail;
} else if (rsrc_data_offset > info_length) {
} else if (info_begin == 0) {
ret = -EINVAL;
goto fail;
}
/* read length of resource data */
ret = read_uint32(bs, info_begin + 8, &count);
ret = read_uint32(bs, info_begin, &tmp);
if (ret < 0) {
goto fail;
} else if (count == 0 || rsrc_data_offset + count > info_length) {
} else if (tmp != 0x100) {
ret = -EINVAL;
goto fail;
}
/* beginning of the resource data (consisting of one or more resources) */
offset = info_begin + rsrc_data_offset;
ret = read_uint32(bs, info_begin + 4, &count);
if (ret < 0) {
goto fail;
} else if (count == 0) {
ret = -EINVAL;
goto fail;
}
info_end = info_begin + count;
/* end of resource data (there is possibly a following resource map
* which will be ignored). */
info_end = offset + count;
offset = info_begin + 0x100;
/* read offsets (mish blocks) from one or more resources in resource data */
/* read offsets */
last_in_offset = last_out_offset = 0;
while (offset < info_end) {
/* size of following resource */
uint32_t type;
ret = read_uint32(bs, offset, &count);
if (ret < 0) {
goto fail;
} else if (count == 0 || count > info_end - offset) {
} else if (count == 0) {
ret = -EINVAL;
goto fail;
}
offset += 4;
buffer = g_realloc(buffer, count);
ret = bdrv_pread(bs->file->bs, offset, buffer, count);
ret = read_uint32(bs, offset, &type);
if (ret < 0) {
goto fail;
}
ret = dmg_read_mish_block(s, ds, buffer, count);
if (ret < 0) {
goto fail;
if (type == 0x6d697368 && count >= 244) {
size_t new_size;
uint32_t chunk_count;
offset += 4;
offset += 200;
chunk_count = (count - 204) / 40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
ret = read_uint32(bs, offset, &s->types[i]);
if (ret < 0) {
goto fail;
}
offset += 4;
if (s->types[i] != 0x80000005 && s->types[i] != 1 &&
s->types[i] != 2) {
if (s->types[i] == 0xffffffff && i > 0) {
last_in_offset = s->offsets[i - 1] + s->lengths[i - 1];
last_out_offset = s->sectors[i - 1] +
s->sectorcounts[i - 1];
}
chunk_count--;
i--;
offset += 36;
continue;
}
offset += 4;
ret = read_uint64(bs, offset, &s->sectors[i]);
if (ret < 0) {
goto fail;
}
s->sectors[i] += last_out_offset;
offset += 8;
ret = read_uint64(bs, offset, &s->sectorcounts[i]);
if (ret < 0) {
goto fail;
}
offset += 8;
if (s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
ret = read_uint64(bs, offset, &s->offsets[i]);
if (ret < 0) {
goto fail;
}
s->offsets[i] += last_in_offset;
offset += 8;
ret = read_uint64(bs, offset, &s->lengths[i]);
if (ret < 0) {
goto fail;
}
offset += 8;
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
}
update_max_chunk_size(s, i, &max_compressed_size,
&max_sectors_per_chunk);
}
s->n_chunks += chunk_count;
}
/* advance offset by size of resource */
offset += count;
}
ret = 0;
fail:
g_free(buffer);
return ret;
}
static int dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
uint64_t info_begin, uint64_t info_length)
{
BDRVDMGState *s = bs->opaque;
int ret;
uint8_t *buffer = NULL;
char *data_begin, *data_end;
/* Have at least some length to avoid NULL for g_malloc. Attempt to set a
* safe upper cap on the data length. A test sample had an XML length of
* about 1 MiB. */
if (info_length == 0 || info_length > 16 * 1024 * 1024) {
ret = -EINVAL;
goto fail;
}
buffer = g_malloc(info_length + 1);
buffer[info_length] = '\0';
ret = bdrv_pread(bs->file->bs, info_begin, buffer, info_length);
if (ret != info_length) {
ret = -EINVAL;
goto fail;
}
/* look for <data>...</data>. The data is 284 (0x11c) bytes after base64
* decode. The actual data element has 431 (0x1af) bytes which includes tabs
* and line feeds. */
data_end = (char *)buffer;
while ((data_begin = strstr(data_end, "<data>")) != NULL) {
guchar *mish;
gsize out_len = 0;
data_begin += 6;
data_end = strstr(data_begin, "</data>");
/* malformed XML? */
if (data_end == NULL) {
ret = -EINVAL;
goto fail;
}
*data_end++ = '\0';
mish = g_base64_decode(data_begin, &out_len);
ret = dmg_read_mish_block(s, ds, mish, (uint32_t)out_len);
g_free(mish);
if (ret < 0) {
goto fail;
}
}
ret = 0;
fail:
g_free(buffer);
return ret;
}
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVDMGState *s = bs->opaque;
DmgHeaderState ds;
uint64_t rsrc_fork_offset, rsrc_fork_length;
uint64_t plist_xml_offset, plist_xml_length;
int64_t offset;
int ret;
bs->read_only = 1;
bs->request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O supported */
s->n_chunks = 0;
s->offsets = s->lengths = s->sectors = s->sectorcounts = NULL;
/* used by dmg_read_mish_block to keep track of the current I/O position */
ds.data_fork_offset = 0;
ds.max_compressed_size = 1;
ds.max_sectors_per_chunk = 1;
/* locate the UDIF trailer */
offset = dmg_find_koly_offset(bs->file->bs, errp);
if (offset < 0) {
ret = offset;
goto fail;
}
/* offset of data fork (DataForkOffset) */
ret = read_uint64(bs, offset + 0x18, &ds.data_fork_offset);
if (ret < 0) {
goto fail;
} else if (ds.data_fork_offset > offset) {
ret = -EINVAL;
goto fail;
}
/* offset of resource fork (RsrcForkOffset) */
ret = read_uint64(bs, offset + 0x28, &rsrc_fork_offset);
if (ret < 0) {
goto fail;
}
ret = read_uint64(bs, offset + 0x30, &rsrc_fork_length);
if (ret < 0) {
goto fail;
}
if (rsrc_fork_offset >= offset ||
rsrc_fork_length > offset - rsrc_fork_offset) {
ret = -EINVAL;
goto fail;
}
/* offset of property list (XMLOffset) */
ret = read_uint64(bs, offset + 0xd8, &plist_xml_offset);
if (ret < 0) {
goto fail;
}
ret = read_uint64(bs, offset + 0xe0, &plist_xml_length);
if (ret < 0) {
goto fail;
}
if (plist_xml_offset >= offset ||
plist_xml_length > offset - plist_xml_offset) {
ret = -EINVAL;
goto fail;
}
ret = read_uint64(bs, offset + 0x1ec, (uint64_t *)&bs->total_sectors);
if (ret < 0) {
goto fail;
}
if (bs->total_sectors < 0) {
ret = -EINVAL;
goto fail;
}
if (rsrc_fork_length != 0) {
ret = dmg_read_resource_fork(bs, &ds,
rsrc_fork_offset, rsrc_fork_length);
if (ret < 0) {
goto fail;
}
} else if (plist_xml_length != 0) {
ret = dmg_read_plist_xml(bs, &ds, plist_xml_offset, plist_xml_length);
if (ret < 0) {
goto fail;
}
} else {
ret = -EINVAL;
goto fail;
}
/* initialize zlib engine */
s->compressed_chunk = qemu_try_blockalign(bs->file->bs,
ds.max_compressed_size + 1);
s->uncompressed_chunk = qemu_try_blockalign(bs->file->bs,
512 * ds.max_sectors_per_chunk);
s->compressed_chunk = qemu_try_blockalign(bs->file,
max_compressed_size + 1);
s->uncompressed_chunk = qemu_try_blockalign(bs->file,
512 * max_sectors_per_chunk);
if (s->compressed_chunk == NULL || s->uncompressed_chunk == NULL) {
ret = -ENOMEM;
goto fail;
@@ -583,20 +349,17 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
if (!is_sector_in_chunk(s, s->current_chunk, sector_num)) {
int ret;
uint32_t chunk = search_chunk(s, sector_num);
#ifdef CONFIG_BZIP2
uint64_t total_out;
#endif
if (chunk >= s->n_chunks) {
return -1;
}
s->current_chunk = s->n_chunks;
switch (s->types[chunk]) { /* block entry type */
switch (s->types[chunk]) {
case 0x80000005: { /* zlib compressed */
/* we need to buffer, because only the chunk as a whole can be
* inflated. */
ret = bdrv_pread(bs->file->bs, s->offsets[chunk],
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->compressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
@@ -616,44 +379,15 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
return -1;
}
break; }
#ifdef CONFIG_BZIP2
case 0x80000006: /* bzip2 compressed */
/* we need to buffer, because only the chunk as a whole can be
* inflated. */
ret = bdrv_pread(bs->file->bs, s->offsets[chunk],
s->compressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
ret = BZ2_bzDecompressInit(&s->bzstream, 0, 0);
if (ret != BZ_OK) {
return -1;
}
s->bzstream.next_in = (char *)s->compressed_chunk;
s->bzstream.avail_in = (unsigned int) s->lengths[chunk];
s->bzstream.next_out = (char *)s->uncompressed_chunk;
s->bzstream.avail_out = (unsigned int) 512 * s->sectorcounts[chunk];
ret = BZ2_bzDecompress(&s->bzstream);
total_out = ((uint64_t)s->bzstream.total_out_hi32 << 32) +
s->bzstream.total_out_lo32;
BZ2_bzDecompressEnd(&s->bzstream);
if (ret != BZ_STREAM_END ||
total_out != 512 * s->sectorcounts[chunk]) {
return -1;
}
break;
#endif /* CONFIG_BZIP2 */
case 1: /* copy */
ret = bdrv_pread(bs->file->bs, s->offsets[chunk],
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->uncompressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
break;
case 2: /* zero */
/* see dmg_read, it is treated specially. No buffer needs to be
* pre-filled, the zeroes can be set directly. */
memset(s->uncompressed_chunk, 0, 512 * s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
@@ -661,42 +395,31 @@ static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
return 0;
}
static int coroutine_fn
dmg_co_preadv(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
QEMUIOVector *qiov, int flags)
static int dmg_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVDMGState *s = bs->opaque;
uint64_t sector_num = offset >> BDRV_SECTOR_BITS;
int nb_sectors = bytes >> BDRV_SECTOR_BITS;
int ret, i;
assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
qemu_co_mutex_lock(&s->lock);
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_chunk;
void *data;
if (dmg_read_chunk(bs, sector_num + i) != 0) {
ret = -EIO;
goto fail;
}
/* Special case: current chunk is all zeroes. Do not perform a memcpy as
* s->uncompressed_chunk may be too small to cover the large all-zeroes
* section. dmg_read_chunk is called to find s->current_chunk */
if (s->types[s->current_chunk] == 2) { /* all zeroes block entry */
qemu_iovec_memset(qiov, i * 512, 0, 512);
continue;
return -1;
}
sector_offset_in_chunk = sector_num + i - s->sectors[s->current_chunk];
data = s->uncompressed_chunk + sector_offset_in_chunk * 512;
qemu_iovec_from_buf(qiov, i * 512, data, 512);
memcpy(buf + i * 512,
s->uncompressed_chunk + sector_offset_in_chunk * 512, 512);
}
return 0;
}
ret = 0;
fail:
static coroutine_fn int dmg_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVDMGState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = dmg_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
@@ -721,7 +444,7 @@ static BlockDriver bdrv_dmg = {
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_co_preadv = dmg_co_preadv,
.bdrv_read = dmg_co_read,
.bdrv_close = dmg_close,
};

block/gluster.c

@@ -7,10 +7,8 @@
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include <glusterfs/api/glfs.h>
#include "block/block_int.h"
#include "qapi/error.h"
#include "qemu/uri.h"
typedef struct GlusterAIOCB {
@@ -247,7 +245,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
if (!ret || ret == acb->size) {
acb->ret = 0; /* Success */
} else if (ret < 0) {
acb->ret = -errno; /* Read/Write failed */
acb->ret = ret; /* Read/Write failed */
} else {
acb->ret = -EIO; /* Partial read/write - fail it */
}
@@ -314,23 +312,6 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
goto out;
}
#ifdef CONFIG_GLUSTERFS_XLATOR_OPT
/* Without this, if fsync fails for a recoverable reason (for instance,
* ENOSPC), gluster will dump its cache, preventing retries. This means
* almost certain data loss. Not all gluster versions support the
* 'resync-failed-syncs-after-fsync' key value, but there is no way to
* discover during runtime if it is supported (this api returns success for
* unknown key/value pairs) */
ret = glfs_set_xlator_option(s->glfs, "*-write-behind",
"resync-failed-syncs-after-fsync",
"on");
if (ret < 0) {
error_setg_errno(errp, errno, "Unable to set xlator key/value pair");
ret = -errno;
goto out;
}
#endif
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
s->fd = glfs_open(s->glfs, gconf->image, open_flags);
@@ -383,16 +364,6 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
goto exit;
}
#ifdef CONFIG_GLUSTERFS_XLATOR_OPT
ret = glfs_set_xlator_option(reop_s->glfs, "*-write-behind",
"resync-failed-syncs-after-fsync", "on");
if (ret < 0) {
error_setg_errno(errp, errno, "Unable to set xlator key/value pair");
ret = -errno;
goto exit;
}
#endif
reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
if (reop_s->fd == NULL) {
/* reops->glfs will be cleaned up in _abort */
@@ -458,23 +429,28 @@ static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
int ret;
GlusterAIOCB acb;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
off_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb.size = size;
acb.ret = 0;
acb.coroutine = qemu_coroutine_self();
acb.aio_context = bdrv_get_aio_context(bs);
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
ret = glfs_zerofill_async(s->fd, offset, size, gluster_finish_aiocb, &acb);
ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
return -errno;
ret = -errno;
goto out;
}
qemu_coroutine_yield();
return acb.ret;
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static inline bool gluster_supports_zerofill(void)
@@ -565,30 +541,35 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
{
int ret;
GlusterAIOCB acb;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb.size = size;
acb.ret = 0;
acb.coroutine = qemu_coroutine_self();
acb.aio_context = bdrv_get_aio_context(bs);
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
if (write) {
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
gluster_finish_aiocb, &acb);
&gluster_finish_aiocb, acb);
} else {
ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
gluster_finish_aiocb, &acb);
&gluster_finish_aiocb, acb);
}
if (ret < 0) {
return -errno;
ret = -errno;
goto out;
}
qemu_coroutine_yield();
return acb.ret;
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
@@ -616,58 +597,28 @@ static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
}
static void qemu_gluster_close(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
if (s->fd) {
glfs_close(s->fd);
s->fd = NULL;
}
glfs_fini(s->glfs);
}
static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
{
int ret;
GlusterAIOCB acb;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
acb.size = 0;
acb.ret = 0;
acb.coroutine = qemu_coroutine_self();
acb.aio_context = bdrv_get_aio_context(bs);
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, &acb);
ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto error;
goto out;
}
qemu_coroutine_yield();
if (acb.ret < 0) {
ret = acb.ret;
goto error;
}
ret = acb->ret;
return acb.ret;
error:
/* Some versions of Gluster (3.5.6 -> 3.5.8?) will not retain their cache
* after an fsync failure, so we have no way of allowing the guest to safely
* continue. Gluster versions prior to 3.5.6 don't retain the cache
* either, but will invalidate the fd on error, so this is again our only
* option.
*
* The 'resync-failed-syncs-after-fsync' xlator option for the
* write-behind cache will cause later gluster versions to retain its
* cache after error, so long as the fd remains open. However, we
* currently have no way of knowing if this option is supported.
*
* TODO: Once gluster provides a way for us to determine if the option
* is supported, bypass the closure and setting drv to NULL. */
qemu_gluster_close(bs);
bs->drv = NULL;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
@@ -676,23 +627,28 @@ static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
int ret;
GlusterAIOCB acb;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb.size = 0;
acb.ret = 0;
acb.coroutine = qemu_coroutine_self();
acb.aio_context = bdrv_get_aio_context(bs);
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
ret = glfs_discard_async(s->fd, offset, size, gluster_finish_aiocb, &acb);
ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
return -errno;
ret = -errno;
goto out;
}
qemu_coroutine_yield();
return acb.ret;
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
#endif
@@ -723,6 +679,17 @@ static int64_t qemu_gluster_allocated_file_size(BlockDriverState *bs)
}
}
static void qemu_gluster_close(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
if (s->fd) {
glfs_close(s->fd);
s->fd = NULL;
}
glfs_fini(s->glfs);
}
static int qemu_gluster_has_zero_init(BlockDriverState *bs)
{
/* GlusterFS volume could be backed by a block device */

block/io.c: 2542 changed lines (diff suppressed because it is too large)

block/iscsi.c

@@ -2,7 +2,7 @@
* QEMU Block driver for iSCSI images
*
* Copyright (c) 2010-2011 Ronnie Sahlberg <ronniesahlberg@gmail.com>
* Copyright (c) 2012-2015 Peter Lieven <pl@kamp.de>
* Copyright (c) 2012-2014 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -23,7 +23,7 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "config-host.h"
#include <poll.h>
#include <math.h>
@@ -38,8 +38,6 @@
#include "qemu/iov.h"
#include "sysemu/sysemu.h"
#include "qmp-commands.h"
#include "qapi/qmp/qstring.h"
#include "crypto/secret.h"
#include <iscsi/iscsi.h>
#include <iscsi/scsi-lowlevel.h>
@@ -58,19 +56,15 @@ typedef struct IscsiLun {
uint64_t num_blocks;
int events;
QEMUTimer *nop_timer;
QEMUTimer *event_timer;
uint8_t lbpme;
uint8_t lbprz;
uint8_t has_write_same;
struct scsi_inquiry_logical_block_provisioning lbp;
struct scsi_inquiry_block_limits bl;
unsigned char *zeroblock;
unsigned long *allocationmap;
int cluster_sectors;
bool use_16_for_rw;
bool write_protected;
bool lbpme;
bool lbprz;
bool dpofua;
bool has_write_same;
bool request_timed_out;
} IscsiLun;
typedef struct IscsiTask {
@@ -83,7 +77,6 @@ typedef struct IscsiTask {
QEMUBH *bh;
IscsiLun *iscsilun;
QEMUTimer retry_timer;
int err_code;
} IscsiTask;
typedef struct IscsiAIOCB {
@@ -96,18 +89,15 @@ typedef struct IscsiAIOCB {
int status;
int64_t sector_num;
int nb_sectors;
int ret;
#ifdef __linux__
sg_io_hdr_t *ioh;
#endif
} IscsiAIOCB;
/* libiscsi uses time_t so it's enough to process events every second */
#define EVENT_INTERVAL 1000
#define NOP_INTERVAL 5000
#define MAX_NOP_FAILURES 3
#define ISCSI_CMD_RETRIES ARRAY_SIZE(iscsi_retry_times)
static const unsigned iscsi_retry_times[] = {8, 32, 128, 512, 2048, 8192, 32768};
static const unsigned iscsi_retry_times[] = {8, 32, 128, 512, 2048};
/* this threshold is a trade-off knob to choose between
* the potential additional overhead of an extra GET_LBA_STATUS request
@@ -170,70 +160,6 @@ static inline unsigned exp_random(double mean)
return -mean * log((double)rand() / RAND_MAX);
}
/* SCSI_SENSE_ASCQ_INVALID_FIELD_IN_PARAMETER_LIST was introduced in
* libiscsi 1.10.0, together with other constants we need. Use it as
* a hint that we have to define them ourselves if needed, to keep the
* minimum required libiscsi version at 1.9.0. We use an ASCQ macro for
* the test because SCSI_STATUS_* is an enum.
*
* To guard against future changes where SCSI_SENSE_ASCQ_* also becomes
* an enum, check against the LIBISCSI_API_VERSION macro, which was
* introduced in 1.11.0. If it is present, there is no need to define
* anything.
*/
#if !defined(SCSI_SENSE_ASCQ_INVALID_FIELD_IN_PARAMETER_LIST) && \
!defined(LIBISCSI_API_VERSION)
#define SCSI_STATUS_TASK_SET_FULL 0x28
#define SCSI_STATUS_TIMEOUT 0x0f000002
#define SCSI_SENSE_ASCQ_INVALID_FIELD_IN_PARAMETER_LIST 0x2600
#define SCSI_SENSE_ASCQ_PARAMETER_LIST_LENGTH_ERROR 0x1a00
#endif
static int iscsi_translate_sense(struct scsi_sense *sense)
{
int ret;
switch (sense->key) {
case SCSI_SENSE_NOT_READY:
return -EBUSY;
case SCSI_SENSE_DATA_PROTECTION:
return -EACCES;
case SCSI_SENSE_COMMAND_ABORTED:
return -ECANCELED;
case SCSI_SENSE_ILLEGAL_REQUEST:
/* Parse ASCQ */
break;
default:
return -EIO;
}
switch (sense->ascq) {
case SCSI_SENSE_ASCQ_PARAMETER_LIST_LENGTH_ERROR:
case SCSI_SENSE_ASCQ_INVALID_OPERATION_CODE:
case SCSI_SENSE_ASCQ_INVALID_FIELD_IN_CDB:
case SCSI_SENSE_ASCQ_INVALID_FIELD_IN_PARAMETER_LIST:
ret = -EINVAL;
break;
case SCSI_SENSE_ASCQ_LBA_OUT_OF_RANGE:
ret = -ENOSPC;
break;
case SCSI_SENSE_ASCQ_LOGICAL_UNIT_NOT_SUPPORTED:
ret = -ENOTSUP;
break;
case SCSI_SENSE_ASCQ_MEDIUM_NOT_PRESENT:
case SCSI_SENSE_ASCQ_MEDIUM_NOT_PRESENT_TRAY_CLOSED:
case SCSI_SENSE_ASCQ_MEDIUM_NOT_PRESENT_TRAY_OPEN:
ret = -ENOMEDIUM;
break;
case SCSI_SENSE_ASCQ_WRITE_PROTECTED:
ret = -EACCES;
break;
default:
ret = -EIO;
break;
}
return ret;
}
static void
iscsi_co_generic_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
@@ -254,19 +180,10 @@ iscsi_co_generic_cb(struct iscsi_context *iscsi, int status,
iTask->do_retry = 1;
goto out;
}
if (status == SCSI_STATUS_BUSY ||
status == SCSI_STATUS_TIMEOUT ||
status == SCSI_STATUS_TASK_SET_FULL) {
if (status == SCSI_STATUS_BUSY) {
unsigned retry_time =
exp_random(iscsi_retry_times[iTask->retries - 1]);
if (status == SCSI_STATUS_TIMEOUT) {
/* make sure the request is rescheduled AFTER the
* reconnect is initiated */
retry_time = EVENT_INTERVAL * 2;
iTask->iscsilun->request_timed_out = true;
}
error_report("iSCSI Busy/TaskSetFull/TimeOut"
" (retry #%u in %u ms): %s",
error_report("iSCSI Busy (retry #%u in %u ms): %s",
iTask->retries, retry_time,
iscsi_get_error(iscsi));
aio_timer_init(iTask->iscsilun->aio_context,
@@ -278,7 +195,6 @@ iscsi_co_generic_cb(struct iscsi_context *iscsi, int status,
return;
}
}
iTask->err_code = iscsi_translate_sense(&task->sense);
error_report("iSCSI Failure: %s", iscsi_get_error(iscsi));
}
@@ -339,36 +255,21 @@ static void
iscsi_set_events(IscsiLun *iscsilun)
{
struct iscsi_context *iscsi = iscsilun->iscsi;
int ev = iscsi_which_events(iscsi);
int ev;
/* We always register a read handler. */
ev = POLLIN;
ev |= iscsi_which_events(iscsi);
if (ev != iscsilun->events) {
aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
false,
(ev & POLLIN) ? iscsi_process_read : NULL,
aio_set_fd_handler(iscsilun->aio_context,
iscsi_get_fd(iscsi),
iscsi_process_read,
(ev & POLLOUT) ? iscsi_process_write : NULL,
iscsilun);
iscsilun->events = ev;
}
}
static void iscsi_timed_check_events(void *opaque)
{
IscsiLun *iscsilun = opaque;
/* check for timed out requests */
iscsi_service(iscsilun->iscsi, 0);
if (iscsilun->request_timed_out) {
iscsilun->request_timed_out = false;
iscsi_reconnect(iscsilun->iscsi);
}
/* newer versions of libiscsi may return zero events. Ensure we are able
* to return to service once this situation changes. */
iscsi_set_events(iscsilun);
timer_mod(iscsilun->event_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + EVENT_INTERVAL);
iscsilun->events = ev;
}
static void
@@ -448,19 +349,15 @@ static void iscsi_allocationmap_clear(IscsiLun *iscsilun, int64_t sector_num,
}
}
static int coroutine_fn
iscsi_co_writev_flags(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
QEMUIOVector *iov, int flags)
static int coroutine_fn iscsi_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
IscsiLun *iscsilun = bs->opaque;
struct IscsiTask iTask;
uint64_t lba;
uint32_t num_sectors;
bool fua = flags & BDRV_REQ_FUA;
if (fua) {
assert(iscsilun->dpofua);
}
if (!is_request_lun_aligned(sector_num, nb_sectors, iscsilun)) {
return -EINVAL;
}
@@ -478,12 +375,12 @@ retry:
if (iscsilun->use_16_for_rw) {
iTask.task = iscsi_write16_task(iscsilun->iscsi, iscsilun->lun, lba,
NULL, num_sectors * iscsilun->block_size,
iscsilun->block_size, 0, 0, fua, 0, 0,
iscsilun->block_size, 0, 0, 0, 0, 0,
iscsi_co_generic_cb, &iTask);
} else {
iTask.task = iscsi_write10_task(iscsilun->iscsi, iscsilun->lun, lba,
NULL, num_sectors * iscsilun->block_size,
iscsilun->block_size, 0, 0, fua, 0, 0,
iscsilun->block_size, 0, 0, 0, 0, 0,
iscsi_co_generic_cb, &iTask);
}
if (iTask.task == NULL) {
@@ -507,7 +404,7 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
return iTask.err_code;
return -EIO;
}
iscsi_allocationmap_set(iscsilun, sector_num, nb_sectors);
@@ -530,8 +427,7 @@ static bool iscsi_allocationmap_is_allocated(IscsiLun *iscsilun,
static int64_t coroutine_fn iscsi_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum,
BlockDriverState **file)
int nb_sectors, int *pnum)
{
IscsiLun *iscsilun = bs->opaque;
struct scsi_get_lba_status *lbas = NULL;
@@ -552,7 +448,7 @@ static int64_t coroutine_fn iscsi_co_get_block_status(BlockDriverState *bs,
*pnum = nb_sectors;
/* LUN does not support logical block provisioning */
if (!iscsilun->lbpme) {
if (iscsilun->lbpme == 0) {
goto out;
}
@@ -623,9 +519,6 @@ out:
if (iTask.task != NULL) {
scsi_free_scsi_task(iTask.task);
}
if (ret > 0 && ret & BDRV_BLOCK_OFFSET_VALID) {
*file = bs;
}
return ret;
}
@@ -652,8 +545,7 @@ static int coroutine_fn iscsi_co_readv(BlockDriverState *bs,
!iscsi_allocationmap_is_allocated(iscsilun, sector_num, nb_sectors)) {
int64_t ret;
int pnum;
BlockDriverState *file;
ret = iscsi_co_get_block_status(bs, sector_num, INT_MAX, &pnum, &file);
ret = iscsi_co_get_block_status(bs, sector_num, INT_MAX, &pnum);
if (ret < 0) {
return ret;
}
@@ -701,7 +593,7 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
return iTask.err_code;
return -EIO;
}
return 0;
@@ -712,7 +604,12 @@ static int coroutine_fn iscsi_co_flush(BlockDriverState *bs)
IscsiLun *iscsilun = bs->opaque;
struct IscsiTask iTask;
if (bs->sg) {
return 0;
}
iscsi_co_init_iscsitask(iscsilun, &iTask);
retry:
if (iscsi_synchronizecache10_task(iscsilun->iscsi, iscsilun->lun, 0, 0, 0,
0, iscsi_co_generic_cb, &iTask) == NULL) {
@@ -735,7 +632,7 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
return iTask.err_code;
return -EIO;
}
return 0;
@@ -755,13 +652,12 @@ iscsi_aio_ioctl_cb(struct iscsi_context *iscsi, int status,
if (status < 0) {
error_report("Failed to ioctl(SG_IO) to iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = iscsi_translate_sense(&acb->task->sense);
acb->status = -EIO;
}
acb->ioh->driver_status = 0;
acb->ioh->host_status = 0;
acb->ioh->resid = 0;
acb->ioh->status = status;
#define SG_ERR_DRIVER_SENSE 0x08
@@ -779,38 +675,6 @@ iscsi_aio_ioctl_cb(struct iscsi_context *iscsi, int status,
iscsi_schedule_bh(acb);
}
static void iscsi_ioctl_bh_completion(void *opaque)
{
IscsiAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
acb->common.cb(acb->common.opaque, acb->ret);
qemu_aio_unref(acb);
}
static void iscsi_ioctl_handle_emulated(IscsiAIOCB *acb, int req, void *buf)
{
BlockDriverState *bs = acb->common.bs;
IscsiLun *iscsilun = bs->opaque;
int ret = 0;
switch (req) {
case SG_GET_VERSION_NUM:
*(int *)buf = 30000;
break;
case SG_GET_SCSI_ID:
((struct sg_scsi_id *)buf)->scsi_type = iscsilun->type;
break;
default:
ret = -EINVAL;
}
assert(!acb->bh);
acb->bh = aio_bh_new(bdrv_get_aio_context(bs),
iscsi_ioctl_bh_completion, acb);
acb->ret = ret;
qemu_bh_schedule(acb->bh);
}
static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockCompletionFunc *cb, void *opaque)
@@ -820,6 +684,8 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
struct iscsi_data data;
IscsiAIOCB *acb;
assert(req == SG_IO);
acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque);
acb->iscsilun = iscsilun;
@@ -828,18 +694,6 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
acb->buf = NULL;
acb->ioh = buf;
if (req != SG_IO) {
iscsi_ioctl_handle_emulated(acb, req, buf);
return &acb->common;
}
if (acb->ioh->cmd_len > SCSI_CDB_MAX_SIZE) {
error_report("iSCSI: ioctl error CDB exceeds max size (%d > %d)",
acb->ioh->cmd_len, SCSI_CDB_MAX_SIZE);
qemu_aio_unref(acb);
return NULL;
}
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi command. %s",
@@ -904,6 +758,38 @@ static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
return &acb->common;
}
static void ioctl_cb(void *opaque, int status)
{
int *p_status = opaque;
*p_status = status;
}
static int iscsi_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
{
IscsiLun *iscsilun = bs->opaque;
int status;
switch (req) {
case SG_GET_VERSION_NUM:
*(int *)buf = 30000;
break;
case SG_GET_SCSI_ID:
((struct sg_scsi_id *)buf)->scsi_type = iscsilun->type;
break;
case SG_IO:
status = -EINPROGRESS;
iscsi_aio_ioctl(bs, req, buf, ioctl_cb, &status);
while (status == -EINPROGRESS) {
aio_poll(iscsilun->aio_context, true);
}
return 0;
default:
return -1;
}
return 0;
}
#endif
static int64_t
@@ -968,7 +854,7 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
return iTask.err_code;
return -EIO;
}
iscsi_allocationmap_clear(iscsilun, sector_num, nb_sectors);
@@ -1061,7 +947,7 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
return iTask.err_code;
return -EIO;
}
if (flags & BDRV_REQ_MAY_UNMAP) {
@@ -1080,8 +966,6 @@ static void parse_chap(struct iscsi_context *iscsi, const char *target,
QemuOpts *opts;
const char *user = NULL;
const char *password = NULL;
const char *secretid;
char *secret = NULL;
list = qemu_find_opts("iscsi");
if (!list) {
@@ -1101,20 +985,8 @@ static void parse_chap(struct iscsi_context *iscsi, const char *target,
return;
}
secretid = qemu_opt_get(opts, "password-secret");
password = qemu_opt_get(opts, "password");
if (secretid && password) {
error_setg(errp, "'password' and 'password-secret' properties are "
"mutually exclusive");
return;
}
if (secretid) {
secret = qcrypto_secret_lookup_as_utf8(secretid, errp);
if (!secret) {
return;
}
password = secret;
} else if (!password) {
if (!password) {
error_setg(errp, "CHAP username specified but no password was given");
return;
}
@@ -1122,8 +994,6 @@ static void parse_chap(struct iscsi_context *iscsi, const char *target,
if (iscsi_set_initiator_username_pwd(iscsi, user, password)) {
error_setg(errp, "Failed to set initiator username and password");
}
g_free(secret);
}
static void parse_header_digest(struct iscsi_context *iscsi, const char *target,
@@ -1198,37 +1068,16 @@ static char *parse_initiator_name(const char *target)
return iscsi_name;
}
static int parse_timeout(const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *timeout;
list = qemu_find_opts("iscsi");
if (list) {
opts = qemu_opts_find(list, target);
if (!opts) {
opts = QTAILQ_FIRST(&list->head);
}
if (opts) {
timeout = qemu_opt_get(opts, "timeout");
if (timeout) {
return atoi(timeout);
}
}
}
return 0;
}
static void iscsi_nop_timed_event(void *opaque)
{
IscsiLun *iscsilun = opaque;
if (iscsi_get_nops_in_flight(iscsilun->iscsi) >= MAX_NOP_FAILURES) {
if (iscsi_get_nops_in_flight(iscsilun->iscsi) > MAX_NOP_FAILURES) {
error_report("iSCSI: NOP timeout. Reconnecting...");
iscsilun->request_timed_out = true;
} else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
iscsi_reconnect(iscsilun->iscsi);
}
if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
error_report("iSCSI: failed to sent NOP-Out. Disabling NOP messages.");
return;
}
@@ -1260,17 +1109,12 @@ static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
} else {
iscsilun->block_size = rc16->block_length;
iscsilun->num_blocks = rc16->returned_lba + 1;
iscsilun->lbpme = !!rc16->lbpme;
iscsilun->lbprz = !!rc16->lbprz;
iscsilun->lbpme = rc16->lbpme;
iscsilun->lbprz = rc16->lbprz;
iscsilun->use_16_for_rw = (rc16->returned_lba > 0xffffffff);
}
break;
}
if (task != NULL && task->status == SCSI_STATUS_CHECK_CONDITION
&& task->sense.key == SCSI_SENSE_UNIT_ATTENTION) {
break;
}
/* Fall through and try READ CAPACITY(10) instead. */
break;
case TYPE_ROM:
task = iscsi_readcapacity10_sync(iscsilun->iscsi, iscsilun->lun, 0, 0);
if (task != NULL && task->status == SCSI_STATUS_GOOD) {
@@ -1296,11 +1140,7 @@ static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
&& retries-- > 0);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_setg(errp, "iSCSI: failed to send readcapacity10/16 command");
} else if (!iscsilun->block_size ||
iscsilun->block_size % BDRV_SECTOR_SIZE) {
error_setg(errp, "iSCSI: the target returned an invalid "
"block size of %d.", iscsilun->block_size);
error_setg(errp, "iSCSI: failed to send readcapacity10 command.");
}
if (task) {
scsi_free_scsi_task(task);
@@ -1363,8 +1203,9 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
{
IscsiLun *iscsilun = bs->opaque;
aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
false, NULL, NULL, NULL);
aio_set_fd_handler(iscsilun->aio_context,
iscsi_get_fd(iscsilun->iscsi),
NULL, NULL, NULL);
iscsilun->events = 0;
if (iscsilun->nop_timer) {
@@ -1372,11 +1213,6 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
timer_free(iscsilun->nop_timer);
iscsilun->nop_timer = NULL;
}
if (iscsilun->event_timer) {
timer_del(iscsilun->event_timer);
timer_free(iscsilun->event_timer);
iscsilun->event_timer = NULL;
}
}
static void iscsi_attach_aio_context(BlockDriverState *bs,
@@ -1393,22 +1229,13 @@ static void iscsi_attach_aio_context(BlockDriverState *bs,
iscsi_nop_timed_event, iscsilun);
timer_mod(iscsilun->nop_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + NOP_INTERVAL);
/* Set up a timer for periodic calls to iscsi_set_events and to
* scan for command timeout */
iscsilun->event_timer = aio_timer_new(iscsilun->aio_context,
QEMU_CLOCK_REALTIME, SCALE_MS,
iscsi_timed_check_events, iscsilun);
timer_mod(iscsilun->event_timer,
qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + EVENT_INTERVAL);
}
static void iscsi_modesense_sync(IscsiLun *iscsilun)
static bool iscsi_is_write_protected(IscsiLun *iscsilun)
{
struct scsi_task *task;
struct scsi_mode_sense *ms = NULL;
iscsilun->write_protected = false;
iscsilun->dpofua = false;
bool wrprotected = false;
task = iscsi_modesense6_sync(iscsilun->iscsi, iscsilun->lun,
1, SCSI_MODESENSE_PC_CURRENT,
@@ -1429,18 +1256,22 @@ static void iscsi_modesense_sync(IscsiLun *iscsilun)
iscsi_get_error(iscsilun->iscsi));
goto out;
}
iscsilun->write_protected = ms->device_specific_parameter & 0x80;
iscsilun->dpofua = ms->device_specific_parameter & 0x10;
wrprotected = ms->device_specific_parameter & 0x80;
out:
if (task) {
scsi_free_scsi_task(task);
}
return wrprotected;
}
/*
* We support iSCSI URLs of the form
* iscsi://[<username>%<password>@]<host>[:<port>]/<targetname>/<lun>
*
* Note: flags are currently not used by iscsi_open. If this function
* is changed such that flags are used, please examine iscsi_reopen_prepare()
* to see if it needs to be changed as well.
*/
static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
@@ -1455,7 +1286,14 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
int i, ret = 0, timeout = 0;
int i, ret = 0;
if ((BDRV_SECTOR_SIZE % 512) != 0) {
error_setg(errp, "iSCSI: Invalid BDRV_SECTOR_SIZE. "
"BDRV_SECTOR_SIZE(%lld) is not a multiple "
"of 512", BDRV_SECTOR_SIZE);
return -EINVAL;
}
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
@@ -1491,7 +1329,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
goto out;
}
if (iscsi_url->user[0] != '\0') {
if (iscsi_url->user != NULL) {
ret = iscsi_set_initiator_username_pwd(iscsi, iscsi_url->user,
iscsi_url->passwd);
if (ret != 0) {
@@ -1525,16 +1363,6 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
goto out;
}
/* timeout handling is broken in libiscsi before 1.15.0 */
timeout = parse_timeout(iscsi_url->target);
#if defined(LIBISCSI_API_VERSION) && LIBISCSI_API_VERSION >= 20150621
iscsi_set_timeout(iscsi, timeout);
#else
if (timeout) {
error_report("iSCSI: ignoring timeout value for libiscsi <1.15.0");
}
#endif
if (iscsi_full_connect_sync(iscsi, iscsi_url->portal, iscsi_url->lun) != 0) {
error_setg(errp, "iSCSI: Failed to connect to LUN : %s",
iscsi_get_error(iscsi));
@@ -1557,15 +1385,9 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
scsi_free_scsi_task(task);
task = NULL;
iscsi_modesense_sync(iscsilun);
if (iscsilun->dpofua) {
bs->supported_write_flags = BDRV_REQ_FUA;
}
bs->supported_zero_flags = BDRV_REQ_MAY_UNMAP;
/* Check the write protect flag of the LUN if we want to write */
if (iscsilun->type == TYPE_DISK && (flags & BDRV_O_RDWR) &&
iscsilun->write_protected) {
iscsi_is_write_protected(iscsilun)) {
error_setg(errp, "Cannot open a write protected LUN as read-write");
ret = -EACCES;
goto out;
@@ -1640,7 +1462,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
iscsilun->bl.opt_unmap_gran * iscsilun->block_size <= 16 * 1024 * 1024) {
iscsilun->cluster_sectors = (iscsilun->bl.opt_unmap_gran *
iscsilun->block_size) >> BDRV_SECTOR_BITS;
if (iscsilun->lbprz) {
if (iscsilun->lbprz && !(bs->open_flags & BDRV_O_NOCACHE)) {
iscsilun->allocationmap = iscsi_allocationmap_init(iscsilun);
if (iscsilun->allocationmap == NULL) {
ret = -ENOMEM;
@@ -1660,9 +1482,6 @@ out:
if (ret) {
if (iscsi != NULL) {
if (iscsi_is_logged_in(iscsi)) {
iscsi_logout_sync(iscsi);
}
iscsi_destroy_context(iscsi);
}
memset(iscsilun, 0, sizeof(IscsiLun));
@@ -1676,9 +1495,6 @@ static void iscsi_close(BlockDriverState *bs)
struct iscsi_context *iscsi = iscsilun->iscsi;
iscsi_detach_aio_context(bs);
if (iscsi_is_logged_in(iscsi)) {
iscsi_logout_sync(iscsi);
}
iscsi_destroy_context(iscsi);
g_free(iscsilun->zeroblock);
g_free(iscsilun->allocationmap);
@@ -1725,17 +1541,13 @@ static void iscsi_refresh_limits(BlockDriverState *bs, Error **errp)
sector_limits_lun2qemu(iscsilun->bl.opt_xfer_len, iscsilun);
}
/* Note that this will not re-establish a connection with an iSCSI target - it
* is effectively a NOP. */
/* Since iscsi_open() ignores bdrv_flags, there is nothing to do here in
* prepare. Note that this will not re-establish a connection with an iSCSI
* target - it is effectively a NOP. */
static int iscsi_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
IscsiLun *iscsilun = state->bs->opaque;
if (state->flags & BDRV_O_RDWR && iscsilun->write_protected) {
error_setg(errp, "Cannot open a write protected LUN as read-write");
return -EACCES;
}
/* NOP */
return 0;
}
@@ -1814,7 +1626,7 @@ out:
static int iscsi_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
IscsiLun *iscsilun = bs->opaque;
bdi->unallocated_blocks_are_zero = iscsilun->lbprz;
bdi->unallocated_blocks_are_zero = !!iscsilun->lbprz;
bdi->can_write_zeroes_with_unmap = iscsilun->lbprz && iscsilun->lbp.lbpws;
bdi->cluster_size = iscsilun->cluster_sectors * BDRV_SECTOR_SIZE;
return 0;
@@ -1854,10 +1666,11 @@ static BlockDriver bdrv_iscsi = {
.bdrv_co_discard = iscsi_co_discard,
.bdrv_co_write_zeroes = iscsi_co_write_zeroes,
.bdrv_co_readv = iscsi_co_readv,
.bdrv_co_writev_flags = iscsi_co_writev_flags,
.bdrv_co_writev = iscsi_co_writev,
.bdrv_co_flush_to_disk = iscsi_co_flush,
#ifdef __linux__
.bdrv_ioctl = iscsi_ioctl,
.bdrv_aio_ioctl = iscsi_aio_ioctl,
#endif
@@ -1877,11 +1690,6 @@ static QemuOptsList qemu_iscsi_opts = {
.name = "password",
.type = QEMU_OPT_STRING,
.help = "password for CHAP authentication to target",
},{
.name = "password-secret",
.type = QEMU_OPT_STRING,
.help = "ID of the secret providing password for CHAP "
"authentication to target",
},{
.name = "header-digest",
.type = QEMU_OPT_STRING,
@@ -1891,10 +1699,6 @@ static QemuOptsList qemu_iscsi_opts = {
.name = "initiator-name",
.type = QEMU_OPT_STRING,
.help = "Initiator iqn name to use when connecting",
},{
.name = "timeout",
.type = QEMU_OPT_NUMBER,
.help = "Request timeout in seconds (default 0 = no timeout)",
},
{ /* end of list */ }
},

block/linux-aio.c

@@ -7,7 +7,6 @@
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "block/aio.h"
#include "qemu/queue.h"
@@ -30,23 +29,23 @@
struct qemu_laiocb {
BlockAIOCB common;
LinuxAioState *ctx;
struct qemu_laio_state *ctx;
struct iocb iocb;
ssize_t ret;
size_t nbytes;
QEMUIOVector *qiov;
bool is_read;
QSIMPLEQ_ENTRY(qemu_laiocb) next;
QLIST_ENTRY(qemu_laiocb) node;
};
typedef struct {
struct iocb *iocbs[MAX_QUEUED_IO];
int plugged;
unsigned int n;
bool blocked;
QSIMPLEQ_HEAD(, qemu_laiocb) pending;
unsigned int size;
unsigned int idx;
} LaioQueue;
struct LinuxAioState {
struct qemu_laio_state {
io_context_t ctx;
EventNotifier e;
@@ -60,8 +59,6 @@ struct LinuxAioState {
int event_max;
};
static void ioq_submit(LinuxAioState *s);
static inline ssize_t io_event_ret(struct io_event *ev)
{
return (ssize_t)(((uint64_t)ev->res2 << 32) | ev->res);
@@ -70,7 +67,8 @@ static inline ssize_t io_event_ret(struct io_event *ev)
/*
* Completes an AIO request (calls the callback and frees the ACB).
*/
static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
static void qemu_laio_process_completion(struct qemu_laio_state *s,
struct qemu_laiocb *laiocb)
{
int ret;
@@ -98,7 +96,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
*
* The function is somewhat tricky because it supports nested event loops, for
* example when a request callback invokes aio_poll(). In order to do this,
* the completion events array and index are kept in LinuxAioState. The BH
* the completion events array and index are kept in qemu_laio_state. The BH
* reschedules itself as long as there are completions pending so it will
* either be called again in a nested event loop or will be called after all
* events have been completed. When there are no events left to complete, the
@@ -106,7 +104,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
*/
static void qemu_laio_completion_bh(void *opaque)
{
LinuxAioState *s = opaque;
struct qemu_laio_state *s = opaque;
/* Fetch more completion events when empty */
if (s->event_idx == s->event_max) {
@@ -135,17 +133,13 @@ static void qemu_laio_completion_bh(void *opaque)
laiocb->ret = io_event_ret(&s->events[s->event_idx]);
s->event_idx++;
qemu_laio_process_completion(laiocb);
}
if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
ioq_submit(s);
qemu_laio_process_completion(s, laiocb);
}
}
static void qemu_laio_completion_cb(EventNotifier *e)
{
LinuxAioState *s = container_of(e, LinuxAioState, e);
struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
if (event_notifier_test_and_clear(&s->e)) {
qemu_bh_schedule(s->completion_bh);
@@ -178,62 +172,82 @@ static const AIOCBInfo laio_aiocb_info = {
static void ioq_init(LaioQueue *io_q)
{
QSIMPLEQ_INIT(&io_q->pending);
io_q->size = MAX_QUEUED_IO;
io_q->idx = 0;
io_q->plugged = 0;
io_q->n = 0;
io_q->blocked = false;
}
static void ioq_submit(LinuxAioState *s)
static int ioq_submit(struct qemu_laio_state *s)
{
int ret, len;
struct qemu_laiocb *aiocb;
struct iocb *iocbs[MAX_QUEUED_IO];
QSIMPLEQ_HEAD(, qemu_laiocb) completed;
int ret, i = 0;
int len = s->io_q.idx;
do {
len = 0;
QSIMPLEQ_FOREACH(aiocb, &s->io_q.pending, next) {
iocbs[len++] = &aiocb->iocb;
if (len == MAX_QUEUED_IO) {
break;
}
}
ret = io_submit(s->ctx, len, s->io_q.iocbs);
} while (i++ < 3 && ret == -EAGAIN);
ret = io_submit(s->ctx, len, iocbs);
if (ret == -EAGAIN) {
break;
}
if (ret < 0) {
abort();
}
/* empty io queue */
s->io_q.idx = 0;
s->io_q.n -= ret;
aiocb = container_of(iocbs[ret - 1], struct qemu_laiocb, iocb);
QSIMPLEQ_SPLIT_AFTER(&s->io_q.pending, aiocb, next, &completed);
} while (ret == len && !QSIMPLEQ_EMPTY(&s->io_q.pending));
s->io_q.blocked = (s->io_q.n > 0);
if (ret < 0) {
i = 0;
} else {
i = ret;
}
for (; i < len; i++) {
struct qemu_laiocb *laiocb =
container_of(s->io_q.iocbs[i], struct qemu_laiocb, iocb);
laiocb->ret = (ret < 0) ? ret : -EIO;
qemu_laio_process_completion(s, laiocb);
}
return ret;
}
void laio_io_plug(BlockDriverState *bs, LinuxAioState *s)
static void ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
{
assert(!s->io_q.plugged);
s->io_q.plugged = 1;
}
unsigned int idx = s->io_q.idx;
void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s)
{
assert(s->io_q.plugged);
s->io_q.plugged = 0;
if (!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
s->io_q.iocbs[idx++] = iocb;
s->io_q.idx = idx;
/* submit immediately if queue is full */
if (idx == s->io_q.size) {
ioq_submit(s);
}
}
BlockAIOCB *laio_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
{
struct qemu_laio_state *s = aio_ctx;
s->io_q.plugged++;
}
int laio_io_unplug(BlockDriverState *bs, void *aio_ctx, bool unplug)
{
struct qemu_laio_state *s = aio_ctx;
int ret = 0;
assert(s->io_q.plugged > 0 || !unplug);
if (unplug && --s->io_q.plugged > 0) {
return 0;
}
if (s->io_q.idx > 0) {
ret = ioq_submit(s);
}
return ret;
}
BlockAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque, int type)
{
struct qemu_laio_state *s = aio_ctx;
struct qemu_laiocb *laiocb;
struct iocb *iocbs;
off_t offset = sector_num * 512;
@@ -262,11 +276,12 @@ BlockAIOCB *laio_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
}
io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
s->io_q.n++;
if (!s->io_q.blocked &&
(!s->io_q.plugged || s->io_q.n >= MAX_QUEUED_IO)) {
ioq_submit(s);
if (!s->io_q.plugged) {
if (io_submit(s->ctx, 1, &iocbs) < 0) {
goto out_free_aiocb;
}
} else {
ioq_enqueue(s, iocbs);
}
return &laiocb->common;
@@ -275,22 +290,25 @@ out_free_aiocb:
return NULL;
}
void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
void laio_detach_aio_context(void *s_, AioContext *old_context)
{
aio_set_event_notifier(old_context, &s->e, false, NULL);
struct qemu_laio_state *s = s_;
aio_set_event_notifier(old_context, &s->e, NULL);
qemu_bh_delete(s->completion_bh);
}
void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
void laio_attach_aio_context(void *s_, AioContext *new_context)
{
struct qemu_laio_state *s = s_;
s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
aio_set_event_notifier(new_context, &s->e, false,
qemu_laio_completion_cb);
aio_set_event_notifier(new_context, &s->e, qemu_laio_completion_cb);
}
LinuxAioState *laio_init(void)
void *laio_init(void)
{
LinuxAioState *s;
struct qemu_laio_state *s;
s = g_malloc0(sizeof(*s));
if (event_notifier_init(&s->e, false) < 0) {
@@ -312,8 +330,10 @@ out_free_state:
return NULL;
}
void laio_cleanup(LinuxAioState *s)
void laio_cleanup(void *s_)
{
struct qemu_laio_state *s = s_;
event_notifier_cleanup(&s->e);
if (io_destroy(s->ctx) != 0) {

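The linux-aio changes above revolve around how requests are queued while the device is "plugged" and then handed to io_submit() in one batch. Below is a minimal, self-contained sketch of that plug/unplug batching; submit_batch() merely stands in for the real io_submit() call, so this is an illustration of the control flow, not the QEMU implementation.

```c
#include <stdio.h>

/* Requests are queued while "plugged" and flushed in one batch on the final
 * unplug or as soon as the queue fills up. */
#define MAX_QUEUED_IO 4

typedef struct {
    int pending[MAX_QUEUED_IO];  /* stand-ins for struct iocb pointers  */
    unsigned int n;              /* number of queued requests           */
    int plugged;                 /* nesting counter, like io_q.plugged  */
} IoQueue;

static void submit_batch(IoQueue *q)
{
    printf("submitting batch of %u request(s)\n", q->n);
    q->n = 0;
}

static void ioq_enqueue(IoQueue *q, int req)
{
    q->pending[q->n++] = req;
    if (q->n == MAX_QUEUED_IO) {          /* submit immediately if full */
        submit_batch(q);
    }
}

static void io_plug(IoQueue *q)   { q->plugged++; }

static void io_unplug(IoQueue *q)
{
    if (--q->plugged == 0 && q->n > 0) {  /* flush on the outermost unplug */
        submit_batch(q);
    }
}

int main(void)
{
    IoQueue q = {0};
    io_plug(&q);
    for (int i = 0; i < 6; i++) {
        ioq_enqueue(&q, i);    /* 6 requests -> one full batch, 2 left over */
    }
    io_unplug(&q);             /* flushes the remaining 2 requests          */
    return 0;
}
```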

@@ -11,19 +11,14 @@
*
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "block/blockjob.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
#include "qemu/ratelimit.h"
#include "qemu/bitmap.h"
#define SLICE_TIME 100000000ULL /* ns */
#define MAX_IN_FLIGHT 16
#define DEFAULT_MIRROR_BUF_SIZE (10 << 20)
/* The mirroring buffer is a list of granularity-sized chunks.
* Free chunks are organized in a list.
@@ -35,7 +30,7 @@ typedef struct MirrorBuffer {
typedef struct MirrorBlockJob {
BlockJob common;
RateLimit limit;
BlockBackend *target;
BlockDriverState *target;
BlockDriverState *base;
/* The name of the graph node to replace */
char *replaces;
@@ -47,6 +42,7 @@ typedef struct MirrorBlockJob {
BlockdevOnError on_source_error, on_target_error;
bool synced;
bool should_complete;
int64_t sector_num;
int64_t granularity;
size_t buf_size;
int64_t bdev_length;
@@ -61,10 +57,6 @@ typedef struct MirrorBlockJob {
int in_flight;
int sectors_in_flight;
int ret;
bool unmap;
bool waiting_for_io;
int target_cluster_sectors;
int max_iov;
} MirrorBlockJob;
typedef struct MirrorOp {
@@ -79,11 +71,11 @@ static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
{
s->synced = false;
if (read) {
return block_job_error_action(&s->common, s->on_source_error,
true, error);
return block_job_error_action(&s->common, s->common.bs,
s->on_source_error, true, error);
} else {
return block_job_error_action(&s->common, s->on_target_error,
false, error);
return block_job_error_action(&s->common, s->target,
s->on_target_error, false, error);
}
}
@@ -107,7 +99,7 @@ static void mirror_iteration_done(MirrorOp *op, int ret)
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
chunk_num = op->sector_num / sectors_per_chunk;
nb_chunks = DIV_ROUND_UP(op->nb_sectors, sectors_per_chunk);
nb_chunks = op->nb_sectors / sectors_per_chunk;
bitmap_clear(s->in_flight_bitmap, chunk_num, nb_chunks);
if (ret >= 0) {
if (s->cow_bitmap) {
@@ -117,9 +109,13 @@ static void mirror_iteration_done(MirrorOp *op, int ret)
}
qemu_iovec_destroy(&op->qiov);
g_free(op);
g_slice_free(MirrorOp, op);
if (s->waiting_for_io) {
/* Enter coroutine when it is not sleeping. The coroutine sleeps to
* rate-limit itself. The coroutine will eventually resume since there is
* a sleep timeout so don't wake it early.
*/
if (s->common.busy) {
qemu_coroutine_enter(s->common.co, NULL);
}
}
@@ -129,9 +125,10 @@ static void mirror_write_complete(void *opaque, int ret)
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty_bitmap(s->dirty_bitmap, op->sector_num, op->nb_sectors);
bdrv_set_dirty(source, op->sector_num, op->nb_sectors);
action = mirror_error_action(s, false, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
@@ -145,9 +142,10 @@ static void mirror_read_complete(void *opaque, int ret)
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty_bitmap(s->dirty_bitmap, op->sector_num, op->nb_sectors);
bdrv_set_dirty(source, op->sector_num, op->nb_sectors);
action = mirror_error_action(s, true, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
@@ -156,102 +154,110 @@ static void mirror_read_complete(void *opaque, int ret)
mirror_iteration_done(op, ret);
return;
}
blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qiov,
op->nb_sectors * BDRV_SECTOR_SIZE,
bdrv_aio_writev(s->target, op->sector_num, &op->qiov, op->nb_sectors,
mirror_write_complete, op);
}
static inline void mirror_clip_sectors(MirrorBlockJob *s,
int64_t sector_num,
int *nb_sectors)
static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
{
*nb_sectors = MIN(*nb_sectors,
s->bdev_length / BDRV_SECTOR_SIZE - sector_num);
}
/* Round sector_num and/or nb_sectors to target cluster if COW is needed, and
* return the offset of the adjusted tail sector against original. */
static int mirror_cow_align(MirrorBlockJob *s,
int64_t *sector_num,
int *nb_sectors)
{
bool need_cow;
int ret = 0;
int chunk_sectors = s->granularity >> BDRV_SECTOR_BITS;
int64_t align_sector_num = *sector_num;
int align_nb_sectors = *nb_sectors;
int max_sectors = chunk_sectors * s->max_iov;
need_cow = !test_bit(*sector_num / chunk_sectors, s->cow_bitmap);
need_cow |= !test_bit((*sector_num + *nb_sectors - 1) / chunk_sectors,
s->cow_bitmap);
if (need_cow) {
bdrv_round_to_clusters(blk_bs(s->target), *sector_num, *nb_sectors,
&align_sector_num, &align_nb_sectors);
}
if (align_nb_sectors > max_sectors) {
align_nb_sectors = max_sectors;
if (need_cow) {
align_nb_sectors = QEMU_ALIGN_DOWN(align_nb_sectors,
s->target_cluster_sectors);
}
}
/* Clipping may result in align_nb_sectors unaligned to chunk boundary, but
* that doesn't matter because it's already the end of source image. */
mirror_clip_sectors(s, align_sector_num, &align_nb_sectors);
ret = align_sector_num + align_nb_sectors - (*sector_num + *nb_sectors);
*sector_num = align_sector_num;
*nb_sectors = align_nb_sectors;
assert(ret >= 0);
return ret;
}
static inline void mirror_wait_for_io(MirrorBlockJob *s)
{
assert(!s->waiting_for_io);
s->waiting_for_io = true;
qemu_coroutine_yield();
s->waiting_for_io = false;
}
/* Submit async read while handling COW.
* Returns: nb_sectors if no alignment is necessary, or
* (new_end - sector_num) if tail is rounded up or down due to
* alignment or buffer limit.
*/
static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
int nb_sectors)
{
BlockBackend *source = s->common.blk;
int sectors_per_chunk, nb_chunks;
int ret = nb_sectors;
BlockDriverState *source = s->common.bs;
int nb_sectors, sectors_per_chunk, nb_chunks;
int64_t end, sector_num, next_chunk, next_sector, hbitmap_next_sector;
uint64_t delay_ns = 0;
MirrorOp *op;
s->sector_num = hbitmap_iter_next(&s->hbi);
if (s->sector_num < 0) {
bdrv_dirty_iter_init(source, s->dirty_bitmap, &s->hbi);
s->sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s,
bdrv_get_dirty_count(source, s->dirty_bitmap));
assert(s->sector_num >= 0);
}
hbitmap_next_sector = s->sector_num;
sector_num = s->sector_num;
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
end = s->bdev_length / BDRV_SECTOR_SIZE;
/* We can only handle as much as buf_size at a time. */
nb_sectors = MIN(s->buf_size >> BDRV_SECTOR_BITS, nb_sectors);
assert(nb_sectors);
/* Extend the QEMUIOVector to include all adjacent blocks that will
* be copied in this operation.
*
* We have to do this if we have no backing file yet in the destination,
* and the cluster size is very large. Then we need to do COW ourselves.
* The first time a cluster is copied, copy it entirely. Note that,
* because both the granularity and the cluster size are powers of two,
* the number of sectors to copy cannot exceed one cluster.
*
* We also want to extend the QEMUIOVector to include more adjacent
* dirty blocks if possible, to limit the number of I/O operations and
* run efficiently even with a small granularity.
*/
nb_chunks = 0;
nb_sectors = 0;
next_sector = sector_num;
next_chunk = sector_num / sectors_per_chunk;
if (s->cow_bitmap) {
ret += mirror_cow_align(s, &sector_num, &nb_sectors);
}
assert(nb_sectors << BDRV_SECTOR_BITS <= s->buf_size);
/* The sector range must meet granularity because:
* 1) Caller passes in aligned values;
* 2) mirror_cow_align is used only when target cluster is larger. */
assert(!(sector_num % sectors_per_chunk));
nb_chunks = DIV_ROUND_UP(nb_sectors, sectors_per_chunk);
while (s->buf_free_count < nb_chunks) {
/* Wait for I/O to this cluster (from a previous iteration) to be done. */
while (test_bit(next_chunk, s->in_flight_bitmap)) {
trace_mirror_yield_in_flight(s, sector_num, s->in_flight);
mirror_wait_for_io(s);
qemu_coroutine_yield();
}
do {
int added_sectors, added_chunks;
if (!bdrv_get_dirty(source, s->dirty_bitmap, next_sector) ||
test_bit(next_chunk, s->in_flight_bitmap)) {
assert(nb_sectors > 0);
break;
}
added_sectors = sectors_per_chunk;
if (s->cow_bitmap && !test_bit(next_chunk, s->cow_bitmap)) {
bdrv_round_to_clusters(s->target,
next_sector, added_sectors,
&next_sector, &added_sectors);
/* On the first iteration, the rounding may make us copy
* sectors before the first dirty one.
*/
if (next_sector < sector_num) {
assert(nb_sectors == 0);
sector_num = next_sector;
next_chunk = next_sector / sectors_per_chunk;
}
}
added_sectors = MIN(added_sectors, end - (sector_num + nb_sectors));
added_chunks = (added_sectors + sectors_per_chunk - 1) / sectors_per_chunk;
/* When doing COW, it may happen that there is not enough space for
* a full cluster. Wait if that is the case.
*/
while (nb_chunks == 0 && s->buf_free_count < added_chunks) {
trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
qemu_coroutine_yield();
}
if (s->buf_free_count < nb_chunks + added_chunks) {
trace_mirror_break_buf_busy(s, nb_chunks, s->in_flight);
break;
}
/* We have enough free space to copy these sectors. */
bitmap_set(s->in_flight_bitmap, next_chunk, added_chunks);
nb_sectors += added_sectors;
nb_chunks += added_chunks;
next_sector += added_sectors;
next_chunk += added_chunks;
if (!s->synced && s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, added_sectors);
}
} while (delay_ns == 0 && next_sector < end);
/* Allocate a MirrorOp that is used as an AIO callback. */
op = g_new(MirrorOp, 1);
op = g_slice_new(MirrorOp);
op->s = s;
op->sector_num = sector_num;
op->nb_sectors = nb_sectors;
@@ -260,161 +266,34 @@ static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
* from s->buf_free.
*/
qemu_iovec_init(&op->qiov, nb_chunks);
next_sector = sector_num;
while (nb_chunks-- > 0) {
MirrorBuffer *buf = QSIMPLEQ_FIRST(&s->buf_free);
size_t remaining = nb_sectors * BDRV_SECTOR_SIZE - op->qiov.size;
size_t remaining = (nb_sectors * BDRV_SECTOR_SIZE) - op->qiov.size;
QSIMPLEQ_REMOVE_HEAD(&s->buf_free, next);
s->buf_free_count--;
qemu_iovec_add(&op->qiov, buf, MIN(s->granularity, remaining));
/* Advance the HBitmapIter in parallel, so that we do not examine
* the same sector twice.
*/
if (next_sector > hbitmap_next_sector
&& bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
hbitmap_next_sector = hbitmap_iter_next(&s->hbi);
}
next_sector += sectors_per_chunk;
}
bdrv_reset_dirty(source, sector_num, nb_sectors);
/* Copy the dirty cluster. */
s->in_flight++;
s->sectors_in_flight += nb_sectors;
trace_mirror_one_iteration(s, sector_num, nb_sectors);
blk_aio_preadv(source, sector_num * BDRV_SECTOR_SIZE, &op->qiov,
nb_sectors * BDRV_SECTOR_SIZE,
bdrv_aio_readv(source, sector_num, &op->qiov, nb_sectors,
mirror_read_complete, op);
return ret;
}
static void mirror_do_zero_or_discard(MirrorBlockJob *s,
int64_t sector_num,
int nb_sectors,
bool is_discard)
{
MirrorOp *op;
/* Allocate a MirrorOp that is used as an AIO callback. The qiov is zeroed
* so the freeing in mirror_iteration_done is nop. */
op = g_new0(MirrorOp, 1);
op->s = s;
op->sector_num = sector_num;
op->nb_sectors = nb_sectors;
s->in_flight++;
s->sectors_in_flight += nb_sectors;
if (is_discard) {
blk_aio_discard(s->target, sector_num, op->nb_sectors,
mirror_write_complete, op);
} else {
blk_aio_pwrite_zeroes(s->target, sector_num * BDRV_SECTOR_SIZE,
op->nb_sectors * BDRV_SECTOR_SIZE,
s->unmap ? BDRV_REQ_MAY_UNMAP : 0,
mirror_write_complete, op);
}
}
static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
{
BlockDriverState *source = blk_bs(s->common.blk);
int64_t sector_num, first_chunk;
uint64_t delay_ns = 0;
/* At least the first dirty chunk is mirrored in one iteration. */
int nb_chunks = 1;
int64_t end = s->bdev_length / BDRV_SECTOR_SIZE;
int sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
sector_num = hbitmap_iter_next(&s->hbi);
if (sector_num < 0) {
bdrv_dirty_iter_init(s->dirty_bitmap, &s->hbi);
sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s, bdrv_get_dirty_count(s->dirty_bitmap));
assert(sector_num >= 0);
}
first_chunk = sector_num / sectors_per_chunk;
while (test_bit(first_chunk, s->in_flight_bitmap)) {
trace_mirror_yield_in_flight(s, first_chunk, s->in_flight);
mirror_wait_for_io(s);
}
/* Find the number of consecutive dirty chunks following the first dirty
* one, and wait for in flight requests in them. */
while (nb_chunks * sectors_per_chunk < (s->buf_size >> BDRV_SECTOR_BITS)) {
int64_t hbitmap_next;
int64_t next_sector = sector_num + nb_chunks * sectors_per_chunk;
int64_t next_chunk = next_sector / sectors_per_chunk;
if (next_sector >= end ||
!bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
break;
}
if (test_bit(next_chunk, s->in_flight_bitmap)) {
break;
}
hbitmap_next = hbitmap_iter_next(&s->hbi);
if (hbitmap_next > next_sector || hbitmap_next < 0) {
/* The bitmap iterator's cache is stale, refresh it */
bdrv_set_dirty_iter(&s->hbi, next_sector);
hbitmap_next = hbitmap_iter_next(&s->hbi);
}
assert(hbitmap_next == next_sector);
nb_chunks++;
}
/* Clear dirty bits before querying the block status, because
* calling bdrv_get_block_status_above could yield - if some blocks are
* marked dirty in this window, we need to know.
*/
bdrv_reset_dirty_bitmap(s->dirty_bitmap, sector_num,
nb_chunks * sectors_per_chunk);
bitmap_set(s->in_flight_bitmap, sector_num / sectors_per_chunk, nb_chunks);
while (nb_chunks > 0 && sector_num < end) {
int ret;
int io_sectors;
BlockDriverState *file;
enum MirrorMethod {
MIRROR_METHOD_COPY,
MIRROR_METHOD_ZERO,
MIRROR_METHOD_DISCARD
} mirror_method = MIRROR_METHOD_COPY;
assert(!(sector_num % sectors_per_chunk));
ret = bdrv_get_block_status_above(source, NULL, sector_num,
nb_chunks * sectors_per_chunk,
&io_sectors, &file);
if (ret < 0) {
io_sectors = nb_chunks * sectors_per_chunk;
}
io_sectors -= io_sectors % sectors_per_chunk;
if (io_sectors < sectors_per_chunk) {
io_sectors = sectors_per_chunk;
} else if (ret >= 0 && !(ret & BDRV_BLOCK_DATA)) {
int64_t target_sector_num;
int target_nb_sectors;
bdrv_round_to_clusters(blk_bs(s->target), sector_num, io_sectors,
&target_sector_num, &target_nb_sectors);
if (target_sector_num == sector_num &&
target_nb_sectors == io_sectors) {
mirror_method = ret & BDRV_BLOCK_ZERO ?
MIRROR_METHOD_ZERO :
MIRROR_METHOD_DISCARD;
}
}
mirror_clip_sectors(s, sector_num, &io_sectors);
switch (mirror_method) {
case MIRROR_METHOD_COPY:
io_sectors = mirror_do_read(s, sector_num, io_sectors);
break;
case MIRROR_METHOD_ZERO:
mirror_do_zero_or_discard(s, sector_num, io_sectors, false);
break;
case MIRROR_METHOD_DISCARD:
mirror_do_zero_or_discard(s, sector_num, io_sectors, true);
break;
default:
abort();
}
assert(io_sectors);
sector_num += io_sectors;
nb_chunks -= DIV_ROUND_UP(io_sectors, sectors_per_chunk);
delay_ns += ratelimit_calculate_delay(&s->limit, io_sectors);
}
return delay_ns;
}
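mirror_iteration() above walks the dirty bitmap: it finds the first dirty chunk, extends the range over consecutive dirty chunks up to the buffer limit, clears the bits, and then issues the copy. The sketch below reproduces only that control flow, using a plain bool array in place of QEMU's HBitmap, so it is a simplified illustration rather than the real iteration code.

```c
#include <stdio.h>
#include <stdbool.h>

#define NB_CHUNKS         16
#define MAX_CHUNKS_PER_OP  4   /* plays the role of buf_size / granularity */

/* Return the index of the next dirty chunk at or after 'from', or -1. */
static int next_dirty(const bool *dirty, int from)
{
    for (int i = from; i < NB_CHUNKS; i++) {
        if (dirty[i]) {
            return i;
        }
    }
    return -1;
}

int main(void)
{
    bool dirty[NB_CHUNKS] = { [3] = true, [4] = true, [5] = true, [9] = true };
    int first = next_dirty(dirty, 0);

    while (first >= 0) {
        int nb = 1;
        /* Extend over consecutive dirty chunks, like the nb_chunks loop. */
        while (nb < MAX_CHUNKS_PER_OP &&
               first + nb < NB_CHUNKS && dirty[first + nb]) {
            nb++;
        }
        printf("copy chunks [%d, %d)\n", first, first + nb);
        for (int i = 0; i < nb; i++) {
            dirty[first + i] = false;   /* clear before issuing the copy */
        }
        first = next_dirty(dirty, first + nb);
    }
    return 0;
}
```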
@@ -438,7 +317,7 @@ static void mirror_free_init(MirrorBlockJob *s)
static void mirror_drain(MirrorBlockJob *s)
{
while (s->in_flight > 0) {
mirror_wait_for_io(s);
qemu_coroutine_yield();
}
}
@@ -451,12 +330,6 @@ static void mirror_exit(BlockJob *job, void *opaque)
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
MirrorExitData *data = opaque;
AioContext *replace_aio_context = NULL;
BlockDriverState *src = blk_bs(s->common.blk);
BlockDriverState *target_bs = blk_bs(s->target);
/* Make sure that the source BDS doesn't go away before we called
* block_job_completed(). */
bdrv_ref(src);
if (s->to_replace) {
replace_aio_context = bdrv_get_aio_context(s->to_replace);
@@ -464,24 +337,21 @@ static void mirror_exit(BlockJob *job, void *opaque)
}
if (s->should_complete && data->ret == 0) {
BlockDriverState *to_replace = src;
BlockDriverState *to_replace = s->common.bs;
if (s->to_replace) {
to_replace = s->to_replace;
}
if (bdrv_get_flags(target_bs) != bdrv_get_flags(to_replace)) {
bdrv_reopen(target_bs, bdrv_get_flags(to_replace), NULL);
if (bdrv_get_flags(s->target) != bdrv_get_flags(to_replace)) {
bdrv_reopen(s->target, bdrv_get_flags(to_replace), NULL);
}
bdrv_swap(s->target, to_replace);
if (s->common.driver->job_type == BLOCK_JOB_TYPE_COMMIT) {
/* drop the bs loop chain formed by the swap: break the loop then
* trigger the unref from the top one */
BlockDriverState *p = s->base->backing_hd;
bdrv_set_backing_hd(s->base, NULL);
bdrv_unref(p);
}
/* The mirror job has no requests in flight any more, but we need to
* drain potential other users of the BDS before changing the graph. */
bdrv_drained_begin(target_bs);
bdrv_replace_in_backing_chain(to_replace, target_bs);
bdrv_drained_end(target_bs);
/* We just changed the BDS the job BB refers to */
blk_remove_bs(job->blk);
blk_insert_bs(job->blk, src);
}
if (s->to_replace) {
bdrv_op_unblock_all(s->to_replace, s->replace_blocker);
@@ -492,31 +362,22 @@ static void mirror_exit(BlockJob *job, void *opaque)
aio_context_release(replace_aio_context);
}
g_free(s->replaces);
bdrv_op_unblock_all(target_bs, s->common.blocker);
blk_unref(s->target);
bdrv_unref(s->target);
block_job_completed(&s->common, data->ret);
g_free(data);
bdrv_drained_end(src);
if (qemu_get_aio_context() == bdrv_get_aio_context(src)) {
aio_enable_external(iohandler_get_aio_context());
}
bdrv_unref(src);
}
static void coroutine_fn mirror_run(void *opaque)
{
MirrorBlockJob *s = opaque;
MirrorExitData *data;
BlockDriverState *bs = blk_bs(s->common.blk);
BlockDriverState *target_bs = blk_bs(s->target);
int64_t sector_num, end, length;
BlockDriverState *bs = s->common.bs;
int64_t sector_num, end, sectors_per_chunk, length;
uint64_t last_pause_ns;
BlockDriverInfo bdi;
char backing_filename[2]; /* we only need 2 characters because we are only
checking for a NULL string */
char backing_filename[1024];
int ret = 0;
int n;
int target_cluster_size = BDRV_SECTOR_SIZE;
if (block_job_is_cancelled(&s->common)) {
goto immediate_exit;
@@ -544,18 +405,18 @@ static void coroutine_fn mirror_run(void *opaque)
* the destination do COW. Instead, we copy sectors around the
* dirty data if needed. We need a bitmap to do that.
*/
bdrv_get_backing_filename(target_bs, backing_filename,
bdrv_get_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
if (!bdrv_get_info(target_bs, &bdi) && bdi.cluster_size) {
target_cluster_size = bdi.cluster_size;
if (backing_filename[0] && !s->target->backing_hd) {
ret = bdrv_get_info(s->target, &bdi);
if (ret < 0) {
goto immediate_exit;
}
if (s->granularity < bdi.cluster_size) {
s->buf_size = MAX(s->buf_size, bdi.cluster_size);
s->cow_bitmap = bitmap_new(length);
}
}
if (backing_filename[0] && !target_bs->backing
&& s->granularity < target_cluster_size) {
s->buf_size = MAX(s->buf_size, target_cluster_size);
s->cow_bitmap = bitmap_new(length);
}
s->target_cluster_sectors = target_cluster_size >> BDRV_SECTOR_BITS;
s->max_iov = MIN(bs->bl.max_iov, target_bs->bl.max_iov);
end = s->bdev_length / BDRV_SECTOR_SIZE;
s->buf = qemu_try_blockalign(bs, s->buf_size);
@@ -564,44 +425,33 @@ static void coroutine_fn mirror_run(void *opaque)
goto immediate_exit;
}
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
mirror_free_init(s);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
if (!s->is_none_mode) {
/* First part, loop on the sectors and initialize the dirty bitmap. */
BlockDriverState *base = s->base;
bool mark_all_dirty = s->base == NULL && !bdrv_has_zero_init(target_bs);
for (sector_num = 0; sector_num < end; ) {
/* Just to make sure we are not exceeding int limit. */
int nb_sectors = MIN(INT_MAX >> BDRV_SECTOR_BITS,
end - sector_num);
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
if (now - last_pause_ns > SLICE_TIME) {
last_pause_ns = now;
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&s->common)) {
goto immediate_exit;
}
ret = bdrv_is_allocated_above(bs, base, sector_num, nb_sectors, &n);
int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
ret = bdrv_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
if (ret < 0) {
goto immediate_exit;
}
assert(n > 0);
if (ret == 1 || mark_all_dirty) {
bdrv_set_dirty_bitmap(s->dirty_bitmap, sector_num, n);
if (ret == 1) {
bdrv_set_dirty(bs, sector_num, n);
sector_num = next;
} else {
sector_num += n;
}
sector_num += n;
}
}
bdrv_dirty_iter_init(s->dirty_bitmap, &s->hbi);
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
for (;;) {
uint64_t delay_ns = 0;
int64_t cnt;
@@ -612,7 +462,7 @@ static void coroutine_fn mirror_run(void *opaque)
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
/* s->common.offset contains the number of bytes already processed so
* far, cnt is the number of dirty sectors remaining and
* s->sectors_in_flight is the number of sectors currently being
@@ -621,7 +471,7 @@ static void coroutine_fn mirror_run(void *opaque)
(cnt + s->sectors_in_flight) * BDRV_SECTOR_SIZE;
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that bdrv_drain_all() returns.
* periodically with no pending I/O so that qemu_aio_flush() returns.
* We do so every SLICE_TIME nanoseconds, or when there is an error,
* or when the source is clean, whichever comes first.
*/
@@ -630,17 +480,20 @@ static void coroutine_fn mirror_run(void *opaque)
if (s->in_flight == MAX_IN_FLIGHT || s->buf_free_count == 0 ||
(cnt == 0 && s->in_flight > 0)) {
trace_mirror_yield(s, s->in_flight, s->buf_free_count, cnt);
mirror_wait_for_io(s);
qemu_coroutine_yield();
continue;
} else if (cnt != 0) {
delay_ns = mirror_iteration(s);
if (delay_ns == 0) {
continue;
}
}
}
should_complete = false;
if (s->in_flight == 0 && cnt == 0) {
trace_mirror_before_flush(s);
ret = blk_flush(s->target);
ret = bdrv_flush(s->target);
if (ret < 0) {
if (mirror_error_action(s, false, -ret) ==
BLOCK_ERROR_ACTION_REPORT) {
@@ -659,7 +512,7 @@ static void coroutine_fn mirror_run(void *opaque)
should_complete = s->should_complete ||
block_job_is_cancelled(&s->common);
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
}
}
@@ -673,8 +526,8 @@ static void coroutine_fn mirror_run(void *opaque)
* mirror_populate runs.
*/
trace_mirror_before_drain(s, cnt);
bdrv_co_drain(bs);
cnt = bdrv_get_dirty_count(s->dirty_bitmap);
bdrv_drain(bs);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
}
ret = 0;
@@ -713,18 +566,10 @@ immediate_exit:
g_free(s->cow_bitmap);
g_free(s->in_flight_bitmap);
bdrv_release_dirty_bitmap(bs, s->dirty_bitmap);
bdrv_iostatus_disable(s->target);
data = g_malloc(sizeof(*data));
data->ret = ret;
/* Before we switch to target in mirror_exit, make sure data doesn't
* change. */
bdrv_drained_begin(bs);
if (qemu_get_aio_context() == bdrv_get_aio_context(bs)) {
/* FIXME: virtio host notifiers run on iohandler_ctx, therefore the
* above bdrv_drained_end isn't enough to quiesce it. This is ugly, we
* need a block layer API change to achieve this. */
aio_disable_external(iohandler_get_aio_context());
}
block_job_defer_to_main_loop(&s->common, mirror_exit, data);
}
@@ -733,26 +578,33 @@ static void mirror_set_speed(BlockJob *job, int64_t speed, Error **errp)
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
if (speed < 0) {
error_setg(errp, QERR_INVALID_PARAMETER, "speed");
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void mirror_iostatus_reset(BlockJob *job)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
bdrv_iostatus_reset(s->target);
}
static void mirror_complete(BlockJob *job, Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
Error *local_err = NULL;
int ret;
ret = bdrv_open_backing_file(blk_bs(s->target), NULL, "backing",
&local_err);
ret = bdrv_open_backing_file(s->target, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return;
}
if (!s->synced) {
error_setg(errp, QERR_BLOCK_JOB_NOT_READY, job->id);
error_set(errp, QERR_BLOCK_JOB_NOT_READY,
bdrv_get_device_name(job->bs));
return;
}
@@ -760,9 +612,9 @@ static void mirror_complete(BlockJob *job, Error **errp)
if (s->replaces) {
AioContext *replace_aio_context;
s->to_replace = bdrv_find_node(s->replaces);
s->to_replace = check_to_replace_node(s->replaces, &local_err);
if (!s->to_replace) {
error_setg(errp, "Node name '%s' not found", s->replaces);
error_propagate(errp, local_err);
return;
}
@@ -778,13 +630,14 @@ static void mirror_complete(BlockJob *job, Error **errp)
}
s->should_complete = true;
block_job_enter(&s->common);
block_job_resume(job);
}
static const BlockJobDriver mirror_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_MIRROR,
.set_speed = mirror_set_speed,
.iostatus_reset = mirror_iostatus_reset,
.complete = mirror_complete,
};
@@ -792,16 +645,17 @@ static const BlockJobDriver commit_active_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.set_speed = mirror_set_speed,
.iostatus_reset = mirror_iostatus_reset,
.complete = mirror_complete,
};
static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
const char *replaces,
int64_t speed, uint32_t granularity,
int64_t speed, int64_t granularity,
int64_t buf_size,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
bool unmap,
BlockCompletionFunc *cb,
void *opaque, Error **errp,
const BlockJobDriver *driver,
@@ -810,47 +664,48 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
MirrorBlockJob *s;
if (granularity == 0) {
granularity = bdrv_get_default_bitmap_granularity(target);
/* Choose the default granularity based on the target file's cluster
* size, clamped between 4k and 64k. */
BlockDriverInfo bdi;
if (bdrv_get_info(target, &bdi) >= 0 && bdi.cluster_size != 0) {
granularity = MAX(4096, bdi.cluster_size);
granularity = MIN(65536, granularity);
} else {
granularity = 65536;
}
}
assert ((granularity & (granularity - 1)) == 0);
if (buf_size < 0) {
error_setg(errp, "Invalid parameter 'buf-size'");
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
if (buf_size == 0) {
buf_size = DEFAULT_MIRROR_BUF_SIZE;
}
s = block_job_create(driver, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
s->target = blk_new();
blk_insert_bs(s->target, target);
s->replaces = g_strdup(replaces);
s->on_source_error = on_source_error;
s->on_target_error = on_target_error;
s->target = target;
s->is_none_mode = is_none_mode;
s->base = base;
s->granularity = granularity;
s->buf_size = ROUND_UP(buf_size, granularity);
s->unmap = unmap;
s->buf_size = MAX(buf_size, granularity);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, errp);
if (!s->dirty_bitmap) {
g_free(s->replaces);
blk_unref(s->target);
block_job_unref(&s->common);
return;
}
bdrv_op_block_all(target, s->common.blocker);
bdrv_set_enable_write_cache(s->target, true);
bdrv_set_on_error(s->target, on_target_error, on_target_error);
bdrv_iostatus_enable(s->target);
s->common.co = qemu_coroutine_create(mirror_run);
trace_mirror_start(bs, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
@@ -858,25 +713,20 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
const char *replaces,
int64_t speed, uint32_t granularity, int64_t buf_size,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
bool unmap,
BlockCompletionFunc *cb,
void *opaque, Error **errp)
{
bool is_none_mode;
BlockDriverState *base;
if (mode == MIRROR_SYNC_MODE_INCREMENTAL) {
error_setg(errp, "Sync mode 'incremental' not supported");
return;
}
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? backing_bs(bs) : NULL;
base = mode == MIRROR_SYNC_MODE_TOP ? bs->backing_hd : NULL;
mirror_start_job(bs, target, replaces,
speed, granularity, buf_size,
on_source_error, on_target_error, unmap, cb, opaque, errp,
on_source_error, on_target_error, cb, opaque, errp,
&mirror_job_driver, is_none_mode, base);
}
@@ -922,8 +772,9 @@ void commit_active_start(BlockDriverState *bs, BlockDriverState *base,
}
}
bdrv_ref(base);
mirror_start_job(bs, base, NULL, speed, 0, 0,
on_error, on_error, false, cb, opaque, &local_err,
on_error, on_error, cb, opaque, &local_err,
&commit_active_job_driver, false, base);
if (local_err) {
error_propagate(errp, local_err);

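Throughout mirror_run() and mirror_start_job() above, the job throttles itself via ratelimit_set_speed() and ratelimit_calculate_delay() over SLICE_TIME windows. The sketch below shows one way such a slice-based limiter can work; it is an illustration of the idea under assumed semantics, not QEMU's RateLimit implementation.

```c
#include <stdint.h>
#include <stdio.h>

#define SLICE_NS 100000000ULL   /* 100 ms, like SLICE_TIME above */

typedef struct {
    uint64_t slice_quota;   /* units allowed per slice             */
    uint64_t dispatched;    /* units consumed in the current slice */
    uint64_t slice_end_ns;  /* absolute end of the current slice   */
} RateLimiter;

/* Account for 'n' units at time 'now_ns'; return how long the caller should
 * sleep (0 when still within the slice budget). */
static uint64_t rate_limit(RateLimiter *r, uint64_t now_ns, uint64_t n)
{
    if (now_ns >= r->slice_end_ns) {        /* new slice: reset the budget */
        r->slice_end_ns = now_ns + SLICE_NS;
        r->dispatched = 0;
    }
    r->dispatched += n;
    if (r->dispatched <= r->slice_quota) {
        return 0;                           /* still within budget */
    }
    return r->slice_end_ns - now_ns;        /* sleep until the slice ends */
}

int main(void)
{
    RateLimiter r = { .slice_quota = 1024 };
    uint64_t now = 0;

    printf("delay: %llu ns\n", (unsigned long long)rate_limit(&r, now, 512));
    printf("delay: %llu ns\n", (unsigned long long)rate_limit(&r, now, 1024));
    return 0;
}
```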

@@ -26,8 +26,8 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "nbd-client.h"
#include "qemu/sockets.h"
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
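These two macros tag the small per-request slot index with the pointer value of the BlockDriverState, so a reply handle coming back from the server can be mapped to its recv_coroutine[] slot. Because XOR with the same key is its own inverse, the round trip is lossless; the short check below demonstrates that property with a stand-in key.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int dummy;                                   /* stands in for the bs pointer */
    uint64_t key = (uint64_t)(uintptr_t)&dummy;

    for (uint64_t index = 0; index < 16; index++) {   /* MAX_NBD_REQUESTS slots */
        uint64_t handle = index ^ key;           /* like INDEX_TO_HANDLE */
        assert((handle ^ key) == index);         /* like HANDLE_TO_INDEX */
    }
    printf("handle <-> index round trip OK\n");
    return 0;
}
```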
@@ -43,44 +43,29 @@ static void nbd_recv_coroutines_enter_all(NbdClientSession *s)
}
}
static void nbd_teardown_connection(BlockDriverState *bs)
static void nbd_teardown_connection(NbdClientSession *client)
{
NbdClientSession *client = nbd_get_client_session(bs);
if (!client->ioc) { /* Already closed */
return;
}
/* finish any pending coroutines */
qio_channel_shutdown(client->ioc,
QIO_CHANNEL_SHUTDOWN_BOTH,
NULL);
shutdown(client->sock, 2);
nbd_recv_coroutines_enter_all(client);
nbd_client_detach_aio_context(bs);
object_unref(OBJECT(client->sioc));
client->sioc = NULL;
object_unref(OBJECT(client->ioc));
client->ioc = NULL;
nbd_client_session_detach_aio_context(client);
closesocket(client->sock);
client->sock = -1;
}
static void nbd_reply_ready(void *opaque)
{
BlockDriverState *bs = opaque;
NbdClientSession *s = nbd_get_client_session(bs);
NbdClientSession *s = opaque;
uint64_t i;
int ret;
if (!s->ioc) { /* Already closed */
return;
}
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->ioc, &s->reply);
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
@@ -104,63 +89,47 @@ static void nbd_reply_ready(void *opaque)
}
fail:
nbd_teardown_connection(bs);
nbd_teardown_connection(s);
}
static void nbd_restart_write(void *opaque)
{
BlockDriverState *bs = opaque;
NbdClientSession *s = opaque;
qemu_coroutine_enter(nbd_get_client_session(bs)->send_coroutine, NULL);
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BlockDriverState *bs,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
static int nbd_co_send_request(NbdClientSession *s,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
NbdClientSession *s = nbd_get_client_session(bs);
AioContext *aio_context;
int rc, ret, i;
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
g_assert(qemu_in_coroutine());
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
if (!s->ioc) {
qemu_co_mutex_unlock(&s->send_mutex);
return -EPIPE;
}
s->send_coroutine = qemu_coroutine_self();
aio_context = bdrv_get_aio_context(bs);
aio_set_fd_handler(aio_context, s->sioc->fd, false,
nbd_reply_ready, nbd_restart_write, bs);
aio_context = bdrv_get_aio_context(s->bs);
aio_set_fd_handler(aio_context, s->sock,
nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
qio_channel_set_cork(s->ioc, true);
rc = nbd_send_request(s->ioc, request);
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = nbd_wr_syncv(s->ioc, qiov->iov, qiov->niov,
offset, request->len, 0);
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
qio_channel_set_cork(s->ioc, false);
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->ioc, request);
rc = nbd_send_request(s->sock, request);
}
aio_set_fd_handler(aio_context, s->sioc->fd, false,
nbd_reply_ready, NULL, bs);
aio_set_fd_handler(aio_context, s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
@@ -176,13 +145,12 @@ static void nbd_co_receive_reply(NbdClientSession *s,
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle ||
!s->ioc) {
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = nbd_wr_syncv(s->ioc, qiov->iov, qiov->niov,
offset, request->len, 1);
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
@@ -196,6 +164,8 @@ static void nbd_co_receive_reply(NbdClientSession *s,
static void nbd_coroutine_start(NbdClientSession *s,
struct nbd_request *request)
{
int i;
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
@@ -204,7 +174,15 @@ static void nbd_coroutine_start(NbdClientSession *s,
}
s->in_flight++;
/* s->recv_coroutine[i] is set as soon as we get the send_lock. */
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_coroutine_end(NbdClientSession *s,
@@ -217,11 +195,10 @@ static void nbd_coroutine_end(NbdClientSession *s,
}
}
static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
static int nbd_co_readv_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = { .type = NBD_CMD_READ };
struct nbd_reply reply;
ssize_t ret;
@@ -230,7 +207,7 @@ static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, NULL, 0);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
@@ -241,17 +218,16 @@ static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
}
static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
static int nbd_co_writev_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset, int flags)
int offset)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = { .type = NBD_CMD_WRITE };
struct nbd_reply reply;
ssize_t ret;
if (flags & BDRV_REQ_FUA) {
assert(client->nbdflags & NBD_FLAG_SEND_FUA);
if (!bdrv_enable_write_cache(client->bs) &&
(client->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
@@ -259,7 +235,7 @@ static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, qiov, offset);
ret = nbd_co_send_request(client, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
@@ -273,13 +249,14 @@ static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
int nbd_client_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
@@ -287,17 +264,17 @@ int nbd_client_co_readv(BlockDriverState *bs, int64_t sector_num,
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov, int flags)
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset,
flags);
ret = nbd_co_writev_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
@@ -305,12 +282,11 @@ int nbd_client_co_writev(BlockDriverState *bs, int64_t sector_num,
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(bs, sector_num, nb_sectors, qiov, offset, flags);
return nbd_co_writev_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_co_flush(BlockDriverState *bs)
int nbd_client_session_co_flush(NbdClientSession *client)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = { .type = NBD_CMD_FLUSH };
struct nbd_reply reply;
ssize_t ret;
@@ -319,11 +295,15 @@ int nbd_client_co_flush(BlockDriverState *bs)
return 0;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, NULL, 0);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
@@ -333,10 +313,9 @@ int nbd_client_co_flush(BlockDriverState *bs)
return -reply.error;
}
int nbd_client_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = { .type = NBD_CMD_TRIM };
struct nbd_reply reply;
ssize_t ret;
@@ -348,7 +327,7 @@ int nbd_client_co_discard(BlockDriverState *bs, int64_t sector_num,
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(bs, &request, NULL, 0);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
@@ -359,80 +338,66 @@ int nbd_client_co_discard(BlockDriverState *bs, int64_t sector_num,
}
void nbd_client_detach_aio_context(BlockDriverState *bs)
void nbd_client_session_detach_aio_context(NbdClientSession *client)
{
aio_set_fd_handler(bdrv_get_aio_context(bs),
nbd_get_client_session(bs)->sioc->fd,
false, NULL, NULL, NULL);
aio_set_fd_handler(bdrv_get_aio_context(client->bs), client->sock,
NULL, NULL, NULL);
}
void nbd_client_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
void nbd_client_session_attach_aio_context(NbdClientSession *client,
AioContext *new_context)
{
aio_set_fd_handler(new_context, nbd_get_client_session(bs)->sioc->fd,
false, nbd_reply_ready, NULL, bs);
aio_set_fd_handler(new_context, client->sock,
nbd_reply_ready, NULL, client);
}
void nbd_client_close(BlockDriverState *bs)
void nbd_client_session_close(NbdClientSession *client)
{
NbdClientSession *client = nbd_get_client_session(bs);
struct nbd_request request = {
.type = NBD_CMD_DISC,
.from = 0,
.len = 0
};
if (client->ioc == NULL) {
if (!client->bs) {
return;
}
if (client->sock == -1) {
return;
}
nbd_send_request(client->ioc, &request);
nbd_send_request(client->sock, &request);
nbd_teardown_connection(bs);
nbd_teardown_connection(client);
client->bs = NULL;
}
int nbd_client_init(BlockDriverState *bs,
QIOChannelSocket *sioc,
const char *export,
QCryptoTLSCreds *tlscreds,
const char *hostname,
Error **errp)
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export)
{
NbdClientSession *client = nbd_get_client_session(bs);
int ret;
/* NBD handshake */
logout("session init %s\n", export);
qio_channel_set_blocking(QIO_CHANNEL(sioc), true, NULL);
ret = nbd_receive_negotiate(QIO_CHANNEL(sioc), export,
&client->nbdflags,
tlscreds, hostname,
&client->ioc,
&client->size, errp);
qemu_set_block(sock);
ret = nbd_receive_negotiate(sock, export,
&client->nbdflags, &client->size,
&client->blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
bs->supported_write_flags = BDRV_REQ_FUA;
}
qemu_co_mutex_init(&client->send_mutex);
qemu_co_mutex_init(&client->free_sema);
client->sioc = sioc;
object_ref(OBJECT(client->sioc));
if (!client->ioc) {
client->ioc = QIO_CHANNEL(sioc);
object_ref(OBJECT(client->ioc));
}
client->bs = bs;
client->sock = sock;
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
qemu_set_nonblock(sock);
nbd_client_session_attach_aio_context(client, bdrv_get_aio_context(bs));
logout("Established connection with NBD server\n");
return 0;

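The readv/writev wrappers above cap each wire request at NBD_MAX_SECTORS and loop over anything larger, so every NBD request stays under the 32 MB limit while remaining 4K-aligned. Below is a self-contained sketch of that splitting loop; issue_piece() is a placeholder for sending one NBD request, not the real send path.

```c
#include <stdint.h>
#include <stdio.h>

#define NBD_MAX_SECTORS 2040   /* same cap as the code above */

static int issue_piece(int64_t sector_num, int nb_sectors)
{
    printf("request: sector %lld, %d sectors\n",
           (long long)sector_num, nb_sectors);
    return 0;   /* a real implementation would send one NBD request here */
}

static int split_request(int64_t sector_num, int nb_sectors)
{
    while (nb_sectors > NBD_MAX_SECTORS) {
        int ret = issue_piece(sector_num, NBD_MAX_SECTORS);
        if (ret < 0) {
            return ret;
        }
        sector_num += NBD_MAX_SECTORS;
        nb_sectors -= NBD_MAX_SECTORS;
    }
    return issue_piece(sector_num, nb_sectors);
}

int main(void)
{
    return split_request(0, 5000);   /* 5000 sectors -> 2040 + 2040 + 920 */
}
```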

@@ -4,7 +4,6 @@
#include "qemu-common.h"
#include "block/nbd.h"
#include "block/block_int.h"
#include "io/channel-socket.h"
/* #define DEBUG_NBD */
@@ -18,10 +17,10 @@
#define MAX_NBD_REQUESTS 16
typedef struct NbdClientSession {
QIOChannelSocket *sioc; /* The master data channel */
QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
@@ -32,28 +31,24 @@ typedef struct NbdClientSession {
struct nbd_reply reply;
bool is_unix;
BlockDriverState *bs;
} NbdClientSession;
NbdClientSession *nbd_get_client_session(BlockDriverState *bs);
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export_name);
void nbd_client_session_close(NbdClientSession *client);
int nbd_client_init(BlockDriverState *bs,
QIOChannelSocket *sock,
const char *export_name,
QCryptoTLSCreds *tlscreds,
const char *hostname,
Error **errp);
void nbd_client_close(BlockDriverState *bs);
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors);
int nbd_client_session_co_flush(NbdClientSession *client);
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors);
int nbd_client_co_flush(BlockDriverState *bs);
int nbd_client_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov, int flags);
int nbd_client_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
void nbd_client_detach_aio_context(BlockDriverState *bs);
void nbd_client_attach_aio_context(BlockDriverState *bs,
AioContext *new_context);
void nbd_client_session_detach_aio_context(NbdClientSession *client);
void nbd_client_session_attach_aio_context(NbdClientSession *client,
AioContext *new_context);
#endif /* NBD_CLIENT_H */


@@ -26,22 +26,24 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/nbd-client.h"
#include "qapi/error.h"
#include "qemu/uri.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/sockets.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
#include "qemu/cutils.h"
#include <sys/types.h>
#include <unistd.h>
#define EN_OPTSTR ":exportname="
typedef struct BDRVNBDState {
NbdClientSession client;
QemuOpts *socket_opts;
} BDRVNBDState;
static int nbd_parse_uri(const char *filename, QDict *options)
@@ -188,10 +190,10 @@ out:
g_free(file);
}
static SocketAddress *nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
static void nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
{
SocketAddress *saddr;
Error *local_err = NULL;
if (qdict_haskey(options, "path") == qdict_haskey(options, "host")) {
if (qdict_haskey(options, "path")) {
@@ -199,182 +201,121 @@ static SocketAddress *nbd_config(BDRVNBDState *s, QDict *options, char **export,
} else {
error_setg(errp, "one of path and host must be specified.");
}
return NULL;
return;
}
saddr = g_new0(SocketAddress, 1);
s->client.is_unix = qdict_haskey(options, "path");
s->socket_opts = qemu_opts_create(&socket_optslist, NULL, 0,
&error_abort);
if (qdict_haskey(options, "path")) {
UnixSocketAddress *q_unix;
saddr->type = SOCKET_ADDRESS_KIND_UNIX;
q_unix = saddr->u.q_unix.data = g_new0(UnixSocketAddress, 1);
q_unix->path = g_strdup(qdict_get_str(options, "path"));
qdict_del(options, "path");
} else {
InetSocketAddress *inet;
saddr->type = SOCKET_ADDRESS_KIND_INET;
inet = saddr->u.inet.data = g_new0(InetSocketAddress, 1);
inet->host = g_strdup(qdict_get_str(options, "host"));
if (!qdict_get_try_str(options, "port")) {
inet->port = g_strdup_printf("%d", NBD_DEFAULT_PORT);
} else {
inet->port = g_strdup(qdict_get_str(options, "port"));
}
qdict_del(options, "host");
qdict_del(options, "port");
qemu_opts_absorb_qdict(s->socket_opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
s->client.is_unix = saddr->type == SOCKET_ADDRESS_KIND_UNIX;
if (!qemu_opt_get(s->socket_opts, "port")) {
qemu_opt_set_number(s->socket_opts, "port", NBD_DEFAULT_PORT);
}
*export = g_strdup(qdict_get_try_str(options, "export"));
if (*export) {
qdict_del(options, "export");
}
return saddr;
}
NbdClientSession *nbd_get_client_session(BlockDriverState *bs)
static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
{
BDRVNBDState *s = bs->opaque;
return &s->client;
int sock;
if (s->client.is_unix) {
sock = unix_connect_opts(s->socket_opts, errp, NULL, NULL);
} else {
sock = inet_connect_opts(s->socket_opts, errp, NULL, NULL);
if (sock >= 0) {
socket_set_nodelay(sock);
}
}
/* Failed to establish connection */
if (sock < 0) {
logout("Failed to establish connection to NBD server\n");
return -errno;
}
return sock;
}
static QIOChannelSocket *nbd_establish_connection(SocketAddress *saddr,
Error **errp)
{
QIOChannelSocket *sioc;
Error *local_err = NULL;
sioc = qio_channel_socket_new();
qio_channel_socket_connect_sync(sioc,
saddr,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
return NULL;
}
qio_channel_set_delay(QIO_CHANNEL(sioc), false);
return sioc;
}
static QCryptoTLSCreds *nbd_get_tls_creds(const char *id, Error **errp)
{
Object *obj;
QCryptoTLSCreds *creds;
obj = object_resolve_path_component(
object_get_objects_root(), id);
if (!obj) {
error_setg(errp, "No TLS credentials with id '%s'",
id);
return NULL;
}
creds = (QCryptoTLSCreds *)
object_dynamic_cast(obj, TYPE_QCRYPTO_TLS_CREDS);
if (!creds) {
error_setg(errp, "Object with id '%s' is not TLS credentials",
id);
return NULL;
}
if (creds->endpoint != QCRYPTO_TLS_CREDS_ENDPOINT_CLIENT) {
error_setg(errp,
"Expecting TLS credentials with a client endpoint");
return NULL;
}
object_ref(obj);
return creds;
}
static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVNBDState *s = bs->opaque;
char *export = NULL;
QIOChannelSocket *sioc = NULL;
SocketAddress *saddr;
const char *tlscredsid;
QCryptoTLSCreds *tlscreds = NULL;
const char *hostname = NULL;
int ret = -EINVAL;
int result, sock;
Error *local_err = NULL;
/* Pop the config into our state object. Exit if invalid. */
saddr = nbd_config(s, options, &export, errp);
if (!saddr) {
goto error;
}
tlscredsid = g_strdup(qdict_get_try_str(options, "tls-creds"));
if (tlscredsid) {
qdict_del(options, "tls-creds");
tlscreds = nbd_get_tls_creds(tlscredsid, errp);
if (!tlscreds) {
goto error;
}
if (saddr->type != SOCKET_ADDRESS_KIND_INET) {
error_setg(errp, "TLS only supported over IP sockets");
goto error;
}
hostname = saddr->u.inet.data->host;
nbd_config(s, options, &export, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
}
/* establish TCP connection, return error if it fails
* TODO: Configurable retry-until-timeout behaviour.
*/
sioc = nbd_establish_connection(saddr, errp);
if (!sioc) {
ret = -ECONNREFUSED;
goto error;
sock = nbd_establish_connection(bs, errp);
if (sock < 0) {
return sock;
}
/* NBD handshake */
ret = nbd_client_init(bs, sioc, export,
tlscreds, hostname, errp);
error:
if (sioc) {
object_unref(OBJECT(sioc));
}
if (tlscreds) {
object_unref(OBJECT(tlscreds));
}
qapi_free_SocketAddress(saddr);
result = nbd_client_session_init(&s->client, bs, sock, export);
g_free(export);
return ret;
return result;
}
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
return nbd_client_co_readv(bs, sector_num, nb_sectors, qiov);
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_readv(&s->client, sector_num,
nb_sectors, qiov);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_writev(&s->client, sector_num,
nb_sectors, qiov);
}
static int nbd_co_flush(BlockDriverState *bs)
{
return nbd_client_co_flush(bs);
}
BDRVNBDState *s = bs->opaque;
static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
{
bs->bl.max_discard = UINT32_MAX >> BDRV_SECTOR_BITS;
bs->bl.max_transfer_length = UINT32_MAX >> BDRV_SECTOR_BITS;
return nbd_client_session_co_flush(&s->client);
}
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
return nbd_client_co_discard(bs, sector_num, nb_sectors);
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_discard(&s->client, sector_num,
nb_sectors);
}
static void nbd_close(BlockDriverState *bs)
{
nbd_client_close(bs);
BDRVNBDState *s = bs->opaque;
qemu_opts_del(s->socket_opts);
nbd_client_session_close(&s->client);
}
static int64_t nbd_getlength(BlockDriverState *bs)
@@ -386,23 +327,26 @@ static int64_t nbd_getlength(BlockDriverState *bs)
static void nbd_detach_aio_context(BlockDriverState *bs)
{
nbd_client_detach_aio_context(bs);
BDRVNBDState *s = bs->opaque;
nbd_client_session_detach_aio_context(&s->client);
}
static void nbd_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
nbd_client_attach_aio_context(bs, new_context);
BDRVNBDState *s = bs->opaque;
nbd_client_session_attach_aio_context(&s->client, new_context);
}
static void nbd_refresh_filename(BlockDriverState *bs, QDict *options)
static void nbd_refresh_filename(BlockDriverState *bs)
{
QDict *opts = qdict_new();
const char *path = qdict_get_try_str(options, "path");
const char *host = qdict_get_try_str(options, "host");
const char *port = qdict_get_try_str(options, "port");
const char *export = qdict_get_try_str(options, "export");
const char *tlscreds = qdict_get_try_str(options, "tls-creds");
const char *path = qdict_get_try_str(bs->options, "path");
const char *host = qdict_get_try_str(bs->options, "host");
const char *port = qdict_get_try_str(bs->options, "port");
const char *export = qdict_get_try_str(bs->options, "export");
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("nbd")));
@@ -437,9 +381,6 @@ static void nbd_refresh_filename(BlockDriverState *bs, QDict *options)
if (export) {
qdict_put_obj(opts, "export", QOBJECT(qstring_from_str(export)));
}
if (tlscreds) {
qdict_put_obj(opts, "tls-creds", QOBJECT(qstring_from_str(tlscreds)));
}
bs->full_open_options = opts;
}
@@ -451,11 +392,10 @@ static BlockDriver bdrv_nbd = {
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev_flags = nbd_client_co_writev,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_refresh_limits = nbd_refresh_limits,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
@@ -469,11 +409,10 @@ static BlockDriver bdrv_nbd_tcp = {
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev_flags = nbd_client_co_writev,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_refresh_limits = nbd_refresh_limits,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
@@ -487,11 +426,10 @@ static BlockDriver bdrv_nbd_unix = {
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev_flags = nbd_client_co_writev,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_refresh_limits = nbd_refresh_limits,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
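The nbd_refresh_limits() hunk near the top of this file caps discard and transfer lengths at UINT32_MAX bytes expressed in sectors. A minimal standalone sketch of that arithmetic (not QEMU code; it assumes the usual 512-byte sector, i.e. BDRV_SECTOR_BITS == 9):

/* Standalone sketch (not QEMU code): the block limits above are the
 * 32-bit NBD request-length ceiling expressed in 512-byte sectors. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_BITS 9   /* 512-byte sectors, as in QEMU's BDRV_SECTOR_BITS */

int main(void)
{
    uint32_t max_bytes   = UINT32_MAX;               /* NBD wire limit */
    uint32_t max_sectors = max_bytes >> SECTOR_BITS; /* what the driver reports */

    printf("max request: %" PRIu32 " sectors (%" PRIu64 " bytes)\n",
           max_sectors, (uint64_t)max_sectors << SECTOR_BITS);
    return 0;
}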

View File

@@ -22,31 +22,25 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "config-host.h"
#include <poll.h>
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "block/block_int.h"
#include "trace.h"
#include "qemu/iov.h"
#include "qemu/uri.h"
#include "qemu/cutils.h"
#include "sysemu/sysemu.h"
#include <nfsc/libnfs.h>
#define QEMU_NFS_MAX_READAHEAD_SIZE 1048576
#define QEMU_NFS_MAX_DEBUG_LEVEL 2
typedef struct NFSClient {
struct nfs_context *context;
struct nfsfh *fh;
int events;
bool has_zero_init;
AioContext *aio_context;
blkcnt_t st_blocks;
} NFSClient;
typedef struct NFSRPC {
@@ -66,10 +60,11 @@ static void nfs_set_events(NFSClient *client)
{
int ev = nfs_which_events(client->context);
if (ev != client->events) {
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false,
aio_set_fd_handler(client->aio_context,
nfs_get_fd(client->context),
(ev & POLLIN) ? nfs_process_read : NULL,
(ev & POLLOUT) ? nfs_process_write : NULL, client);
(ev & POLLOUT) ? nfs_process_write : NULL,
client);
}
client->events = ev;
@@ -244,8 +239,9 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false, NULL, NULL, NULL);
aio_set_fd_handler(client->aio_context,
nfs_get_fd(client->context),
NULL, NULL, NULL);
client->events = 0;
}
@@ -264,8 +260,9 @@ static void nfs_client_close(NFSClient *client)
if (client->fh) {
nfs_close(client->context, client->fh);
}
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false, NULL, NULL, NULL);
aio_set_fd_handler(client->aio_context,
nfs_get_fd(client->context),
NULL, NULL, NULL);
nfs_destroy_context(client->context);
}
memset(client, 0, sizeof(NFSClient));
@@ -330,23 +327,7 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
nfs_set_tcp_syncnt(client->context, val);
#ifdef LIBNFS_FEATURE_READAHEAD
} else if (!strcmp(qp->p[i].name, "readahead")) {
if (val > QEMU_NFS_MAX_READAHEAD_SIZE) {
error_report("NFS Warning: Truncating NFS readahead"
" size to %d", QEMU_NFS_MAX_READAHEAD_SIZE);
val = QEMU_NFS_MAX_READAHEAD_SIZE;
}
nfs_set_readahead(client->context, val);
#endif
#ifdef LIBNFS_FEATURE_DEBUG
} else if (!strcmp(qp->p[i].name, "debug")) {
/* limit the maximum debug level to avoid potential flooding
* of our log files. */
if (val > QEMU_NFS_MAX_DEBUG_LEVEL) {
error_report("NFS Warning: Limiting NFS debug level"
" to %d", QEMU_NFS_MAX_DEBUG_LEVEL);
val = QEMU_NFS_MAX_DEBUG_LEVEL;
}
nfs_set_debug(client->context, val);
#endif
} else {
error_setg(errp, "Unknown NFS parameter name: %s",
@@ -386,7 +367,6 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
}
ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE);
client->st_blocks = st.st_blocks;
client->has_zero_init = S_ISREG(st.st_mode);
goto out;
fail:
@@ -477,11 +457,6 @@ static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
NFSRPC task = {0};
struct stat st;
if (bdrv_is_read_only(bs) &&
!(bs->open_flags & BDRV_O_NOCACHE)) {
return client->st_blocks * 512;
}
task.st = &st;
if (nfs_fstat_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
@@ -493,7 +468,7 @@ static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
aio_poll(client->aio_context, true);
}
return (task.ret < 0 ? task.ret : st.st_blocks * 512);
return (task.ret < 0 ? task.ret : st.st_blocks * st.st_blksize);
}
static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
@@ -502,34 +477,6 @@ static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
return nfs_ftruncate(client->context, client->fh, offset);
}
/* Note that this will not re-establish a connection with the NFS server
* - it is effectively a NOP. */
static int nfs_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
NFSClient *client = state->bs->opaque;
struct stat st;
int ret = 0;
if (state->flags & BDRV_O_RDWR && bdrv_is_read_only(state->bs)) {
error_setg(errp, "Cannot open a read-only mount as read-write");
return -EACCES;
}
/* Update cache for read-only reopens */
if (!(state->flags & BDRV_O_RDWR)) {
ret = nfs_fstat(client->context, client->fh, &st);
if (ret < 0) {
error_setg(errp, "Failed to fstat file: %s",
nfs_get_error(client->context));
return ret;
}
client->st_blocks = st.st_blocks;
}
return 0;
}
static BlockDriver bdrv_nfs = {
.format_name = "nfs",
.protocol_name = "nfs",
@@ -545,7 +492,6 @@ static BlockDriver bdrv_nfs = {
.bdrv_file_open = nfs_file_open,
.bdrv_close = nfs_file_close,
.bdrv_create = nfs_file_create,
.bdrv_reopen_prepare = nfs_reopen_prepare,
.bdrv_co_readv = nfs_co_readv,
.bdrv_co_writev = nfs_co_writev,
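The nfs_get_allocated_file_size() hunk above switches the allocated-size calculation between st_blocks * 512 and st_blocks * st_blksize. On Linux and most Unix systems st_blocks counts 512-byte units, while st_blksize is only a preferred I/O size hint, so multiplying by 512 is the safer reading. A standalone sketch of that calculation (plain POSIX, not QEMU code):

/* Standalone sketch: apparent vs. allocated size of a file, assuming
 * st_blocks is in 512-byte units (true on Linux and most Unix systems). */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;

    if (argc < 2 || stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("apparent size:  %lld bytes\n", (long long)st.st_size);
    printf("allocated size: %lld bytes\n", (long long)st.st_blocks * 512);
    return 0;
}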

View File

@@ -10,17 +10,10 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "block/block_int.h"
#define NULL_OPT_LATENCY "latency-ns"
#define NULL_OPT_ZEROES "read-zeroes"
typedef struct {
int64_t length;
int64_t latency_ns;
bool read_zeroes;
} BDRVNullState;
static QemuOptsList runtime_opts = {
@@ -37,17 +30,6 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_SIZE,
.help = "size of the null block",
},
{
.name = NULL_OPT_LATENCY,
.type = QEMU_OPT_NUMBER,
.help = "nanoseconds (approximated) to wait "
"before completing request",
},
{
.name = NULL_OPT_ZEROES,
.type = QEMU_OPT_BOOL,
.help = "return zeroes when read",
},
{ /* end of list */ }
},
};
@@ -57,21 +39,13 @@ static int null_file_open(BlockDriverState *bs, QDict *options, int flags,
{
QemuOpts *opts;
BDRVNullState *s = bs->opaque;
int ret = 0;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &error_abort);
s->length =
qemu_opt_get_size(opts, BLOCK_OPT_SIZE, 1 << 30);
s->latency_ns =
qemu_opt_get_number(opts, NULL_OPT_LATENCY, 0);
if (s->latency_ns < 0) {
error_setg(errp, "latency-ns is invalid");
ret = -EINVAL;
}
s->read_zeroes = qemu_opt_get_bool(opts, NULL_OPT_ZEROES, false);
qemu_opts_del(opts);
return ret;
return 0;
}
static void null_close(BlockDriverState *bs)
@@ -84,46 +58,28 @@ static int64_t null_getlength(BlockDriverState *bs)
return s->length;
}
static coroutine_fn int null_co_common(BlockDriverState *bs)
{
BDRVNullState *s = bs->opaque;
if (s->latency_ns) {
co_aio_sleep_ns(bdrv_get_aio_context(bs), QEMU_CLOCK_REALTIME,
s->latency_ns);
}
return 0;
}
static coroutine_fn int null_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov)
{
BDRVNullState *s = bs->opaque;
if (s->read_zeroes) {
qemu_iovec_memset(qiov, 0, 0, nb_sectors * BDRV_SECTOR_SIZE);
}
return null_co_common(bs);
return 0;
}
static coroutine_fn int null_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov)
{
return null_co_common(bs);
return 0;
}
static coroutine_fn int null_co_flush(BlockDriverState *bs)
{
return null_co_common(bs);
return 0;
}
typedef struct {
BlockAIOCB common;
QEMUBH *bh;
QEMUTimer timer;
} NullAIOCB;
static const AIOCBInfo null_aiocb_info = {
@@ -138,33 +94,15 @@ static void null_bh_cb(void *opaque)
qemu_aio_unref(acb);
}
static void null_timer_cb(void *opaque)
{
NullAIOCB *acb = opaque;
acb->common.cb(acb->common.opaque, 0);
timer_deinit(&acb->timer);
qemu_aio_unref(acb);
}
static inline BlockAIOCB *null_aio_common(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
NullAIOCB *acb;
BDRVNullState *s = bs->opaque;
acb = qemu_aio_get(&null_aiocb_info, bs, cb, opaque);
/* Only emulate latency after vcpu is running. */
if (s->latency_ns) {
aio_timer_init(bdrv_get_aio_context(bs), &acb->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
null_timer_cb, acb);
timer_mod_ns(&acb->timer,
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + s->latency_ns);
} else {
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), null_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), null_bh_cb, acb);
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -174,12 +112,6 @@ static BlockAIOCB *null_aio_readv(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
BDRVNullState *s = bs->opaque;
if (s->read_zeroes) {
qemu_iovec_memset(qiov, 0, 0, nb_sectors * BDRV_SECTOR_SIZE);
}
return null_aio_common(bs, cb, opaque);
}
@@ -199,30 +131,6 @@ static BlockAIOCB *null_aio_flush(BlockDriverState *bs,
return null_aio_common(bs, cb, opaque);
}
static int null_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
return 0;
}
static int64_t coroutine_fn null_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum,
BlockDriverState **file)
{
BDRVNullState *s = bs->opaque;
off_t start = sector_num * BDRV_SECTOR_SIZE;
*pnum = nb_sectors;
*file = bs;
if (s->read_zeroes) {
return BDRV_BLOCK_OFFSET_VALID | start | BDRV_BLOCK_ZERO;
} else {
return BDRV_BLOCK_OFFSET_VALID | start;
}
}
static BlockDriver bdrv_null_co = {
.format_name = "null-co",
.protocol_name = "null-co",
@@ -235,9 +143,6 @@ static BlockDriver bdrv_null_co = {
.bdrv_co_readv = null_co_readv,
.bdrv_co_writev = null_co_writev,
.bdrv_co_flush_to_disk = null_co_flush,
.bdrv_reopen_prepare = null_reopen_prepare,
.bdrv_co_get_block_status = null_co_get_block_status,
};
static BlockDriver bdrv_null_aio = {
@@ -252,9 +157,6 @@ static BlockDriver bdrv_null_aio = {
.bdrv_aio_readv = null_aio_readv,
.bdrv_aio_writev = null_aio_writev,
.bdrv_aio_flush = null_aio_flush,
.bdrv_reopen_prepare = null_reopen_prepare,
.bdrv_co_get_block_status = null_co_get_block_status,
};
static void bdrv_null_init(void)
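One side of the null driver diff above adds a latency-ns option that delays request completion (via co_aio_sleep_ns in the coroutine path and a QEMU timer in the AIO path), plus a read-zeroes option that fills the buffer. A standalone sketch of the idea, using plain nanosleep() instead of QEMU's event loop (illustrative only, not the driver's actual mechanism):

/* Standalone sketch of the latency-ns / read-zeroes idea: complete a
 * request only after an artificial delay so callers can see how the
 * rest of the stack behaves under slow storage. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static int64_t latency_ns = 2 * 1000 * 1000;   /* configurable knob */

static int null_read(void *buf, size_t len, int read_zeroes)
{
    if (latency_ns > 0) {
        struct timespec ts = { (time_t)(latency_ns / 1000000000LL),
                               (long)(latency_ns % 1000000000LL) };
        nanosleep(&ts, NULL);                  /* emulate device latency */
    }
    if (read_zeroes) {
        memset(buf, 0, len);                   /* read-zeroes option */
    }
    return 0;                                  /* always succeeds */
}

int main(void)
{
    char buf[512];
    null_read(buf, sizeof(buf), 1);
    printf("read completed after ~%lld ns\n", (long long)latency_ns);
    return 0;
}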

View File

@@ -2,12 +2,8 @@
* Block driver for Parallels disk image format
*
* Copyright (c) 2007 Alex Beregszaszi
* Copyright (c) 2015 Denis V. Lunev <den@openvz.org>
*
 * This code was originally based on comparing different disk images created
 * by Parallels. Currently it is based on the opened OpenVZ sources
* available at
* http://git.openvz.org/?p=ploop;a=summary
* This code is based on comparing different disk images created by Parallels.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -27,554 +23,68 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
#include "qemu/bitmap.h"
#include "qapi/util.h"
/**************************************************************/
#define HEADER_MAGIC "WithoutFreeSpace"
#define HEADER_MAGIC2 "WithouFreSpacExt"
#define HEADER_VERSION 2
#define HEADER_INUSE_MAGIC (0x746F6E59)
#define DEFAULT_CLUSTER_SIZE 1048576 /* 1 MiB */
#define HEADER_SIZE 64
// always little-endian
typedef struct ParallelsHeader {
struct parallels_header {
char magic[16]; // "WithoutFreeSpace"
uint32_t version;
uint32_t heads;
uint32_t cylinders;
uint32_t tracks;
uint32_t bat_entries;
uint32_t catalog_entries;
uint64_t nb_sectors;
uint32_t inuse;
uint32_t data_off;
char padding[12];
} QEMU_PACKED ParallelsHeader;
typedef enum ParallelsPreallocMode {
PRL_PREALLOC_MODE_FALLOCATE = 0,
PRL_PREALLOC_MODE_TRUNCATE = 1,
PRL_PREALLOC_MODE__MAX = 2,
} ParallelsPreallocMode;
static const char *prealloc_mode_lookup[] = {
"falloc",
"truncate",
NULL,
};
} QEMU_PACKED;
typedef struct BDRVParallelsState {
/** Locking is conservative, the lock protects
* - image file extending (truncate, fallocate)
* - any access to block allocation table
*/
CoMutex lock;
ParallelsHeader *header;
uint32_t header_size;
bool header_unclean;
unsigned long *bat_dirty_bmap;
unsigned int bat_dirty_block;
uint32_t *bat_bitmap;
unsigned int bat_size;
int64_t data_end;
uint64_t prealloc_size;
ParallelsPreallocMode prealloc_mode;
uint32_t *catalog_bitmap;
unsigned int catalog_size;
unsigned int tracks;
unsigned int off_multiplier;
} BDRVParallelsState;
#define PARALLELS_OPT_PREALLOC_MODE "prealloc-mode"
#define PARALLELS_OPT_PREALLOC_SIZE "prealloc-size"
static QemuOptsList parallels_runtime_opts = {
.name = "parallels",
.head = QTAILQ_HEAD_INITIALIZER(parallels_runtime_opts.head),
.desc = {
{
.name = PARALLELS_OPT_PREALLOC_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Preallocation size on image expansion",
.def_value_str = "128MiB",
},
{
.name = PARALLELS_OPT_PREALLOC_MODE,
.type = QEMU_OPT_STRING,
.help = "Preallocation mode on image expansion "
"(allowed values: falloc, truncate)",
.def_value_str = "falloc",
},
{ /* end of list */ },
},
};
static int64_t bat2sect(BDRVParallelsState *s, uint32_t idx)
static int parallels_probe(const uint8_t *buf, int buf_size, const char *filename)
{
return (uint64_t)le32_to_cpu(s->bat_bitmap[idx]) * s->off_multiplier;
}
const struct parallels_header *ph = (const void *)buf;
static uint32_t bat_entry_off(uint32_t idx)
{
return sizeof(ParallelsHeader) + sizeof(uint32_t) * idx;
}
static int64_t seek_to_sector(BDRVParallelsState *s, int64_t sector_num)
{
uint32_t index, offset;
index = sector_num / s->tracks;
offset = sector_num % s->tracks;
/* not allocated */
if ((index >= s->bat_size) || (s->bat_bitmap[index] == 0)) {
return -1;
}
return bat2sect(s, index) + offset;
}
static int cluster_remainder(BDRVParallelsState *s, int64_t sector_num,
int nb_sectors)
{
int ret = s->tracks - sector_num % s->tracks;
return MIN(nb_sectors, ret);
}
static int64_t block_status(BDRVParallelsState *s, int64_t sector_num,
int nb_sectors, int *pnum)
{
int64_t start_off = -2, prev_end_off = -2;
*pnum = 0;
while (nb_sectors > 0 || start_off == -2) {
int64_t offset = seek_to_sector(s, sector_num);
int to_end;
if (start_off == -2) {
start_off = offset;
prev_end_off = offset;
} else if (offset != prev_end_off) {
break;
}
to_end = cluster_remainder(s, sector_num, nb_sectors);
nb_sectors -= to_end;
sector_num += to_end;
*pnum += to_end;
if (offset > 0) {
prev_end_off += to_end;
}
}
return start_off;
}
static int64_t allocate_clusters(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVParallelsState *s = bs->opaque;
uint32_t idx, to_allocate, i;
int64_t pos, space;
pos = block_status(s, sector_num, nb_sectors, pnum);
if (pos > 0) {
return pos;
}
idx = sector_num / s->tracks;
if (idx >= s->bat_size) {
return -EINVAL;
}
to_allocate = (sector_num + *pnum + s->tracks - 1) / s->tracks - idx;
space = to_allocate * s->tracks;
if (s->data_end + space > bdrv_getlength(bs->file->bs) >> BDRV_SECTOR_BITS) {
int ret;
space += s->prealloc_size;
if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
ret = bdrv_write_zeroes(bs->file->bs, s->data_end, space, 0);
} else {
ret = bdrv_truncate(bs->file->bs,
(s->data_end + space) << BDRV_SECTOR_BITS);
}
if (ret < 0) {
return ret;
}
}
for (i = 0; i < to_allocate; i++) {
s->bat_bitmap[idx + i] = cpu_to_le32(s->data_end / s->off_multiplier);
s->data_end += s->tracks;
bitmap_set(s->bat_dirty_bmap,
bat_entry_off(idx + i) / s->bat_dirty_block, 1);
}
return bat2sect(s, idx) + sector_num % s->tracks;
}
static coroutine_fn int parallels_co_flush_to_os(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
unsigned long size = DIV_ROUND_UP(s->header_size, s->bat_dirty_block);
unsigned long bit;
qemu_co_mutex_lock(&s->lock);
bit = find_first_bit(s->bat_dirty_bmap, size);
while (bit < size) {
uint32_t off = bit * s->bat_dirty_block;
uint32_t to_write = s->bat_dirty_block;
int ret;
if (off + to_write > s->header_size) {
to_write = s->header_size - off;
}
ret = bdrv_pwrite(bs->file->bs, off, (uint8_t *)s->header + off,
to_write);
if (ret < 0) {
qemu_co_mutex_unlock(&s->lock);
return ret;
}
bit = find_next_bit(s->bat_dirty_bmap, size, bit + 1);
}
bitmap_zero(s->bat_dirty_bmap, size);
qemu_co_mutex_unlock(&s->lock);
return 0;
}
static int64_t coroutine_fn parallels_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum, BlockDriverState **file)
{
BDRVParallelsState *s = bs->opaque;
int64_t offset;
qemu_co_mutex_lock(&s->lock);
offset = block_status(s, sector_num, nb_sectors, pnum);
qemu_co_mutex_unlock(&s->lock);
if (offset < 0) {
if (buf_size < HEADER_SIZE)
return 0;
}
*file = bs->file->bs;
return (offset << BDRV_SECTOR_BITS) |
BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
}
static coroutine_fn int parallels_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
BDRVParallelsState *s = bs->opaque;
uint64_t bytes_done = 0;
QEMUIOVector hd_qiov;
int ret = 0;
qemu_iovec_init(&hd_qiov, qiov->niov);
while (nb_sectors > 0) {
int64_t position;
int n, nbytes;
qemu_co_mutex_lock(&s->lock);
position = allocate_clusters(bs, sector_num, nb_sectors, &n);
qemu_co_mutex_unlock(&s->lock);
if (position < 0) {
ret = (int)position;
break;
}
nbytes = n << BDRV_SECTOR_BITS;
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, nbytes);
ret = bdrv_co_writev(bs->file->bs, position, n, &hd_qiov);
if (ret < 0) {
break;
}
nb_sectors -= n;
sector_num += n;
bytes_done += nbytes;
}
qemu_iovec_destroy(&hd_qiov);
return ret;
}
static coroutine_fn int parallels_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
BDRVParallelsState *s = bs->opaque;
uint64_t bytes_done = 0;
QEMUIOVector hd_qiov;
int ret = 0;
qemu_iovec_init(&hd_qiov, qiov->niov);
while (nb_sectors > 0) {
int64_t position;
int n, nbytes;
qemu_co_mutex_lock(&s->lock);
position = block_status(s, sector_num, nb_sectors, &n);
qemu_co_mutex_unlock(&s->lock);
nbytes = n << BDRV_SECTOR_BITS;
if (position < 0) {
qemu_iovec_memset(qiov, bytes_done, 0, nbytes);
} else {
qemu_iovec_reset(&hd_qiov);
qemu_iovec_concat(&hd_qiov, qiov, bytes_done, nbytes);
ret = bdrv_co_readv(bs->file->bs, position, n, &hd_qiov);
if (ret < 0) {
break;
}
}
nb_sectors -= n;
sector_num += n;
bytes_done += nbytes;
}
qemu_iovec_destroy(&hd_qiov);
return ret;
}
static int parallels_check(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
{
BDRVParallelsState *s = bs->opaque;
int64_t size, prev_off, high_off;
int ret;
uint32_t i;
bool flush_bat = false;
int cluster_size = s->tracks << BDRV_SECTOR_BITS;
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
res->check_errors++;
return size;
}
if (s->header_unclean) {
fprintf(stderr, "%s image was not closed correctly\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR");
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
/* parallels_close will do the job right */
res->corruptions_fixed++;
s->header_unclean = false;
}
}
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
high_off = 0;
prev_off = 0;
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i) << BDRV_SECTOR_BITS;
if (off == 0) {
prev_off = 0;
continue;
}
/* cluster outside the image */
if (off > size) {
fprintf(stderr, "%s cluster %u is outside image\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
prev_off = 0;
s->bat_bitmap[i] = 0;
res->corruptions_fixed++;
flush_bat = true;
continue;
}
}
res->bfi.allocated_clusters++;
if (off > high_off) {
high_off = off;
}
if (prev_off != 0 && (prev_off + cluster_size) != off) {
res->bfi.fragmented_clusters++;
}
prev_off = off;
}
if (flush_bat) {
ret = bdrv_pwrite_sync(bs->file->bs, 0, s->header, s->header_size);
if (ret < 0) {
res->check_errors++;
return ret;
}
}
res->image_end_offset = high_off + cluster_size;
if (size > res->image_end_offset) {
int64_t count;
count = DIV_ROUND_UP(size - res->image_end_offset, cluster_size);
fprintf(stderr, "%s space leaked at the end of the image %" PRId64 "\n",
fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
size - res->image_end_offset);
res->leaks += count;
if (fix & BDRV_FIX_LEAKS) {
ret = bdrv_truncate(bs->file->bs, res->image_end_offset);
if (ret < 0) {
res->check_errors++;
return ret;
}
res->leaks_fixed += count;
}
}
return 0;
}
static int parallels_create(const char *filename, QemuOpts *opts, Error **errp)
{
int64_t total_size, cl_size;
uint8_t tmp[BDRV_SECTOR_SIZE];
Error *local_err = NULL;
BlockBackend *file;
uint32_t bat_entries, bat_sectors;
ParallelsHeader header;
int ret;
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
cl_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_CLUSTER_SIZE,
DEFAULT_CLUSTER_SIZE), BDRV_SECTOR_SIZE);
ret = bdrv_create_file(filename, opts, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
file = blk_new_open(filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, &local_err);
if (file == NULL) {
error_propagate(errp, local_err);
return -EIO;
}
blk_set_allow_write_beyond_eof(file, true);
ret = blk_truncate(file, 0);
if (ret < 0) {
goto exit;
}
bat_entries = DIV_ROUND_UP(total_size, cl_size);
bat_sectors = DIV_ROUND_UP(bat_entry_off(bat_entries), cl_size);
bat_sectors = (bat_sectors * cl_size) >> BDRV_SECTOR_BITS;
memset(&header, 0, sizeof(header));
memcpy(header.magic, HEADER_MAGIC2, sizeof(header.magic));
header.version = cpu_to_le32(HEADER_VERSION);
/* don't care much about geometry, it is not used on image level */
header.heads = cpu_to_le32(16);
header.cylinders = cpu_to_le32(total_size / BDRV_SECTOR_SIZE / 16 / 32);
header.tracks = cpu_to_le32(cl_size >> BDRV_SECTOR_BITS);
header.bat_entries = cpu_to_le32(bat_entries);
header.nb_sectors = cpu_to_le64(DIV_ROUND_UP(total_size, BDRV_SECTOR_SIZE));
header.data_off = cpu_to_le32(bat_sectors);
/* write all the data */
memset(tmp, 0, sizeof(tmp));
memcpy(tmp, &header, sizeof(header));
ret = blk_pwrite(file, 0, tmp, BDRV_SECTOR_SIZE, 0);
if (ret < 0) {
goto exit;
}
ret = blk_pwrite_zeroes(file, BDRV_SECTOR_SIZE,
(bat_sectors - 1) << BDRV_SECTOR_BITS, 0);
if (ret < 0) {
goto exit;
}
ret = 0;
done:
blk_unref(file);
return ret;
exit:
error_setg_errno(errp, -ret, "Failed to create Parallels image");
goto done;
}
static int parallels_probe(const uint8_t *buf, int buf_size,
const char *filename)
{
const ParallelsHeader *ph = (const void *)buf;
if (buf_size < sizeof(ParallelsHeader)) {
return 0;
}
if ((!memcmp(ph->magic, HEADER_MAGIC, 16) ||
!memcmp(ph->magic, HEADER_MAGIC2, 16)) &&
(le32_to_cpu(ph->version) == HEADER_VERSION)) {
!memcmp(ph->magic, HEADER_MAGIC2, 16)) &&
(le32_to_cpu(ph->version) == HEADER_VERSION))
return 100;
}
return 0;
}
static int parallels_update_header(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs),
sizeof(ParallelsHeader));
if (size > s->header_size) {
size = s->header_size;
}
return bdrv_pwrite_sync(bs->file->bs, 0, s->header, size);
}
static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVParallelsState *s = bs->opaque;
ParallelsHeader ph;
int ret, size, i;
QemuOpts *opts = NULL;
Error *local_err = NULL;
char *buf;
int i;
struct parallels_header ph;
int ret;
ret = bdrv_pread(bs->file->bs, 0, &ph, sizeof(ph));
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &ph, sizeof(ph));
if (ret < 0) {
goto fail;
}
@@ -605,90 +115,25 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
s->bat_size = le32_to_cpu(ph.bat_entries);
if (s->bat_size > INT_MAX / sizeof(uint32_t)) {
s->catalog_size = le32_to_cpu(ph.catalog_entries);
if (s->catalog_size > INT_MAX / 4) {
error_setg(errp, "Catalog too large");
ret = -EFBIG;
goto fail;
}
size = bat_entry_off(s->bat_size);
s->header_size = ROUND_UP(size, bdrv_opt_mem_align(bs->file->bs));
s->header = qemu_try_blockalign(bs->file->bs, s->header_size);
if (s->header == NULL) {
s->catalog_bitmap = g_try_new(uint32_t, s->catalog_size);
if (s->catalog_size && s->catalog_bitmap == NULL) {
ret = -ENOMEM;
goto fail;
}
s->data_end = le32_to_cpu(ph.data_off);
if (s->data_end == 0) {
s->data_end = ROUND_UP(bat_entry_off(s->bat_size), BDRV_SECTOR_SIZE);
}
if (s->data_end < s->header_size) {
/* there is not enough unused space to fit a block-aligned header between
   the BAT and the actual data. We can't avoid read-modify-write... */
s->header_size = size;
}
ret = bdrv_pread(bs->file->bs, 0, s->header, s->header_size);
ret = bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4);
if (ret < 0) {
goto fail;
}
s->bat_bitmap = (uint32_t *)(s->header + 1);
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i);
if (off >= s->data_end) {
s->data_end = off + s->tracks;
}
}
if (le32_to_cpu(ph.inuse) == HEADER_INUSE_MAGIC) {
/* Image was not closed correctly. The check is mandatory */
s->header_unclean = true;
if ((flags & BDRV_O_RDWR) && !(flags & BDRV_O_CHECK)) {
error_setg(errp, "parallels: Image was not closed correctly; "
"cannot be opened read/write");
ret = -EACCES;
goto fail;
}
}
opts = qemu_opts_create(&parallels_runtime_opts, NULL, 0, &local_err);
if (local_err != NULL) {
goto fail_options;
}
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err != NULL) {
goto fail_options;
}
s->prealloc_size =
qemu_opt_get_size_del(opts, PARALLELS_OPT_PREALLOC_SIZE, 0);
s->prealloc_size = MAX(s->tracks, s->prealloc_size >> BDRV_SECTOR_BITS);
buf = qemu_opt_get_del(opts, PARALLELS_OPT_PREALLOC_MODE);
s->prealloc_mode = qapi_enum_parse(prealloc_mode_lookup, buf,
PRL_PREALLOC_MODE__MAX, PRL_PREALLOC_MODE_FALLOCATE, &local_err);
g_free(buf);
if (local_err != NULL) {
goto fail_options;
}
if (!bdrv_has_zero_init(bs->file->bs) ||
bdrv_truncate(bs->file->bs, bdrv_getlength(bs->file->bs)) != 0) {
s->prealloc_mode = PRL_PREALLOC_MODE_FALLOCATE;
}
if (flags & BDRV_O_RDWR) {
s->header->inuse = cpu_to_le32(HEADER_INUSE_MAGIC);
ret = parallels_update_header(bs);
if (ret < 0) {
goto fail;
}
}
s->bat_dirty_block = 4 * getpagesize();
s->bat_dirty_bmap =
bitmap_new(DIV_ROUND_UP(s->header_size, s->bat_dirty_block));
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
qemu_co_mutex_init(&s->lock);
return 0;
@@ -697,67 +142,67 @@ fail_format:
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
fail:
qemu_vfree(s->header);
g_free(s->catalog_bitmap);
return ret;
fail_options:
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVParallelsState *s = bs->opaque;
uint32_t index, offset;
index = sector_num / s->tracks;
offset = sector_num % s->tracks;
/* not allocated */
if ((index >= s->catalog_size) || (s->catalog_bitmap[index] == 0))
return -1;
return
((uint64_t)s->catalog_bitmap[index] * s->off_multiplier + offset) * 512;
}
static int parallels_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
while (nb_sectors > 0) {
int64_t position = seek_to_sector(bs, sector_num);
if (position >= 0) {
if (bdrv_pread(bs->file, position, buf, 512) != 512)
return -1;
} else {
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
buf += 512;
}
return 0;
}
static coroutine_fn int parallels_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVParallelsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = parallels_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void parallels_close(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
if (bs->open_flags & BDRV_O_RDWR) {
s->header->inuse = 0;
parallels_update_header(bs);
}
if (bs->open_flags & BDRV_O_RDWR) {
bdrv_truncate(bs->file->bs, s->data_end << BDRV_SECTOR_BITS);
}
g_free(s->bat_dirty_bmap);
qemu_vfree(s->header);
g_free(s->catalog_bitmap);
}
static QemuOptsList parallels_create_opts = {
.name = "parallels-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(parallels_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size",
},
{
.name = BLOCK_OPT_CLUSTER_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Parallels image cluster size",
.def_value_str = stringify(DEFAULT_CLUSTER_SIZE),
},
{ /* end of list */ }
}
};
static BlockDriver bdrv_parallels = {
.format_name = "parallels",
.instance_size = sizeof(BDRVParallelsState),
.bdrv_probe = parallels_probe,
.bdrv_open = parallels_open,
.bdrv_read = parallels_co_read,
.bdrv_close = parallels_close,
.bdrv_co_get_block_status = parallels_co_get_block_status,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_flush_to_os = parallels_co_flush_to_os,
.bdrv_co_readv = parallels_co_readv,
.bdrv_co_writev = parallels_co_writev,
.bdrv_create = parallels_create,
.bdrv_check = parallels_check,
.create_opts = &parallels_create_opts,
};
static void bdrv_parallels_init(void)
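Both versions of the driver above map a virtual sector through the BAT/catalog the same way: index = sector / tracks, offset = sector % tracks, and a zero table entry means the cluster is unallocated (reads return zeroes). A standalone sketch of that lookup (illustrative types and values, not QEMU's):

/* Standalone sketch of the Parallels BAT lookup: a virtual sector maps
 * to BAT[index] * off_multiplier + offset, where 'tracks' is the cluster
 * size in sectors and a zero BAT entry means "unallocated". */
#include <stdint.h>
#include <stdio.h>

static int64_t seek_to_sector(const uint32_t *bat, uint32_t bat_size,
                              uint32_t tracks, uint32_t off_multiplier,
                              int64_t sector_num)
{
    uint32_t index  = sector_num / tracks;   /* which cluster */
    uint32_t offset = sector_num % tracks;   /* sector within the cluster */

    if (index >= bat_size || bat[index] == 0) {
        return -1;                           /* not allocated */
    }
    return (int64_t)bat[index] * off_multiplier + offset;
}

int main(void)
{
    /* toy image: 2048-sector clusters, first cluster stored at sector 256 */
    uint32_t bat[4] = { 256, 0, 4096, 0 };

    printf("sector 10   -> %lld\n",
           (long long)seek_to_sector(bat, 4, 2048, 1, 10));     /* 266 */
    printf("sector 3000 -> %lld\n",
           (long long)seek_to_sector(bat, 4, 2048, 1, 3000));   /* -1, unallocated */
    return 0;
}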

View File

@@ -22,23 +22,16 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/qapi.h"
#include "block/block_int.h"
#include "block/throttle-groups.h"
#include "block/write-threshold.h"
#include "qmp-commands.h"
#include "qapi-visit.h"
#include "qapi/qmp-output-visitor.h"
#include "qapi/qmp/types.h"
#include "sysemu/block-backend.h"
#include "qemu/cutils.h"
BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
BlockDriverState *bs, Error **errp)
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs)
{
ImageInfo **p_image_info;
BlockDriverState *bs0;
BlockDeviceInfo *info = g_malloc0(sizeof(*info));
info->file = g_strdup(bs->filename);
@@ -47,13 +40,6 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
info->encrypted = bs->encrypted;
info->encryption_key_missing = bdrv_key_required(bs);
info->cache = g_new(BlockdevCacheInfo, 1);
*info->cache = (BlockdevCacheInfo) {
.writeback = blk ? blk_enable_write_cache(blk) : true,
.direct = !!(bs->open_flags & BDRV_O_NOCACHE),
.no_flush = !!(bs->open_flags & BDRV_O_NO_FLUSH),
};
if (bs->node_name[0]) {
info->has_node_name = true;
info->node_name = g_strdup(bs->node_name);
@@ -67,11 +53,9 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
info->backing_file_depth = bdrv_get_backing_file_depth(bs);
info->detect_zeroes = bs->detect_zeroes;
if (blk && blk_get_public(blk)->throttle_state) {
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_group_get_config(blk, &cfg);
throttle_get_config(&bs->throttle_state, &cfg);
info->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
@@ -94,52 +78,8 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
info->has_iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->has_bps_max_length = info->has_bps_max;
info->bps_max_length =
cfg.buckets[THROTTLE_BPS_TOTAL].burst_length;
info->has_bps_rd_max_length = info->has_bps_rd_max;
info->bps_rd_max_length =
cfg.buckets[THROTTLE_BPS_READ].burst_length;
info->has_bps_wr_max_length = info->has_bps_wr_max;
info->bps_wr_max_length =
cfg.buckets[THROTTLE_BPS_WRITE].burst_length;
info->has_iops_max_length = info->has_iops_max;
info->iops_max_length =
cfg.buckets[THROTTLE_OPS_TOTAL].burst_length;
info->has_iops_rd_max_length = info->has_iops_rd_max;
info->iops_rd_max_length =
cfg.buckets[THROTTLE_OPS_READ].burst_length;
info->has_iops_wr_max_length = info->has_iops_wr_max;
info->iops_wr_max_length =
cfg.buckets[THROTTLE_OPS_WRITE].burst_length;
info->has_iops_size = cfg.op_size;
info->iops_size = cfg.op_size;
info->has_group = true;
info->group = g_strdup(throttle_group_get_name(blk));
}
info->write_threshold = bdrv_write_threshold_get(bs);
bs0 = bs;
p_image_info = &info->image;
while (1) {
Error *local_err = NULL;
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
qapi_free_BlockDeviceInfo(info);
return NULL;
}
if (bs0->drv && bs0->backing) {
bs0 = bs0->backing->bs;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
return info;
@@ -228,18 +168,17 @@ void bdrv_query_image_info(BlockDriverState *bs,
{
int64_t size;
const char *backing_filename;
char backing_filename2[1024];
BlockDriverInfo bdi;
int ret;
Error *err = NULL;
ImageInfo *info;
aio_context_acquire(bdrv_get_aio_context(bs));
size = bdrv_getlength(bs);
if (size < 0) {
error_setg_errno(errp, -size, "Can't get size of device '%s'",
bdrv_get_device_name(bs));
goto out;
return;
}
info = g_new0(ImageInfo, 1);
@@ -265,23 +204,14 @@ void bdrv_query_image_info(BlockDriverState *bs,
backing_filename = bs->backing_file;
if (backing_filename[0] != '\0') {
char *backing_filename2 = g_malloc0(PATH_MAX);
info->backing_filename = g_strdup(backing_filename);
info->has_backing_filename = true;
bdrv_get_full_backing_filename(bs, backing_filename2, PATH_MAX, &err);
if (err) {
/* Can't reconstruct the full backing filename, so we must omit
* this field and apply a Best Effort to this query. */
g_free(backing_filename2);
backing_filename2 = NULL;
error_free(err);
err = NULL;
}
bdrv_get_full_backing_filename(bs, backing_filename2,
sizeof(backing_filename2));
/* Always report the full_backing_filename if present, even if it's the
 * same as backing_filename. That they are the same is useful info. */
if (backing_filename2) {
info->full_backing_filename = g_strdup(backing_filename2);
if (strcmp(backing_filename, backing_filename2) != 0) {
info->full_backing_filename =
g_strdup(backing_filename2);
info->has_full_backing_filename = true;
}
@@ -289,7 +219,6 @@ void bdrv_query_image_info(BlockDriverState *bs,
info->backing_filename_format = g_strdup(bs->backing_format);
info->has_backing_filename_format = true;
}
g_free(backing_filename2);
}
ret = bdrv_query_snapshot_info_list(bs, &info->snapshots, &err);
@@ -307,13 +236,10 @@ void bdrv_query_image_info(BlockDriverState *bs,
default:
error_propagate(errp, err);
qapi_free_ImageInfo(info);
goto out;
return;
}
*p_info = info;
out:
aio_context_release(bdrv_get_aio_context(bs));
}
/* @p_info will be set only on success. */
@@ -322,31 +248,48 @@ static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
{
BlockInfo *info = g_malloc0(sizeof(*info));
BlockDriverState *bs = blk_bs(blk);
BlockDriverState *bs0;
ImageInfo **p_image_info;
Error *local_err = NULL;
info->device = g_strdup(blk_name(blk));
info->type = g_strdup("unknown");
info->locked = blk_dev_is_medium_locked(blk);
info->removable = blk_dev_has_removable_media(blk);
if (blk_dev_has_tray(blk)) {
if (blk_dev_has_removable_media(blk)) {
info->has_tray_open = true;
info->tray_open = blk_dev_is_tray_open(blk);
}
if (blk_iostatus_is_enabled(blk)) {
if (bdrv_iostatus_is_enabled(bs)) {
info->has_io_status = true;
info->io_status = blk_iostatus(blk);
info->io_status = bs->iostatus;
}
if (bs && !QLIST_EMPTY(&bs->dirty_bitmaps)) {
if (!QLIST_EMPTY(&bs->dirty_bitmaps)) {
info->has_dirty_bitmaps = true;
info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
}
if (bs && bs->drv) {
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(blk, bs, errp);
if (info->inserted == NULL) {
goto err;
info->inserted = bdrv_block_device_info(bs);
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
}
@@ -357,115 +300,37 @@ static void bdrv_query_info(BlockBackend *blk, BlockInfo **p_info,
qapi_free_BlockInfo(info);
}
static BlockStats *bdrv_query_stats(BlockBackend *blk,
const BlockDriverState *bs,
bool query_backing);
static void bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk)
{
BlockAcctStats *stats = blk_get_stats(blk);
BlockAcctTimedStats *ts = NULL;
ds->rd_bytes = stats->nr_bytes[BLOCK_ACCT_READ];
ds->wr_bytes = stats->nr_bytes[BLOCK_ACCT_WRITE];
ds->rd_operations = stats->nr_ops[BLOCK_ACCT_READ];
ds->wr_operations = stats->nr_ops[BLOCK_ACCT_WRITE];
ds->failed_rd_operations = stats->failed_ops[BLOCK_ACCT_READ];
ds->failed_wr_operations = stats->failed_ops[BLOCK_ACCT_WRITE];
ds->failed_flush_operations = stats->failed_ops[BLOCK_ACCT_FLUSH];
ds->invalid_rd_operations = stats->invalid_ops[BLOCK_ACCT_READ];
ds->invalid_wr_operations = stats->invalid_ops[BLOCK_ACCT_WRITE];
ds->invalid_flush_operations =
stats->invalid_ops[BLOCK_ACCT_FLUSH];
ds->rd_merged = stats->merged[BLOCK_ACCT_READ];
ds->wr_merged = stats->merged[BLOCK_ACCT_WRITE];
ds->flush_operations = stats->nr_ops[BLOCK_ACCT_FLUSH];
ds->wr_total_time_ns = stats->total_time_ns[BLOCK_ACCT_WRITE];
ds->rd_total_time_ns = stats->total_time_ns[BLOCK_ACCT_READ];
ds->flush_total_time_ns = stats->total_time_ns[BLOCK_ACCT_FLUSH];
ds->has_idle_time_ns = stats->last_access_time_ns > 0;
if (ds->has_idle_time_ns) {
ds->idle_time_ns = block_acct_idle_time_ns(stats);
}
ds->account_invalid = stats->account_invalid;
ds->account_failed = stats->account_failed;
while ((ts = block_acct_interval_next(stats, ts))) {
BlockDeviceTimedStatsList *timed_stats =
g_malloc0(sizeof(*timed_stats));
BlockDeviceTimedStats *dev_stats = g_malloc0(sizeof(*dev_stats));
timed_stats->next = ds->timed_stats;
timed_stats->value = dev_stats;
ds->timed_stats = timed_stats;
TimedAverage *rd = &ts->latency[BLOCK_ACCT_READ];
TimedAverage *wr = &ts->latency[BLOCK_ACCT_WRITE];
TimedAverage *fl = &ts->latency[BLOCK_ACCT_FLUSH];
dev_stats->interval_length = ts->interval_length;
dev_stats->min_rd_latency_ns = timed_average_min(rd);
dev_stats->max_rd_latency_ns = timed_average_max(rd);
dev_stats->avg_rd_latency_ns = timed_average_avg(rd);
dev_stats->min_wr_latency_ns = timed_average_min(wr);
dev_stats->max_wr_latency_ns = timed_average_max(wr);
dev_stats->avg_wr_latency_ns = timed_average_avg(wr);
dev_stats->min_flush_latency_ns = timed_average_min(fl);
dev_stats->max_flush_latency_ns = timed_average_max(fl);
dev_stats->avg_flush_latency_ns = timed_average_avg(fl);
dev_stats->avg_rd_queue_depth =
block_acct_queue_depth(ts, BLOCK_ACCT_READ);
dev_stats->avg_wr_queue_depth =
block_acct_queue_depth(ts, BLOCK_ACCT_WRITE);
}
}
static void bdrv_query_bds_stats(BlockStats *s, const BlockDriverState *bs,
bool query_backing)
{
if (bdrv_get_node_name(bs)[0]) {
s->has_node_name = true;
s->node_name = g_strdup(bdrv_get_node_name(bs));
}
s->stats->wr_highest_offset = bs->wr_highest_offset;
if (bs->file) {
s->has_parent = true;
s->parent = bdrv_query_stats(NULL, bs->file->bs, query_backing);
}
if (query_backing && bs->backing) {
s->has_backing = true;
s->backing = bdrv_query_stats(NULL, bs->backing->bs, query_backing);
}
}
static BlockStats *bdrv_query_stats(BlockBackend *blk,
const BlockDriverState *bs,
bool query_backing)
static BlockStats *bdrv_query_stats(const BlockDriverState *bs)
{
BlockStats *s;
s = g_malloc0(sizeof(*s));
s->stats = g_malloc0(sizeof(*s->stats));
if (blk) {
if (bdrv_get_device_name(bs)[0]) {
s->has_device = true;
s->device = g_strdup(blk_name(blk));
bdrv_query_blk_stats(s->stats, blk);
s->device = g_strdup(bdrv_get_device_name(bs));
}
if (bs) {
bdrv_query_bds_stats(s, bs, query_backing);
s->stats = g_malloc0(sizeof(*s->stats));
s->stats->rd_bytes = bs->stats.nr_bytes[BLOCK_ACCT_READ];
s->stats->wr_bytes = bs->stats.nr_bytes[BLOCK_ACCT_WRITE];
s->stats->rd_operations = bs->stats.nr_ops[BLOCK_ACCT_READ];
s->stats->wr_operations = bs->stats.nr_ops[BLOCK_ACCT_WRITE];
s->stats->wr_highest_offset =
bs->stats.wr_highest_sector * BDRV_SECTOR_SIZE;
s->stats->flush_operations = bs->stats.nr_ops[BLOCK_ACCT_FLUSH];
s->stats->wr_total_time_ns = bs->stats.total_time_ns[BLOCK_ACCT_WRITE];
s->stats->rd_total_time_ns = bs->stats.total_time_ns[BLOCK_ACCT_READ];
s->stats->flush_total_time_ns = bs->stats.total_time_ns[BLOCK_ACCT_FLUSH];
if (bs->file) {
s->has_parent = true;
s->parent = bdrv_query_stats(bs->file);
}
if (bs->backing_hd) {
s->has_backing = true;
s->backing = bdrv_query_stats(bs->backing_hd);
}
return s;
@@ -482,9 +347,7 @@ BlockInfoList *qmp_query_block(Error **errp)
bdrv_query_info(blk, &info->value, &local_err);
if (local_err) {
error_propagate(errp, local_err);
g_free(info);
qapi_free_BlockInfoList(head);
return NULL;
goto err;
}
*p_next = info;
@@ -492,40 +355,23 @@ BlockInfoList *qmp_query_block(Error **errp)
}
return head;
err:
qapi_free_BlockInfoList(head);
return NULL;
}
static bool next_query_bds(BlockBackend **blk, BlockDriverState **bs,
bool query_nodes)
{
if (query_nodes) {
*bs = bdrv_next_node(*bs);
return !!*bs;
}
*blk = blk_next(*blk);
*bs = *blk ? blk_bs(*blk) : NULL;
return !!*blk;
}
BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
bool query_nodes,
Error **errp)
BlockStatsList *qmp_query_blockstats(Error **errp)
{
BlockStatsList *head = NULL, **p_next = &head;
BlockBackend *blk = NULL;
BlockDriverState *bs = NULL;
/* Just to be safe if query_nodes is not always initialized */
query_nodes = has_query_nodes && query_nodes;
while (next_query_bds(&blk, &bs, query_nodes)) {
while ((bs = bdrv_next(bs))) {
BlockStatsList *info = g_malloc0(sizeof(*info));
AioContext *ctx = blk ? blk_get_aio_context(blk)
: bdrv_get_aio_context(bs);
AioContext *ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
info->value = bdrv_query_stats(blk, bs, !query_nodes);
info->value = bdrv_query_stats(bs);
aio_context_release(ctx);
*p_next = info;
@@ -539,7 +385,7 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
static char *get_human_readable_size(char *buf, int buf_size, int64_t size)
{
static const char suffixes[NB_SUFFIXES] = {'K', 'M', 'G', 'T'};
static const char suffixes[NB_SUFFIXES] = "KMGT";
int64_t base;
int i;
@@ -635,9 +481,18 @@ static void dump_qobject(fprintf_function func_fprintf, void *f,
}
case QTYPE_QBOOL: {
QBool *value = qobject_to_qbool(obj);
func_fprintf(f, "%s", qbool_get_bool(value) ? "true" : "false");
func_fprintf(f, "%s", qbool_get_int(value) ? "true" : "false");
break;
}
case QTYPE_QERROR: {
QString *value = qerror_human((QError *)obj);
func_fprintf(f, "%s", qstring_get_str(value));
QDECREF(value);
break;
}
case QTYPE_NONE:
break;
case QTYPE_MAX:
default:
abort();
}
@@ -650,10 +505,11 @@ static void dump_qlist(fprintf_function func_fprintf, void *f, int indentation,
int i = 0;
for (entry = qlist_first(list); entry; entry = qlist_next(entry), i++) {
QType type = qobject_type(entry->value);
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
func_fprintf(f, "%*s[%i]:%c", indentation * 4, "", i,
composite ? '\n' : ' ');
const char *format = composite ? "%*s[%i]:\n" : "%*s[%i]: ";
func_fprintf(f, format, indentation * 4, "", i);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
@@ -667,9 +523,10 @@ static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
const QDictEntry *entry;
for (entry = qdict_first(dict); entry; entry = qdict_next(dict, entry)) {
QType type = qobject_type(entry->value);
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
char *key = g_malloc(strlen(entry->key) + 1);
const char *format = composite ? "%*s%s:\n" : "%*s%s: ";
char key[strlen(entry->key) + 1];
int i;
/* replace dashes with spaces in key (variable) names */
@@ -677,13 +534,12 @@ static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
key[i] = entry->key[i] == '-' ? ' ' : entry->key[i];
}
key[i] = 0;
func_fprintf(f, "%*s%s:%c", indentation * 4, "", key,
composite ? '\n' : ' ');
func_fprintf(f, format, indentation * 4, "", key);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
}
g_free(key);
}
}
@@ -693,7 +549,7 @@ void bdrv_image_info_specific_dump(fprintf_function func_fprintf, void *f,
QmpOutputVisitor *ov = qmp_output_visitor_new();
QObject *obj, *data;
visit_type_ImageInfoSpecific(qmp_output_get_visitor(ov), NULL, &info_spec,
visit_type_ImageInfoSpecific(qmp_output_get_visitor(ov), &info_spec, NULL,
&error_abort);
obj = qmp_output_get_qobject(ov);
assert(qobject_type(obj) == QTYPE_QDICT);
@@ -737,10 +593,7 @@ void bdrv_image_info_dump(fprintf_function func_fprintf, void *f,
if (info->has_backing_filename) {
func_fprintf(f, "backing file: %s", info->backing_filename);
if (!info->has_full_backing_filename) {
func_fprintf(f, " (cannot determine actual path)");
} else if (strcmp(info->backing_filename,
info->full_backing_filename) != 0) {
if (info->has_full_backing_filename) {
func_fprintf(f, " (actual path: %s)", info->full_backing_filename);
}
func_fprintf(f, "\n");
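Both sides of the bdrv_query_info()/bdrv_block_device_info() hunks above walk the backing chain and hang one ImageInfo per layer off the previous layer's backing_image pointer. A standalone sketch of that chaining pattern (toy types, not QEMU's QAPI structs):

/* Standalone sketch: walk a backing chain and build one info record per
 * layer, linking each record to the next through backing_image. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Image {
    const char *filename;
    struct Image *backing;            /* NULL for the base image */
} Image;

typedef struct ImageInfo {
    char *filename;
    struct ImageInfo *backing_image;  /* next layer down, or NULL */
} ImageInfo;

static ImageInfo *query_image_chain(Image *img)
{
    ImageInfo *head = NULL, **p_info = &head;

    while (img) {                            /* one record per layer */
        ImageInfo *info = calloc(1, sizeof(*info));
        info->filename = strdup(img->filename);
        *p_info = info;
        p_info = &info->backing_image;       /* hook the next layer here */
        img = img->backing;
    }
    return head;
}

int main(void)
{
    Image base = { "base.qcow2", NULL };
    Image top  = { "overlay.qcow2", &base };

    for (ImageInfo *i = query_image_chain(&top); i; i = i->backing_image) {
        printf("%s\n", i->filename);
    }
    return 0;
}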

View File

@@ -21,17 +21,11 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "sysemu/block-backend.h"
#include "qemu/module.h"
#include "qemu/bswap.h"
#include <zlib.h>
#include "qapi/qmp/qerror.h"
#include "crypto/cipher.h"
#include "qemu/aes.h"
#include "migration/migration.h"
/**************************************************************/
@@ -77,8 +71,10 @@ typedef struct BDRVQcowState {
uint8_t *cluster_cache;
uint8_t *cluster_data;
uint64_t cluster_cache_offset;
QCryptoCipher *cipher; /* NULL if no key yet */
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
CoMutex lock;
Error *migration_blocker;
} BDRVQcowState;
@@ -105,7 +101,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
int ret;
QCowHeader header;
ret = bdrv_pread(bs->file->bs, 0, &header, sizeof(header));
ret = bdrv_pread(bs->file, 0, &header, sizeof(header));
if (ret < 0) {
goto fail;
}
@@ -124,7 +120,11 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
if (header.version != QCOW_VERSION) {
error_setg(errp, "Unsupported qcow version %" PRIu32, header.version);
char version[64];
snprintf(version, sizeof(version), "QCOW version %" PRIu32,
header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_name(bs), "qcow", version);
ret = -ENOTSUP;
goto fail;
}
@@ -153,21 +153,8 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
ret = -EINVAL;
goto fail;
}
if (!qcrypto_cipher_supports(QCRYPTO_CIPHER_ALG_AES_128)) {
error_setg(errp, "AES cipher not available");
ret = -EINVAL;
goto fail;
}
s->crypt_method_header = header.crypt_method;
if (s->crypt_method_header) {
if (bdrv_uses_whitelist() &&
s->crypt_method_header == QCOW_CRYPT_AES) {
error_report("qcow built-in AES encryption is deprecated");
error_printf("Support for it will be removed in a future release.\n"
"You can use 'qemu-img convert' to switch to an\n"
"unencrypted qcow image, or a LUKS raw image.\n");
}
bs->encrypted = 1;
}
s->cluster_bits = header.cluster_bits;
@@ -202,7 +189,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
ret = bdrv_pread(bs->file->bs, s->l1_table_offset, s->l1_table,
ret = bdrv_pread(bs->file, s->l1_table_offset, s->l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
goto fail;
@@ -214,7 +201,7 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
/* alloc L2 cache (max. 64k * 16 * 8 = 8 MB) */
s->l2_cache =
qemu_try_blockalign(bs->file->bs,
qemu_try_blockalign(bs->file,
s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
if (s->l2_cache == NULL) {
error_setg(errp, "Could not allocate L2 table cache");
@@ -228,12 +215,12 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
/* read the backing file name */
if (header.backing_file_offset != 0) {
len = header.backing_file_size;
if (len > 1023 || len >= sizeof(bs->backing_file)) {
if (len > 1023) {
error_setg(errp, "Backing file name too long");
ret = -EINVAL;
goto fail;
}
ret = bdrv_pread(bs->file->bs, header.backing_file_offset,
ret = bdrv_pread(bs->file, header.backing_file_offset,
bs->backing_file, len);
if (ret < 0) {
goto fail;
@@ -242,9 +229,9 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Disable migration when qcow images are used */
error_setg(&s->migration_blocker, "The qcow format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"qcow", bdrv_get_device_name(bs), "live migration");
migrate_add_blocker(s->migration_blocker);
qemu_co_mutex_init(&s->lock);
@@ -272,7 +259,6 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
BDRVQcowState *s = bs->opaque;
uint8_t keybuf[16];
int len, i;
Error *err;
memset(keybuf, 0, 16);
len = strlen(key);
@@ -283,68 +269,38 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
for(i = 0;i < len;i++) {
keybuf[i] = key[i];
}
assert(bs->encrypted);
s->crypt_method = s->crypt_method_header;
qcrypto_cipher_free(s->cipher);
s->cipher = qcrypto_cipher_new(
QCRYPTO_CIPHER_ALG_AES_128,
QCRYPTO_CIPHER_MODE_CBC,
keybuf, G_N_ELEMENTS(keybuf),
&err);
if (!s->cipher) {
/* XXX would be nice if errors in this method could
 * be properly propagated to the caller. Would need
* the bdrv_set_key() API signature to be fixed. */
error_free(err);
if (AES_set_encrypt_key(keybuf, 128, &s->aes_encrypt_key) != 0)
return -1;
if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
return -1;
}
return 0;
}
/* The crypt function is compatible with the linux cryptoloop
algorithm for < 4 GB images. NOTE: out_buf == in_buf is
supported */
static int encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc, Error **errp)
static void encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key)
{
union {
uint64_t ll[2];
uint8_t b[16];
} ivec;
int i;
int ret;
for(i = 0; i < nb_sectors; i++) {
ivec.ll[0] = cpu_to_le64(sector_num);
ivec.ll[1] = 0;
if (qcrypto_cipher_setiv(s->cipher,
ivec.b, G_N_ELEMENTS(ivec.b),
errp) < 0) {
return -1;
}
if (enc) {
ret = qcrypto_cipher_encrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
} else {
ret = qcrypto_cipher_decrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
}
if (ret < 0) {
return -1;
}
AES_cbc_encrypt(in_buf, out_buf, 512, key,
ivec.b, enc);
sector_num++;
in_buf += 512;
out_buf += 512;
}
return 0;
}
/* 'allocate' is:
@@ -378,13 +334,13 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
if (!allocate)
return 0;
/* allocate a new l2 entry */
l2_offset = bdrv_getlength(bs->file->bs);
l2_offset = bdrv_getlength(bs->file);
/* round to cluster size */
l2_offset = (l2_offset + s->cluster_size - 1) & ~(s->cluster_size - 1);
/* update the L1 entry */
s->l1_table[l1_index] = l2_offset;
tmp = cpu_to_be64(l2_offset);
if (bdrv_pwrite_sync(bs->file->bs,
if (bdrv_pwrite_sync(bs->file,
s->l1_table_offset + l1_index * sizeof(tmp),
&tmp, sizeof(tmp)) < 0)
return 0;
@@ -414,12 +370,11 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
l2_table = s->l2_cache + (min_index << s->l2_bits);
if (new_l2_table) {
memset(l2_table, 0, s->l2_size * sizeof(uint64_t));
if (bdrv_pwrite_sync(bs->file->bs, l2_offset, l2_table,
if (bdrv_pwrite_sync(bs->file, l2_offset, l2_table,
s->l2_size * sizeof(uint64_t)) < 0)
return 0;
} else {
if (bdrv_pread(bs->file->bs, l2_offset, l2_table,
s->l2_size * sizeof(uint64_t)) !=
if (bdrv_pread(bs->file, l2_offset, l2_table, s->l2_size * sizeof(uint64_t)) !=
s->l2_size * sizeof(uint64_t))
return 0;
}
@@ -440,42 +395,34 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
overwritten */
if (decompress_cluster(bs, cluster_offset) < 0)
return 0;
cluster_offset = bdrv_getlength(bs->file->bs);
cluster_offset = bdrv_getlength(bs->file);
cluster_offset = (cluster_offset + s->cluster_size - 1) &
~(s->cluster_size - 1);
/* write the cluster content */
if (bdrv_pwrite(bs->file->bs, cluster_offset, s->cluster_cache,
s->cluster_size) !=
if (bdrv_pwrite(bs->file, cluster_offset, s->cluster_cache, s->cluster_size) !=
s->cluster_size)
return -1;
} else {
cluster_offset = bdrv_getlength(bs->file->bs);
cluster_offset = bdrv_getlength(bs->file);
if (allocate == 1) {
/* round to cluster size */
cluster_offset = (cluster_offset + s->cluster_size - 1) &
~(s->cluster_size - 1);
bdrv_truncate(bs->file->bs, cluster_offset + s->cluster_size);
bdrv_truncate(bs->file, cluster_offset + s->cluster_size);
/* if encrypted, we must initialize the cluster
content which won't be written */
if (bs->encrypted &&
if (s->crypt_method &&
(n_end - n_start) < s->cluster_sectors) {
uint64_t start_sect;
assert(s->cipher);
start_sect = (offset & ~(s->cluster_size - 1)) >> 9;
memset(s->cluster_data + 512, 0x00, 512);
for(i = 0; i < s->cluster_sectors; i++) {
if (i < n_start || i >= n_end) {
Error *err = NULL;
if (encrypt_sectors(s, start_sect + i,
s->cluster_data,
s->cluster_data + 512, 1,
true, &err) < 0) {
error_free(err);
errno = EIO;
return -1;
}
if (bdrv_pwrite(bs->file->bs,
cluster_offset + i * 512,
encrypt_sectors(s, start_sect + i,
s->cluster_data,
s->cluster_data + 512, 1, 1,
&s->aes_encrypt_key);
if (bdrv_pwrite(bs->file, cluster_offset + i * 512,
s->cluster_data, 512) != 512)
return -1;
}
@@ -489,7 +436,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
/* update L2 table */
tmp = cpu_to_be64(cluster_offset);
l2_table[l2_index] = tmp;
if (bdrv_pwrite_sync(bs->file->bs, l2_offset + l2_index * sizeof(tmp),
if (bdrv_pwrite_sync(bs->file, l2_offset + l2_index * sizeof(tmp),
&tmp, sizeof(tmp)) < 0)
return 0;
}
@@ -497,7 +444,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
}
static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum, BlockDriverState **file)
int64_t sector_num, int nb_sectors, int *pnum)
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster, n;
@@ -514,11 +461,10 @@ static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
if (!cluster_offset) {
return 0;
}
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->cipher) {
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->crypt_method) {
return BDRV_BLOCK_DATA;
}
cluster_offset |= (index_in_cluster << BDRV_SECTOR_BITS);
*file = bs->file->bs;
return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | cluster_offset;
}
@@ -559,7 +505,7 @@ static int decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
if (s->cluster_cache_offset != coffset) {
csize = cluster_offset >> (63 - s->cluster_bits);
csize &= (s->cluster_size - 1);
ret = bdrv_pread(bs->file->bs, coffset, s->cluster_data, csize);
ret = bdrv_pread(bs->file, coffset, s->cluster_data, csize);
if (ret != csize)
return -1;
if (decompress_buffer(s->cluster_cache, s->cluster_size,
@@ -582,7 +528,6 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
Error *err = NULL;
if (qiov->niov > 1) {
buf = orig_buf = qemu_try_blockalign(bs, qiov->size);
@@ -607,13 +552,13 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
}
if (!cluster_offset) {
if (bs->backing) {
if (bs->backing_hd) {
/* read from the base image */
hd_iov.iov_base = (void *)buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->backing->bs, sector_num,
ret = bdrv_co_readv(bs->backing_hd, sector_num,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
@@ -638,19 +583,17 @@ static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->file->bs,
ret = bdrv_co_readv(bs->file,
(cluster_offset >> 9) + index_in_cluster,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
break;
}
if (bs->encrypted) {
assert(s->cipher);
if (encrypt_sectors(s, sector_num, buf, buf,
n, false, &err) < 0) {
goto fail;
}
if (s->crypt_method) {
encrypt_sectors(s, sector_num, buf, buf,
n, 0,
&s->aes_decrypt_key);
}
}
ret = 0;
@@ -671,7 +614,6 @@ done:
return ret;
fail:
error_free(err);
ret = -EIO;
goto done;
}
@@ -719,18 +661,12 @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
ret = -EIO;
break;
}
if (bs->encrypted) {
Error *err = NULL;
assert(s->cipher);
if (s->crypt_method) {
if (!cluster_data) {
cluster_data = g_malloc0(s->cluster_size);
}
if (encrypt_sectors(s, sector_num, cluster_data, buf,
n, true, &err) < 0) {
error_free(err);
ret = -EIO;
break;
}
encrypt_sectors(s, sector_num, cluster_data, buf,
n, 1, &s->aes_encrypt_key);
src_buf = cluster_data;
} else {
src_buf = buf;
@@ -740,7 +676,7 @@ static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_writev(bs->file->bs,
ret = bdrv_co_writev(bs->file,
(cluster_offset >> 9) + index_in_cluster,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
@@ -767,8 +703,6 @@ static void qcow_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
qcrypto_cipher_free(s->cipher);
s->cipher = NULL;
g_free(s->l1_table);
qemu_vfree(s->l2_cache);
g_free(s->cluster_cache);
@@ -788,7 +722,7 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
int flags = 0;
Error *local_err = NULL;
int ret;
BlockBackend *qcow_blk;
BlockDriverState *qcow_bs;
/* Read out options */
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
@@ -804,17 +738,15 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
goto cleanup;
}
qcow_blk = blk_new_open(filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, &local_err);
if (qcow_blk == NULL) {
qcow_bs = NULL;
ret = bdrv_open(&qcow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
ret = -EIO;
goto cleanup;
}
blk_set_allow_write_beyond_eof(qcow_blk, true);
ret = blk_truncate(qcow_blk, 0);
ret = bdrv_truncate(qcow_bs, 0);
if (ret < 0) {
goto exit;
}
@@ -854,14 +786,14 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
}
/* write all the data */
ret = blk_pwrite(qcow_blk, 0, &header, sizeof(header), 0);
ret = bdrv_pwrite(qcow_bs, 0, &header, sizeof(header));
if (ret != sizeof(header)) {
goto exit;
}
if (backing_file) {
ret = blk_pwrite(qcow_blk, sizeof(header),
backing_file, backing_filename_len, 0);
ret = bdrv_pwrite(qcow_bs, sizeof(header),
backing_file, backing_filename_len);
if (ret != backing_filename_len) {
goto exit;
}
@@ -870,8 +802,8 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
tmp = g_malloc0(BDRV_SECTOR_SIZE);
for (i = 0; i < ((sizeof(uint64_t)*l1_size + BDRV_SECTOR_SIZE - 1)/
BDRV_SECTOR_SIZE); i++) {
ret = blk_pwrite(qcow_blk, header_size + BDRV_SECTOR_SIZE * i,
tmp, BDRV_SECTOR_SIZE, 0);
ret = bdrv_pwrite(qcow_bs, header_size +
BDRV_SECTOR_SIZE*i, tmp, BDRV_SECTOR_SIZE);
if (ret != BDRV_SECTOR_SIZE) {
g_free(tmp);
goto exit;
@@ -881,7 +813,7 @@ static int qcow_create(const char *filename, QemuOpts *opts, Error **errp)
g_free(tmp);
ret = 0;
exit:
blk_unref(qcow_blk);
bdrv_unref(qcow_bs);
cleanup:
g_free(backing_file);
return ret;
@@ -894,10 +826,10 @@ static int qcow_make_empty(BlockDriverState *bs)
int ret;
memset(s->l1_table, 0, l1_length);
if (bdrv_pwrite_sync(bs->file->bs, s->l1_table_offset, s->l1_table,
if (bdrv_pwrite_sync(bs->file, s->l1_table_offset, s->l1_table,
l1_length) < 0)
return -1;
ret = bdrv_truncate(bs->file->bs, s->l1_table_offset + l1_length);
ret = bdrv_truncate(bs->file, s->l1_table_offset + l1_length);
if (ret < 0)
return ret;
@@ -977,7 +909,7 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
}
cluster_offset &= s->cluster_offset_mask;
ret = bdrv_pwrite(bs->file->bs, cluster_offset, out_buf, out_len);
ret = bdrv_pwrite(bs->file, cluster_offset, out_buf, out_len);
if (ret < 0) {
goto fail;
}
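(A minimal sketch for context, not code from this tree: the qcow encrypt/decrypt paths above work on 512-byte sectors in AES-CBC mode, with an IV derived from the guest sector number as below.)

#include <stdint.h>
#include <string.h>

/* Illustrative helper (assumed name, not from this tree): build the 16-byte
 * CBC IV that the encrypt_sectors() calls above derive from the guest sector
 * number - the low 8 bytes hold the sector number in little-endian order,
 * the high 8 bytes are zero. Each 512-byte sector is then encrypted with
 * this IV. */
static void qcow_sector_iv(uint64_t sector_num, uint8_t iv[16])
{
    int i;

    memset(iv, 0, 16);
    for (i = 0; i < 8; i++) {
        iv[i] = (uint8_t)(sector_num >> (8 * i));
    }
}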


@@ -22,132 +22,68 @@
* THE SOFTWARE.
*/
/* Needed for CONFIG_MADVISE */
#include "qemu/osdep.h"
#if defined(CONFIG_MADVISE) || defined(CONFIG_POSIX_MADVISE)
#include <sys/mman.h>
#endif
#include "block/block_int.h"
#include "qemu-common.h"
#include "qcow2.h"
#include "trace.h"
typedef struct Qcow2CachedTable {
int64_t offset;
uint64_t lru_counter;
int ref;
bool dirty;
void* table;
int64_t offset;
bool dirty;
int cache_hits;
int ref;
} Qcow2CachedTable;
struct Qcow2Cache {
Qcow2CachedTable *entries;
struct Qcow2Cache *depends;
Qcow2CachedTable* entries;
struct Qcow2Cache* depends;
int size;
bool depends_on_flush;
void *table_array;
uint64_t lru_counter;
uint64_t cache_clean_lru_counter;
};
static inline void *qcow2_cache_get_table_addr(BlockDriverState *bs,
Qcow2Cache *c, int table)
{
BDRVQcow2State *s = bs->opaque;
return (uint8_t *) c->table_array + (size_t) table * s->cluster_size;
}
static inline int qcow2_cache_get_table_idx(BlockDriverState *bs,
Qcow2Cache *c, void *table)
{
BDRVQcow2State *s = bs->opaque;
ptrdiff_t table_offset = (uint8_t *) table - (uint8_t *) c->table_array;
int idx = table_offset / s->cluster_size;
assert(idx >= 0 && idx < c->size && table_offset % s->cluster_size == 0);
return idx;
}
static void qcow2_cache_table_release(BlockDriverState *bs, Qcow2Cache *c,
int i, int num_tables)
{
#if QEMU_MADV_DONTNEED != QEMU_MADV_INVALID
BDRVQcow2State *s = bs->opaque;
void *t = qcow2_cache_get_table_addr(bs, c, i);
int align = getpagesize();
size_t mem_size = (size_t) s->cluster_size * num_tables;
size_t offset = QEMU_ALIGN_UP((uintptr_t) t, align) - (uintptr_t) t;
size_t length = QEMU_ALIGN_DOWN(mem_size - offset, align);
if (length > 0) {
qemu_madvise((uint8_t *) t + offset, length, QEMU_MADV_DONTNEED);
}
#endif
}
static inline bool can_clean_entry(Qcow2Cache *c, int i)
{
Qcow2CachedTable *t = &c->entries[i];
return t->ref == 0 && !t->dirty && t->offset != 0 &&
t->lru_counter <= c->cache_clean_lru_counter;
}
void qcow2_cache_clean_unused(BlockDriverState *bs, Qcow2Cache *c)
{
int i = 0;
while (i < c->size) {
int to_clean = 0;
/* Skip the entries that we don't need to clean */
while (i < c->size && !can_clean_entry(c, i)) {
i++;
}
/* And count how many we can clean in a row */
while (i < c->size && can_clean_entry(c, i)) {
c->entries[i].offset = 0;
c->entries[i].lru_counter = 0;
i++;
to_clean++;
}
if (to_clean > 0) {
qcow2_cache_table_release(bs, c, i - to_clean, to_clean);
}
}
c->cache_clean_lru_counter = c->lru_counter;
}
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
Qcow2Cache *c;
int i;
c = g_new0(Qcow2Cache, 1);
c->size = num_tables;
c->entries = g_try_new0(Qcow2CachedTable, num_tables);
c->table_array = qemu_try_blockalign(bs->file->bs,
(size_t) num_tables * s->cluster_size);
if (!c->entries) {
goto fail;
}
if (!c->entries || !c->table_array) {
qemu_vfree(c->table_array);
g_free(c->entries);
g_free(c);
c = NULL;
for (i = 0; i < c->size; i++) {
c->entries[i].table = qemu_try_blockalign(bs->file, s->cluster_size);
if (c->entries[i].table == NULL) {
goto fail;
}
}
return c;
fail:
if (c->entries) {
for (i = 0; i < c->size; i++) {
qemu_vfree(c->entries[i].table);
}
}
g_free(c->entries);
g_free(c);
return NULL;
}
int qcow2_cache_destroy(BlockDriverState *bs, Qcow2Cache *c)
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c)
{
int i;
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
qemu_vfree(c->entries[i].table);
}
qemu_vfree(c->table_array);
g_free(c->entries);
g_free(c);
@@ -171,7 +107,7 @@ static int qcow2_cache_flush_dependency(BlockDriverState *bs, Qcow2Cache *c)
static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int ret = 0;
if (!c->entries[i].dirty || !c->entries[i].offset) {
@@ -184,7 +120,7 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
if (c->depends) {
ret = qcow2_cache_flush_dependency(bs, c);
} else if (c->depends_on_flush) {
ret = bdrv_flush(bs->file->bs);
ret = bdrv_flush(bs->file);
if (ret >= 0) {
c->depends_on_flush = false;
}
@@ -215,8 +151,8 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE);
}
ret = bdrv_pwrite(bs->file->bs, c->entries[i].offset,
qcow2_cache_get_table_addr(bs, c, i), s->cluster_size);
ret = bdrv_pwrite(bs->file, c->entries[i].offset, c->entries[i].table,
s->cluster_size);
if (ret < 0) {
return ret;
}
@@ -228,7 +164,7 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int result = 0;
int ret;
int i;
@@ -243,7 +179,7 @@ int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
}
if (result == 0) {
ret = bdrv_flush(bs->file->bs);
ret = bdrv_flush(bs->file);
if (ret < 0) {
result = ret;
}
@@ -292,55 +228,66 @@ int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c)
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
c->entries[i].offset = 0;
c->entries[i].lru_counter = 0;
c->entries[i].cache_hits = 0;
}
qcow2_cache_table_release(bs, c, 0, c->size);
c->lru_counter = 0;
return 0;
}
static int qcow2_cache_find_entry_to_replace(Qcow2Cache *c)
{
int i;
int min_count = INT_MAX;
int min_index = -1;
for (i = 0; i < c->size; i++) {
if (c->entries[i].ref) {
continue;
}
if (c->entries[i].cache_hits < min_count) {
min_index = i;
min_count = c->entries[i].cache_hits;
}
/* Give newer hits priority */
/* TODO Check how to optimize the replacement strategy */
c->entries[i].cache_hits /= 2;
}
if (min_index == -1) {
/* This can't happen in current synchronous code, but leave the check
* here as a reminder for whoever starts using AIO with the cache */
abort();
}
return min_index;
}
static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
uint64_t offset, void **table, bool read_from_disk)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i;
int ret;
int lookup_index;
uint64_t min_lru_counter = UINT64_MAX;
int min_lru_index = -1;
trace_qcow2_cache_get(qemu_coroutine_self(), c == s->l2_table_cache,
offset, read_from_disk);
/* Check if the table is already cached */
i = lookup_index = (offset / s->cluster_size * 4) % c->size;
do {
const Qcow2CachedTable *t = &c->entries[i];
if (t->offset == offset) {
for (i = 0; i < c->size; i++) {
if (c->entries[i].offset == offset) {
goto found;
}
if (t->ref == 0 && t->lru_counter < min_lru_counter) {
min_lru_counter = t->lru_counter;
min_lru_index = i;
}
if (++i == c->size) {
i = 0;
}
} while (i != lookup_index);
if (min_lru_index == -1) {
/* This can't happen in current synchronous code, but leave the check
* here as a reminder for whoever starts using AIO with the cache */
abort();
}
/* Cache miss: write a table back and replace it */
i = min_lru_index;
/* If not, write a table back and replace it */
i = qcow2_cache_find_entry_to_replace(c);
trace_qcow2_cache_get_replace_entry(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (i < 0) {
return i;
}
ret = qcow2_cache_entry_flush(bs, c, i);
if (ret < 0) {
@@ -355,20 +302,22 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
BLKDBG_EVENT(bs->file, BLKDBG_L2_LOAD);
}
ret = bdrv_pread(bs->file->bs, offset,
qcow2_cache_get_table_addr(bs, c, i),
s->cluster_size);
ret = bdrv_pread(bs->file, offset, c->entries[i].table, s->cluster_size);
if (ret < 0) {
return ret;
}
}
/* Give the table some hits for the start so that it won't be replaced
* immediately. The number 32 is completely arbitrary. */
c->entries[i].cache_hits = 32;
c->entries[i].offset = offset;
/* And return the right table */
found:
c->entries[i].cache_hits++;
c->entries[i].ref++;
*table = qcow2_cache_get_table_addr(bs, c, i);
*table = c->entries[i].table;
trace_qcow2_cache_get_done(qemu_coroutine_self(),
c == s->l2_table_cache, i);
@@ -388,24 +337,36 @@ int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
return qcow2_cache_do_get(bs, c, offset, table, false);
}
void qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
{
int i = qcow2_cache_get_table_idx(bs, c, *table);
int i;
for (i = 0; i < c->size; i++) {
if (c->entries[i].table == *table) {
goto found;
}
}
return -ENOENT;
found:
c->entries[i].ref--;
*table = NULL;
if (c->entries[i].ref == 0) {
c->entries[i].lru_counter = ++c->lru_counter;
}
assert(c->entries[i].ref >= 0);
return 0;
}
void qcow2_cache_entry_mark_dirty(BlockDriverState *bs, Qcow2Cache *c,
void *table)
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
{
int i = qcow2_cache_get_table_idx(bs, c, table);
assert(c->entries[i].offset != 0);
int i;
for (i = 0; i < c->size; i++) {
if (c->entries[i].table == table) {
goto found;
}
}
abort();
found:
c->entries[i].dirty = true;
}
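(For context, a minimal standalone sketch, with assumed names rather than QEMU's, of the replacement decision that the LRU-based qcow2_cache_do_get() variant above makes on a cache miss.)

#include <stdint.h>

/* Hypothetical, reduced version of the cached-table bookkeeping above,
 * keeping only the fields the replacement decision needs. */
typedef struct {
    int ref;               /* entries still referenced can never be evicted */
    uint64_t lru_counter;  /* bumped in qcow2_cache_put() when ref drops to 0 */
} CacheEntry;

/* Sketch of the victim selection the LRU-based qcow2_cache_do_get() above
 * performs on a miss: scan the whole array starting from a hash of the
 * requested offset and remember the unreferenced entry with the smallest
 * lru_counter. Returns -1 if every entry is currently referenced. */
static int pick_lru_victim(const CacheEntry *entries, int size, int start)
{
    uint64_t min_lru = UINT64_MAX;
    int victim = -1;
    int i = start;

    do {
        if (entries[i].ref == 0 && entries[i].lru_counter < min_lru) {
            min_lru = entries[i].lru_counter;
            victim = i;
        }
        if (++i == size) {
            i = 0;
        }
    } while (i != start);

    return victim;
}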


@@ -22,20 +22,17 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include <zlib.h>
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "block/qcow2.h"
#include "qemu/bswap.h"
#include "trace.h"
int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
bool exact_size)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int new_l1_size2, ret, i;
uint64_t *new_l1_table;
int64_t old_l1_table_offset, old_l1_size;
@@ -75,7 +72,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
#endif
new_l1_size2 = sizeof(uint64_t) * new_l1_size;
new_l1_table = qemu_try_blockalign(bs->file->bs,
new_l1_table = qemu_try_blockalign(bs->file,
align_offset(new_l1_size2, 512));
if (new_l1_table == NULL) {
return -ENOMEM;
@@ -108,8 +105,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
BLKDBG_EVENT(bs->file, BLKDBG_L1_GROW_WRITE_TABLE);
for(i = 0; i < s->l1_size; i++)
new_l1_table[i] = cpu_to_be64(new_l1_table[i]);
ret = bdrv_pwrite_sync(bs->file->bs, new_l1_table_offset,
new_l1_table, new_l1_size2);
ret = bdrv_pwrite_sync(bs->file, new_l1_table_offset, new_l1_table, new_l1_size2);
if (ret < 0)
goto fail;
for(i = 0; i < s->l1_size; i++)
@@ -119,8 +115,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
BLKDBG_EVENT(bs->file, BLKDBG_L1_GROW_ACTIVATE_TABLE);
cpu_to_be32w((uint32_t*)data, new_l1_size);
stq_be_p(data + 4, new_l1_table_offset);
ret = bdrv_pwrite_sync(bs->file->bs, offsetof(QCowHeader, l1_size),
data, sizeof(data));
ret = bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, l1_size), data,sizeof(data));
if (ret < 0) {
goto fail;
}
@@ -153,7 +148,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
static int l2_load(BlockDriverState *bs, uint64_t l2_offset,
uint64_t **l2_table)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int ret;
ret = qcow2_cache_get(bs, s->l2_table_cache, l2_offset, (void**) l2_table);
@@ -168,7 +163,7 @@ static int l2_load(BlockDriverState *bs, uint64_t l2_offset,
#define L1_ENTRIES_PER_SECTOR (512 / 8)
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t buf[L1_ENTRIES_PER_SECTOR] = { 0 };
int l1_start_index;
int i, ret;
@@ -187,9 +182,8 @@ int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
}
BLKDBG_EVENT(bs->file, BLKDBG_L1_UPDATE);
ret = bdrv_pwrite_sync(bs->file->bs,
s->l1_table_offset + 8 * l1_start_index,
buf, sizeof(buf));
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset + 8 * l1_start_index,
buf, sizeof(buf));
if (ret < 0) {
return ret;
}
@@ -209,7 +203,7 @@ int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t old_l2_offset;
uint64_t *l2_table = NULL;
int64_t l2_offset;
@@ -259,14 +253,17 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
memcpy(l2_table, old_table, s->cluster_size);
qcow2_cache_put(bs, s->l2_table_cache, (void **) &old_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &old_table);
if (ret < 0) {
goto fail;
}
}
/* write the l2 table to the file */
BLKDBG_EVENT(bs->file, BLKDBG_L2_ALLOC_WRITE);
trace_qcow2_l2_allocate_write_l2(bs, l1_index);
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
ret = qcow2_cache_flush(bs, s->l2_table_cache);
if (ret < 0) {
goto fail;
@@ -304,7 +301,7 @@ fail:
* as contiguous. (This allows it, for example, to stop at the first compressed
* cluster which may require a different handling)
*/
static int count_contiguous_clusters(int nb_clusters, int cluster_size,
static int count_contiguous_clusters(uint64_t nb_clusters, int cluster_size,
uint64_t *l2_table, uint64_t stop_flags)
{
int i;
@@ -315,7 +312,7 @@ static int count_contiguous_clusters(int nb_clusters, int cluster_size,
if (!offset)
return 0;
assert(qcow2_get_cluster_type(first_entry) == QCOW2_CLUSTER_NORMAL);
assert(qcow2_get_cluster_type(first_entry) != QCOW2_CLUSTER_COMPRESSED);
for (i = 0; i < nb_clusters; i++) {
uint64_t l2_entry = be64_to_cpu(l2_table[i]) & mask;
@@ -327,16 +324,14 @@ static int count_contiguous_clusters(int nb_clusters, int cluster_size,
return i;
}
static int count_contiguous_clusters_by_type(int nb_clusters,
uint64_t *l2_table,
int wanted_type)
static int count_contiguous_free_clusters(uint64_t nb_clusters, uint64_t *l2_table)
{
int i;
for (i = 0; i < nb_clusters; i++) {
int type = qcow2_get_cluster_type(be64_to_cpu(l2_table[i]));
if (type != wanted_type) {
if (type != QCOW2_CLUSTER_UNALLOCATED) {
break;
}
}
@@ -347,47 +342,26 @@ static int count_contiguous_clusters_by_type(int nb_clusters,
/* The crypt function is compatible with the linux cryptoloop
algorithm for < 4 GB images. NOTE: out_buf == in_buf is
supported */
int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc,
Error **errp)
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key)
{
union {
uint64_t ll[2];
uint8_t b[16];
} ivec;
int i;
int ret;
for(i = 0; i < nb_sectors; i++) {
ivec.ll[0] = cpu_to_le64(sector_num);
ivec.ll[1] = 0;
if (qcrypto_cipher_setiv(s->cipher,
ivec.b, G_N_ELEMENTS(ivec.b),
errp) < 0) {
return -1;
}
if (enc) {
ret = qcrypto_cipher_encrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
} else {
ret = qcrypto_cipher_decrypt(s->cipher,
in_buf,
out_buf,
512,
errp);
}
if (ret < 0) {
return -1;
}
AES_cbc_encrypt(in_buf, out_buf, 512, key,
ivec.b, enc);
sector_num++;
in_buf += 512;
out_buf += 512;
}
return 0;
}
static int coroutine_fn copy_sectors(BlockDriverState *bs,
@@ -395,7 +369,7 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
uint64_t cluster_offset,
int n_start, int n_end)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QEMUIOVector qiov;
struct iovec iov;
int n, ret;
@@ -429,16 +403,10 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
goto out;
}
if (bs->encrypted) {
Error *err = NULL;
assert(s->cipher);
if (qcow2_encrypt_sectors(s, start_sect + n_start,
iov.iov_base, iov.iov_base, n,
true, &err) < 0) {
ret = -EIO;
error_free(err);
goto out;
}
if (s->crypt_method) {
qcow2_encrypt_sectors(s, start_sect + n_start,
iov.iov_base, iov.iov_base, n, 1,
&s->aes_encrypt_key);
}
ret = qcow2_pre_write_overlap_check(bs, 0,
@@ -448,8 +416,7 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
}
BLKDBG_EVENT(bs->file, BLKDBG_COW_WRITE);
ret = bdrv_co_writev(bs->file->bs, (cluster_offset >> 9) + n_start, n,
&qiov);
ret = bdrv_co_writev(bs->file, (cluster_offset >> 9) + n_start, n, &qiov);
if (ret < 0) {
goto out;
}
@@ -478,7 +445,7 @@ out:
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
unsigned int l2_index;
uint64_t l1_index, l2_offset, *l2_table;
int l1_bits, c;
@@ -504,11 +471,10 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
if (nb_needed > nb_available) {
nb_needed = nb_available;
}
assert(nb_needed <= INT_MAX);
*cluster_offset = 0;
/* seek to the l2 offset in the l1 table */
/* seek the the l2 offset in the l1 table */
l1_index = offset >> l1_bits;
if (l1_index >= s->l1_size) {
@@ -540,8 +506,6 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
l2_index = (offset >> s->cluster_bits) & (s->l2_size - 1);
*cluster_offset = be64_to_cpu(l2_table[l2_index]);
/* nb_needed <= INT_MAX, thus nb_clusters <= INT_MAX, too */
nb_clusters = size_to_clusters(s, nb_needed << 9);
ret = qcow2_get_cluster_type(*cluster_offset);
@@ -559,14 +523,13 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
ret = -EIO;
goto fail;
}
c = count_contiguous_clusters_by_type(nb_clusters, &l2_table[l2_index],
QCOW2_CLUSTER_ZERO);
c = count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], QCOW_OFLAG_ZERO);
*cluster_offset = 0;
break;
case QCOW2_CLUSTER_UNALLOCATED:
/* how many empty clusters ? */
c = count_contiguous_clusters_by_type(nb_clusters, &l2_table[l2_index],
QCOW2_CLUSTER_UNALLOCATED);
c = count_contiguous_free_clusters(nb_clusters, &l2_table[l2_index]);
*cluster_offset = 0;
break;
case QCOW2_CLUSTER_NORMAL:
@@ -619,13 +582,13 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
uint64_t **new_l2_table,
int *new_l2_index)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
unsigned int l2_index;
uint64_t l1_index, l2_offset;
uint64_t *l2_table = NULL;
int ret;
/* seek to the l2 offset in the l1 table */
/* seek the the l2 offset in the l1 table */
l1_index = offset >> (s->l2_bits + s->cluster_bits);
if (l1_index >= s->l1_size) {
@@ -693,7 +656,7 @@ uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int l2_index, ret;
uint64_t *l2_table;
int64_t cluster_offset;
@@ -729,16 +692,19 @@ uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
/* compressed clusters never have the copied flag */
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE_COMPRESSED);
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
l2_table[l2_index] = cpu_to_be64(cluster_offset);
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return 0;
}
return cluster_offset;
}
static int perform_cow(BlockDriverState *bs, QCowL2Meta *m, Qcow2COWRegion *r)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int ret;
if (r->nb_sectors == 0) {
@@ -767,7 +733,7 @@ static int perform_cow(BlockDriverState *bs, QCowL2Meta *m, Qcow2COWRegion *r)
int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i, j = 0, l2_index, ret;
uint64_t *old_cluster, *l2_table;
uint64_t cluster_offset = m->alloc_offset;
@@ -805,7 +771,7 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
if (ret < 0) {
goto err;
}
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
assert(l2_index + m->nb_clusters <= s->l2_size);
for (i = 0; i < m->nb_clusters; i++) {
@@ -823,10 +789,14 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
goto err;
}
/*
* If this was a COW, we need to decrease the refcount of the old cluster.
* Also flush bs->file to get the right order for L2 and refcount update.
*
* Don't discard clusters that reach a refcount of 0 (e.g. compressed
* clusters), the next write will reuse them anyway.
@@ -849,7 +819,7 @@ err:
* write, but require COW to be performed (this includes yet unallocated space,
* which must copy from the backing file)
*/
static int count_cow_clusters(BDRVQcow2State *s, int nb_clusters,
static int count_cow_clusters(BDRVQcowState *s, int nb_clusters,
uint64_t *l2_table, int l2_index)
{
int i;
@@ -895,7 +865,7 @@ out:
static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset,
uint64_t *cur_bytes, QCowL2Meta **m)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowL2Meta *old_alloc;
uint64_t bytes = *cur_bytes;
@@ -968,13 +938,13 @@ static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset,
static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
uint64_t *host_offset, uint64_t *bytes, QCowL2Meta **m)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int l2_index;
uint64_t cluster_offset;
uint64_t *l2_table;
uint64_t nb_clusters;
unsigned int nb_clusters;
unsigned int keep_clusters;
int ret;
int ret, pret;
trace_qcow2_handle_copied(qemu_coroutine_self(), guest_offset, *host_offset,
*bytes);
@@ -991,7 +961,6 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
l2_index = offset_to_l2_index(s, guest_offset);
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
assert(nb_clusters <= INT_MAX);
/* Find L2 entry for the first involved cluster */
ret = get_cluster_table(bs, guest_offset, &l2_table, &l2_index);
@@ -1042,7 +1011,10 @@ static int handle_copied(BlockDriverState *bs, uint64_t guest_offset,
/* Cleanup */
out:
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
pret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (pret < 0) {
return pret;
}
/* Only return a host offset if we actually made progress. Otherwise we
* would make requirements for handle_alloc() that it can't fulfill */
@@ -1074,9 +1046,9 @@ out:
* restarted, but the whole request should not be failed.
*/
static int do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
uint64_t *host_offset, uint64_t *nb_clusters)
uint64_t *host_offset, unsigned int *nb_clusters)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
trace_qcow2_do_alloc_clusters_offset(qemu_coroutine_self(), guest_offset,
*host_offset, *nb_clusters);
@@ -1092,7 +1064,7 @@ static int do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
*host_offset = cluster_offset;
return 0;
} else {
int64_t ret = qcow2_alloc_clusters_at(bs, *host_offset, *nb_clusters);
int ret = qcow2_alloc_clusters_at(bs, *host_offset, *nb_clusters);
if (ret < 0) {
return ret;
}
@@ -1124,11 +1096,11 @@ static int do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
uint64_t *host_offset, uint64_t *bytes, QCowL2Meta **m)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int l2_index;
uint64_t *l2_table;
uint64_t entry;
uint64_t nb_clusters;
unsigned int nb_clusters;
int ret;
uint64_t alloc_cluster_offset;
@@ -1146,7 +1118,6 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
l2_index = offset_to_l2_index(s, guest_offset);
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
assert(nb_clusters <= INT_MAX);
/* Find L2 entry for the first involved cluster */
ret = get_cluster_table(bs, guest_offset, &l2_table, &l2_index);
@@ -1168,7 +1139,10 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
* wrong with our code. */
assert(nb_clusters > 0);
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
/* Allocate, if necessary at a given offset in the image file */
alloc_cluster_offset = start_of_cluster(s, *host_offset);
@@ -1277,7 +1251,7 @@ fail:
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *host_offset, QCowL2Meta **m)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t start, remaining;
uint64_t cluster_offset;
uint64_t cur_bytes;
@@ -1411,7 +1385,7 @@ static int decompress_buffer(uint8_t *out_buf, int out_buf_size,
int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int ret, csize, nb_csectors, sector_offset;
uint64_t coffset;
@@ -1421,8 +1395,7 @@ int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
sector_offset = coffset & 511;
csize = nb_csectors * 512 - sector_offset;
BLKDBG_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
ret = bdrv_read(bs->file->bs, coffset >> 9, s->cluster_data,
nb_csectors);
ret = bdrv_read(bs->file, coffset >> 9, s->cluster_data, nb_csectors);
if (ret < 0) {
return ret;
}
@@ -1441,10 +1414,9 @@ int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
* clusters.
*/
static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
uint64_t nb_clusters, enum qcow2_discard_type type,
bool full_discard)
unsigned int nb_clusters, enum qcow2_discard_type type, bool full_discard)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table;
int l2_index;
int ret;
@@ -1457,7 +1429,6 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
/* Limit nb_clusters to one L2 table */
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
assert(nb_clusters <= INT_MAX);
for (i = 0; i < nb_clusters; i++) {
uint64_t old_l2_entry;
@@ -1479,7 +1450,7 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
*/
switch (qcow2_get_cluster_type(old_l2_entry)) {
case QCOW2_CLUSTER_UNALLOCATED:
if (full_discard || !bs->backing) {
if (full_discard || !bs->backing_hd) {
continue;
}
break;
@@ -1499,7 +1470,7 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
}
/* First remove L2 entries */
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (!full_discard && s->qcow_version >= 3) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
} else {
@@ -1510,7 +1481,10 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
qcow2_free_any_clusters(bs, old_l2_entry, 1, type);
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
return nb_clusters;
}
@@ -1518,9 +1492,9 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors, enum qcow2_discard_type type, bool full_discard)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t end_offset;
uint64_t nb_clusters;
unsigned int nb_clusters;
int ret;
end_offset = offset + (nb_sectors << BDRV_SECTOR_BITS);
@@ -1562,9 +1536,9 @@ fail:
* clusters.
*/
static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
uint64_t nb_clusters)
unsigned int nb_clusters)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table;
int l2_index;
int ret;
@@ -1577,7 +1551,6 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
/* Limit nb_clusters to one L2 table */
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
assert(nb_clusters <= INT_MAX);
for (i = 0; i < nb_clusters; i++) {
uint64_t old_offset;
@@ -1585,7 +1558,7 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
old_offset = be64_to_cpu(l2_table[l2_index + i]);
/* Update L2 entries */
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (old_offset & QCOW_OFLAG_COMPRESSED) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
qcow2_free_any_clusters(bs, old_offset, 1, QCOW2_DISCARD_REQUEST);
@@ -1594,15 +1567,18 @@ static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
}
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
return nb_clusters;
}
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors)
{
BDRVQcow2State *s = bs->opaque;
uint64_t nb_clusters;
BDRVQcowState *s = bs->opaque;
unsigned int nb_clusters;
int ret;
/* The zero flag is only supported by version 3 and newer */
@@ -1644,10 +1620,9 @@ fail:
static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
int l1_size, int64_t *visited_l1_entries,
int64_t l1_entries,
BlockDriverAmendStatusCB *status_cb,
void *cb_opaque)
BlockDriverAmendStatusCB *status_cb)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
bool is_active_l1 = (l1_table == s->l1_table);
uint64_t *l2_table = NULL;
int ret;
@@ -1656,7 +1631,7 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
if (!is_active_l1) {
/* inactive L2 tables require a buffer to be stored in when loading
* them from disk */
l2_table = qemu_try_blockalign(bs->file->bs, s->cluster_size);
l2_table = qemu_try_blockalign(bs->file, s->cluster_size);
if (l2_table == NULL) {
return -ENOMEM;
}
@@ -1665,41 +1640,33 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
for (i = 0; i < l1_size; i++) {
uint64_t l2_offset = l1_table[i] & L1E_OFFSET_MASK;
bool l2_dirty = false;
uint64_t l2_refcount;
int l2_refcount;
if (!l2_offset) {
/* unallocated */
(*visited_l1_entries)++;
if (status_cb) {
status_cb(bs, *visited_l1_entries, l1_entries, cb_opaque);
status_cb(bs, *visited_l1_entries, l1_entries);
}
continue;
}
if (offset_into_cluster(s, l2_offset)) {
qcow2_signal_corruption(bs, true, -1, -1, "L2 table offset %#"
PRIx64 " unaligned (L1 index: %#x)",
l2_offset, i);
ret = -EIO;
goto fail;
}
if (is_active_l1) {
/* get active L2 tables from cache */
ret = qcow2_cache_get(bs, s->l2_table_cache, l2_offset,
(void **)&l2_table);
} else {
/* load inactive L2 tables from disk */
ret = bdrv_read(bs->file->bs, l2_offset / BDRV_SECTOR_SIZE,
(void *)l2_table, s->cluster_sectors);
ret = bdrv_read(bs->file, l2_offset / BDRV_SECTOR_SIZE,
(void *)l2_table, s->cluster_sectors);
}
if (ret < 0) {
goto fail;
}
ret = qcow2_get_refcount(bs, l2_offset >> s->cluster_bits,
&l2_refcount);
if (ret < 0) {
l2_refcount = qcow2_get_refcount(bs, l2_offset >> s->cluster_bits);
if (l2_refcount < 0) {
ret = l2_refcount;
goto fail;
}
@@ -1714,7 +1681,7 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
}
if (!preallocated) {
if (!bs->backing) {
if (!bs->backing_hd) {
/* not backed; therefore we can simply deallocate the
* cluster */
l2_table[j] = 0;
@@ -1732,8 +1699,7 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
/* For shared L2 tables, set the refcount accordingly (it is
* already 1 and needs to be l2_refcount) */
ret = qcow2_update_cluster_refcount(bs,
offset >> s->cluster_bits,
refcount_diff(1, l2_refcount), false,
offset >> s->cluster_bits, l2_refcount - 1,
QCOW2_DISCARD_OTHER);
if (ret < 0) {
qcow2_free_clusters(bs, offset, s->cluster_size,
@@ -1743,19 +1709,6 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
}
}
if (offset_into_cluster(s, offset)) {
qcow2_signal_corruption(bs, true, -1, -1, "Data cluster offset "
"%#" PRIx64 " unaligned (L2 offset: %#"
PRIx64 ", L2 index: %#x)", offset,
l2_offset, j);
if (!preallocated) {
qcow2_free_clusters(bs, offset, s->cluster_size,
QCOW2_DISCARD_ALWAYS);
}
ret = -EIO;
goto fail;
}
ret = qcow2_pre_write_overlap_check(bs, 0, offset, s->cluster_size);
if (ret < 0) {
if (!preallocated) {
@@ -1765,7 +1718,7 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
goto fail;
}
ret = bdrv_write_zeroes(bs->file->bs, offset / BDRV_SECTOR_SIZE,
ret = bdrv_write_zeroes(bs->file, offset / BDRV_SECTOR_SIZE,
s->cluster_sectors, 0);
if (ret < 0) {
if (!preallocated) {
@@ -1785,10 +1738,14 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
if (is_active_l1) {
if (l2_dirty) {
qcow2_cache_entry_mark_dirty(bs, s->l2_table_cache, l2_table);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
qcow2_cache_depends_on_flush(s->l2_table_cache);
}
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
ret = qcow2_cache_put(bs, s->l2_table_cache, (void **)&l2_table);
if (ret < 0) {
l2_table = NULL;
goto fail;
}
} else {
if (l2_dirty) {
ret = qcow2_pre_write_overlap_check(bs,
@@ -1798,8 +1755,8 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
goto fail;
}
ret = bdrv_write(bs->file->bs, l2_offset / BDRV_SECTOR_SIZE,
(void *)l2_table, s->cluster_sectors);
ret = bdrv_write(bs->file, l2_offset / BDRV_SECTOR_SIZE,
(void *)l2_table, s->cluster_sectors);
if (ret < 0) {
goto fail;
}
@@ -1808,7 +1765,7 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
(*visited_l1_entries)++;
if (status_cb) {
status_cb(bs, *visited_l1_entries, l1_entries, cb_opaque);
status_cb(bs, *visited_l1_entries, l1_entries);
}
}
@@ -1819,7 +1776,12 @@ fail:
if (!is_active_l1) {
qemu_vfree(l2_table);
} else {
qcow2_cache_put(bs, s->l2_table_cache, (void **) &l2_table);
if (ret < 0) {
qcow2_cache_put(bs, s->l2_table_cache, (void **)&l2_table);
} else {
ret = qcow2_cache_put(bs, s->l2_table_cache,
(void **)&l2_table);
}
}
}
return ret;
@@ -1832,10 +1794,9 @@ fail:
* qcow2 version which doesn't yet support metadata zero clusters.
*/
int qcow2_expand_zero_clusters(BlockDriverState *bs,
BlockDriverAmendStatusCB *status_cb,
void *cb_opaque)
BlockDriverAmendStatusCB *status_cb)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
uint64_t *l1_table = NULL;
int64_t l1_entries = 0, visited_l1_entries = 0;
int ret;
@@ -1850,7 +1811,7 @@ int qcow2_expand_zero_clusters(BlockDriverState *bs,
ret = expand_zero_clusters_in_l1(bs, s->l1_table, s->l1_size,
&visited_l1_entries, l1_entries,
status_cb, cb_opaque);
status_cb);
if (ret < 0) {
goto fail;
}
@@ -1873,9 +1834,8 @@ int qcow2_expand_zero_clusters(BlockDriverState *bs,
l1_table = g_realloc(l1_table, l1_sectors * BDRV_SECTOR_SIZE);
ret = bdrv_read(bs->file->bs,
s->snapshots[i].l1_table_offset / BDRV_SECTOR_SIZE,
(void *)l1_table, l1_sectors);
ret = bdrv_read(bs->file, s->snapshots[i].l1_table_offset /
BDRV_SECTOR_SIZE, (void *)l1_table, l1_sectors);
if (ret < 0) {
goto fail;
}
@@ -1886,7 +1846,7 @@ int qcow2_expand_zero_clusters(BlockDriverState *bs,
ret = expand_zero_clusters_in_l1(bs, l1_table, s->snapshots[i].l1_size,
&visited_l1_entries, l1_entries,
status_cb, cb_opaque);
status_cb);
if (ret < 0) {
goto fail;
}
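(A rough sketch of the counting loop behind count_contiguous_free_clusters() and the QCOW2_CLUSTER_UNALLOCATED case above; the mask value and the simplifications are assumptions, see the comments.)

#include <stdint.h>

/* Assumed to match qcow2's L2E_OFFSET_MASK: the host-offset bits of an L2
 * entry (bits 9-55). */
#define L2E_OFFSET_MASK 0x00fffffffffffe00ULL

/* Rough sketch of what count_contiguous_free_clusters() /
 * count_contiguous_clusters_by_type(..., QCOW2_CLUSTER_UNALLOCATED) compute
 * above: how many consecutive L2 entries have no host offset assigned.
 * Simplified: entries are assumed already converted to host byte order and
 * the compressed/zero flag bits are ignored. */
static int count_unallocated(const uint64_t *l2_entries, int nb)
{
    int i;

    for (i = 0; i < nb; i++) {
        if (l2_entries[i] & L2E_OFFSET_MASK) {
            break;
        }
    }
    return i;
}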

File diff suppressed because it is too large.


@@ -22,17 +22,13 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu-common.h"
#include "block/block_int.h"
#include "block/qcow2.h"
#include "qemu/bswap.h"
#include "qemu/error-report.h"
#include "qemu/cutils.h"
void qcow2_free_snapshots(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i;
for(i = 0; i < s->nb_snapshots; i++) {
@@ -46,7 +42,7 @@ void qcow2_free_snapshots(BlockDriverState *bs)
int qcow2_read_snapshots(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
QCowSnapshot *sn;
@@ -67,7 +63,7 @@ int qcow2_read_snapshots(BlockDriverState *bs)
for(i = 0; i < s->nb_snapshots; i++) {
/* Read statically sized part of the snapshot header */
offset = align_offset(offset, 8);
ret = bdrv_pread(bs->file->bs, offset, &h, sizeof(h));
ret = bdrv_pread(bs->file, offset, &h, sizeof(h));
if (ret < 0) {
goto fail;
}
@@ -86,7 +82,7 @@ int qcow2_read_snapshots(BlockDriverState *bs)
name_size = be16_to_cpu(h.name_size);
/* Read extra data */
ret = bdrv_pread(bs->file->bs, offset, &extra,
ret = bdrv_pread(bs->file, offset, &extra,
MIN(sizeof(extra), extra_data_size));
if (ret < 0) {
goto fail;
@@ -105,7 +101,7 @@ int qcow2_read_snapshots(BlockDriverState *bs)
/* Read snapshot ID */
sn->id_str = g_malloc(id_str_size + 1);
ret = bdrv_pread(bs->file->bs, offset, sn->id_str, id_str_size);
ret = bdrv_pread(bs->file, offset, sn->id_str, id_str_size);
if (ret < 0) {
goto fail;
}
@@ -114,7 +110,7 @@ int qcow2_read_snapshots(BlockDriverState *bs)
/* Read snapshot name */
sn->name = g_malloc(name_size + 1);
ret = bdrv_pread(bs->file->bs, offset, sn->name, name_size);
ret = bdrv_pread(bs->file, offset, sn->name, name_size);
if (ret < 0) {
goto fail;
}
@@ -139,7 +135,7 @@ fail:
/* add at the end of the file a new list of snapshots */
static int qcow2_write_snapshots(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
@@ -217,25 +213,25 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
h.name_size = cpu_to_be16(name_size);
offset = align_offset(offset, 8);
ret = bdrv_pwrite(bs->file->bs, offset, &h, sizeof(h));
ret = bdrv_pwrite(bs->file, offset, &h, sizeof(h));
if (ret < 0) {
goto fail;
}
offset += sizeof(h);
ret = bdrv_pwrite(bs->file->bs, offset, &extra, sizeof(extra));
ret = bdrv_pwrite(bs->file, offset, &extra, sizeof(extra));
if (ret < 0) {
goto fail;
}
offset += sizeof(extra);
ret = bdrv_pwrite(bs->file->bs, offset, sn->id_str, id_str_size);
ret = bdrv_pwrite(bs->file, offset, sn->id_str, id_str_size);
if (ret < 0) {
goto fail;
}
offset += id_str_size;
ret = bdrv_pwrite(bs->file->bs, offset, sn->name, name_size);
ret = bdrv_pwrite(bs->file, offset, sn->name, name_size);
if (ret < 0) {
goto fail;
}
@@ -257,7 +253,7 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
header_data.nb_snapshots = cpu_to_be32(s->nb_snapshots);
header_data.snapshots_offset = cpu_to_be64(snapshots_offset);
ret = bdrv_pwrite_sync(bs->file->bs, offsetof(QCowHeader, nb_snapshots),
ret = bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, nb_snapshots),
&header_data, sizeof(header_data));
if (ret < 0) {
goto fail;
@@ -281,7 +277,7 @@ fail:
static void find_new_snapshot_id(BlockDriverState *bs,
char *id_str, int id_str_size)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
int i;
unsigned long id, id_max = 0;
@@ -299,7 +295,7 @@ static int find_snapshot_by_id_and_name(BlockDriverState *bs,
const char *id,
const char *name)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
int i;
if (id && name) {
@@ -341,7 +337,7 @@ static int find_snapshot_by_id_or_name(BlockDriverState *bs,
/* if no id is provided, a new one is constructed */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *new_snapshot_list = NULL;
QCowSnapshot *old_snapshot_list = NULL;
QCowSnapshot sn1, *sn = &sn1;
@@ -355,8 +351,10 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
memset(sn, 0, sizeof(*sn));
/* Generate an ID */
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
/* Generate an ID if it wasn't passed */
if (sn_info->id_str[0] == '\0') {
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
}
/* Check that the ID is unique */
if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {
@@ -399,7 +397,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
goto fail;
}
ret = bdrv_pwrite(bs->file->bs, sn->l1_table_offset, l1_table,
ret = bdrv_pwrite(bs->file, sn->l1_table_offset, l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
goto fail;
@@ -464,7 +462,7 @@ fail:
/* copy the snapshot 'snapshot_name' into the current disk image */
int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
int i, snapshot_index;
int cur_l1_bytes, sn_l1_bytes;
@@ -512,8 +510,7 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
goto fail;
}
ret = bdrv_pread(bs->file->bs, sn->l1_table_offset,
sn_l1_table, sn_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset, sn_l1_table, sn_l1_bytes);
if (ret < 0) {
goto fail;
}
@@ -530,7 +527,7 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
goto fail;
}
ret = bdrv_pwrite_sync(bs->file->bs, s->l1_table_offset, sn_l1_table,
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset, sn_l1_table,
cur_l1_bytes);
if (ret < 0) {
goto fail;
@@ -591,7 +588,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
const char *name,
Error **errp)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot sn;
int snapshot_index, ret;
@@ -654,7 +651,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
{
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QEMUSnapshotInfo *sn_tab, *sn_info;
QCowSnapshot *sn;
int i;
@@ -687,7 +684,7 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
Error **errp)
{
int i, snapshot_index;
BDRVQcow2State *s = bs->opaque;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
uint64_t *new_l1_table;
int new_l1_bytes;
@@ -705,19 +702,18 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
sn = &s->snapshots[snapshot_index];
/* Allocate and read in the snapshot's L1 table */
if (sn->l1_size > QCOW_MAX_L1_SIZE / sizeof(uint64_t)) {
if (sn->l1_size > QCOW_MAX_L1_SIZE) {
error_setg(errp, "Snapshot L1 table too large");
return -EFBIG;
}
new_l1_bytes = sn->l1_size * sizeof(uint64_t);
new_l1_table = qemu_try_blockalign(bs->file->bs,
new_l1_table = qemu_try_blockalign(bs->file,
align_offset(new_l1_bytes, 512));
if (new_l1_table == NULL) {
return -ENOMEM;
}
ret = bdrv_pread(bs->file->bs, sn->l1_table_offset,
new_l1_table, new_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
error_setg(errp, "Failed to read l1 table for snapshot");
qemu_vfree(new_l1_table);
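(The snapshot code above rounds offsets with align_offset() before every read and write of a snapshot header; a self-contained sketch of that helper follows, assuming n is a power of two as at all call sites shown.)

#include <assert.h>
#include <stdint.h>

/* Sketch of the align_offset() helper used throughout the snapshot code
 * above: round offset up to the next multiple of n, where n is a power of
 * two (8 or 512 at the call sites shown). Assumed to match qcow2's helper. */
static int64_t align_offset(int64_t offset, int n)
{
    offset = (offset + n - 1) & ~(int64_t)(n - 1);
    return offset;
}

int main(void)
{
    assert(align_offset(13, 8) == 16);
    assert(align_offset(16, 8) == 16);
    assert(align_offset(1000, 512) == 1024);
    return 0;
}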

File diff suppressed because it is too large.


@@ -25,8 +25,8 @@
#ifndef BLOCK_QCOW2_H
#define BLOCK_QCOW2_H
#include "crypto/cipher.h"
#include "qemu/coroutine.h"
#include "qemu/aes.h"
#include "block/coroutine.h"
//#define DEBUG_ALLOC
//#define DEBUG_ALLOC2
@@ -62,14 +62,11 @@
#define MIN_CLUSTER_BITS 9
#define MAX_CLUSTER_BITS 21
/* Must be at least 2 to cover COW */
#define MIN_L2_CACHE_SIZE 2 /* clusters */
#define MIN_L2_CACHE_SIZE 1 /* cluster */
/* Must be at least 4 to cover all cases of refcount table growth */
#define MIN_REFCOUNT_CACHE_SIZE 4 /* clusters */
/* Whichever is more */
#define DEFAULT_L2_CACHE_CLUSTERS 8 /* clusters */
#define DEFAULT_L2_CACHE_BYTE_SIZE 1048576 /* bytes */
/* The refblock cache needs only a fourth of the L2 cache size to cover as many
@@ -96,7 +93,6 @@
#define QCOW2_OPT_CACHE_SIZE "cache-size"
#define QCOW2_OPT_L2_CACHE_SIZE "l2-cache-size"
#define QCOW2_OPT_REFCOUNT_CACHE_SIZE "refcount-cache-size"
#define QCOW2_OPT_CACHE_CLEAN_INTERVAL "cache-clean-interval"
typedef struct QCowHeader {
uint32_t magic;
@@ -217,12 +213,7 @@ typedef struct Qcow2DiscardRegion {
QTAILQ_ENTRY(Qcow2DiscardRegion) next;
} Qcow2DiscardRegion;
typedef uint64_t Qcow2GetRefcountFunc(const void *refcount_array,
uint64_t index);
typedef void Qcow2SetRefcountFunc(void *refcount_array,
uint64_t index, uint64_t value);
typedef struct BDRVQcow2State {
typedef struct BDRVQcowState {
int cluster_bits;
int cluster_size;
int cluster_sectors;
@@ -240,8 +231,6 @@ typedef struct BDRVQcow2State {
Qcow2Cache* l2_table_cache;
Qcow2Cache* refcount_block_cache;
QEMUTimer *cache_clean_timer;
unsigned cache_clean_interval;
uint8_t *cluster_cache;
uint8_t *cluster_data;
@@ -256,8 +245,10 @@ typedef struct BDRVQcow2State {
CoMutex lock;
QCryptoCipher *cipher; /* current cipher, NULL if no key yet */
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
uint64_t snapshots_offset;
int snapshots_size;
unsigned int nb_snapshots;
@@ -267,11 +258,6 @@ typedef struct BDRVQcow2State {
int qcow_version;
bool use_lazy_refcounts;
int refcount_order;
int refcount_bits;
uint64_t refcount_max;
Qcow2GetRefcountFunc *get_refcount;
Qcow2SetRefcountFunc *set_refcount;
bool discard_passthrough[QCOW2_DISCARD_MAX];
@@ -287,13 +273,20 @@ typedef struct BDRVQcow2State {
QLIST_HEAD(, Qcow2UnknownHeaderExtension) unknown_header_ext;
QTAILQ_HEAD (, Qcow2DiscardRegion) discards;
bool cache_discards;
} BDRVQcowState;
/* Backing file path and format as stored in the image (this is not the
* effective path/format, which may be the result of a runtime option
* override) */
char *image_backing_file;
char *image_backing_format;
} BDRVQcow2State;
/* XXX: use std qcow open function ? */
typedef struct QCowCreateState {
int cluster_size;
int cluster_bits;
uint16_t *refcount_block;
uint64_t *refcount_table;
int64_t l1_table_offset;
int64_t refcount_table_offset;
int64_t refcount_block_offset;
} QCowCreateState;
struct QCowAIOCB;
typedef struct Qcow2COWRegion {
/**
@@ -403,28 +396,28 @@ typedef enum QCow2MetadataOverlap {
#define REFT_OFFSET_MASK 0xfffffffffffffe00ULL
static inline int64_t start_of_cluster(BDRVQcow2State *s, int64_t offset)
static inline int64_t start_of_cluster(BDRVQcowState *s, int64_t offset)
{
return offset & ~(s->cluster_size - 1);
}
static inline int64_t offset_into_cluster(BDRVQcow2State *s, int64_t offset)
static inline int64_t offset_into_cluster(BDRVQcowState *s, int64_t offset)
{
return offset & (s->cluster_size - 1);
}
static inline uint64_t size_to_clusters(BDRVQcow2State *s, uint64_t size)
static inline int size_to_clusters(BDRVQcowState *s, int64_t size)
{
return (size + (s->cluster_size - 1)) >> s->cluster_bits;
}
static inline int64_t size_to_l1(BDRVQcow2State *s, int64_t size)
static inline int64_t size_to_l1(BDRVQcowState *s, int64_t size)
{
int shift = s->cluster_bits + s->l2_bits;
return (size + (1ULL << shift) - 1) >> shift;
}
static inline int offset_to_l2_index(BDRVQcow2State *s, int64_t offset)
static inline int offset_to_l2_index(BDRVQcowState *s, int64_t offset)
{
return (offset >> s->cluster_bits) & (s->l2_size - 1);
}
@@ -435,12 +428,12 @@ static inline int64_t align_offset(int64_t offset, int n)
return offset;
}
static inline int64_t qcow2_vm_state_offset(BDRVQcow2State *s)
static inline int64_t qcow2_vm_state_offset(BDRVQcowState *s)
{
return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
}
static inline uint64_t qcow2_max_refcount_clusters(BDRVQcow2State *s)
static inline uint64_t qcow2_max_refcount_clusters(BDRVQcowState *s)
{
return QCOW_MAX_REFTABLE_SIZE >> s->cluster_bits;
}
@@ -459,7 +452,7 @@ static inline int qcow2_get_cluster_type(uint64_t l2_entry)
}
/* Check whether refcounts are eager or lazy */
static inline bool qcow2_need_accurate_refcounts(BDRVQcow2State *s)
static inline bool qcow2_need_accurate_refcounts(BDRVQcowState *s)
{
return !(s->incompatible_features & QCOW2_INCOMPAT_DIRTY);
}
@@ -475,11 +468,6 @@ static inline uint64_t l2meta_cow_end(QCowL2Meta *m)
+ (m->cow_end.nb_sectors << BDRV_SECTOR_BITS);
}
static inline uint64_t refcount_diff(uint64_t r1, uint64_t r2)
{
return r1 > r2 ? r1 - r2 : r2 - r1;
}
// FIXME Need qcow2_ prefix to global functions
/* qcow2.c functions */
@@ -499,16 +487,14 @@ void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
int qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs);
int qcow2_get_refcount(BlockDriverState *bs, int64_t cluster_index,
uint64_t *refcount);
int qcow2_get_refcount(BlockDriverState *bs, int64_t cluster_index);
int qcow2_update_cluster_refcount(BlockDriverState *bs, int64_t cluster_index,
uint64_t addend, bool decrease,
enum qcow2_discard_type type);
int addend, enum qcow2_discard_type type);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size);
int64_t qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int64_t nb_clusters);
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters);
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size,
@@ -529,19 +515,16 @@ int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
int qcow2_change_refcount_order(BlockDriverState *bs, int refcount_order,
BlockDriverAmendStatusCB *status_cb,
void *cb_opaque, Error **errp);
/* qcow2-cluster.c functions */
int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
bool exact_size);
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index);
void qcow2_l2_cache_reset(BlockDriverState *bs);
int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, bool enc, Error **errp);
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
uint8_t *out_buf, const uint8_t *in_buf,
int nb_sectors, int enc,
const AES_KEY *key);
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset);
@@ -557,8 +540,7 @@ int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors);
int qcow2_expand_zero_clusters(BlockDriverState *bs,
BlockDriverAmendStatusCB *status_cb,
void *cb_opaque);
BlockDriverAmendStatusCB *status_cb);
/* qcow2-snapshot.c functions */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);
@@ -580,20 +562,18 @@ int qcow2_read_snapshots(BlockDriverState *bs);
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables);
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c);
void qcow2_cache_entry_mark_dirty(BlockDriverState *bs, Qcow2Cache *c,
void *table);
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c,
Qcow2Cache *dependency);
void qcow2_cache_depends_on_flush(Qcow2Cache *c);
void qcow2_cache_clean_unused(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
void qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table);
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table);
#endif
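(The inline helpers above reduce to simple power-of-two arithmetic; a self-contained worked example with a fixed 64 KiB cluster size follows.)

#include <assert.h>
#include <stdint.h>

/* Standalone sketch of the cluster arithmetic helpers above, using a fixed
 * 64 KiB cluster size (cluster_bits = 16) instead of a BDRVQcow2State. */
enum { CLUSTER_BITS = 16, CLUSTER_SIZE = 1 << CLUSTER_BITS };

static int64_t start_of_cluster(int64_t offset)
{
    return offset & ~(int64_t)(CLUSTER_SIZE - 1);
}

static int64_t offset_into_cluster(int64_t offset)
{
    return offset & (CLUSTER_SIZE - 1);
}

static uint64_t size_to_clusters(uint64_t size)
{
    return (size + CLUSTER_SIZE - 1) >> CLUSTER_BITS;
}

int main(void)
{
    assert(start_of_cluster(0x12345) == 0x10000);
    assert(offset_into_cluster(0x12345) == 0x2345);
    assert(size_to_clusters(1) == 1);               /* rounds up */
    assert(size_to_clusters(2 * CLUSTER_SIZE) == 2);
    return 0;
}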


@@ -11,7 +11,6 @@
*
*/
#include "qemu/osdep.h"
#include "qed.h"
typedef struct {


@@ -12,7 +12,6 @@
*
*/
#include "qemu/osdep.h"
#include "qed.h"
/**


@@ -11,7 +11,6 @@
*
*/
#include "qemu/osdep.h"
#include "qed.h"
void *gencb_alloc(size_t len, BlockCompletionFunc *cb, void *opaque)


@@ -50,7 +50,6 @@
* table will be deleted in favor of the existing cache entry.
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "qed.h"


@@ -12,11 +12,9 @@
*
*/
#include "qemu/osdep.h"
#include "trace.h"
#include "qemu/sockets.h" /* for EINPROGRESS on Windows */
#include "qed.h"
#include "qemu/bswap.h"
typedef struct {
GenericCB gencb;
@@ -65,7 +63,7 @@ static void qed_read_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
read_table_cb->iov.iov_len = s->header.cluster_size * s->header.table_size,
qemu_iovec_init_external(qiov, &read_table_cb->iov, 1);
bdrv_aio_readv(s->bs->file->bs, offset / BDRV_SECTOR_SIZE, qiov,
bdrv_aio_readv(s->bs->file, offset / BDRV_SECTOR_SIZE, qiov,
qiov->size / BDRV_SECTOR_SIZE,
qed_read_table_cb, read_table_cb);
}
@@ -154,7 +152,7 @@ static void qed_write_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
/* Adjust for offset into table */
offset += start * sizeof(uint64_t);
bdrv_aio_writev(s->bs->file->bs, offset / BDRV_SECTOR_SIZE,
bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&write_table_cb->qiov,
write_table_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_write_table_cb, write_table_cb);


@@ -12,15 +12,11 @@
*
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/timer.h"
#include "qemu/bswap.h"
#include "trace.h"
#include "qed.h"
#include "qapi/qmp/qerror.h"
#include "migration/migration.h"
#include "sysemu/block-backend.h"
static const AIOCBInfo qed_aiocb_info = {
.aiocb_size = sizeof(QEDAIOCB),
@@ -86,7 +82,7 @@ int qed_write_header_sync(BDRVQEDState *s)
int ret;
qed_header_cpu_to_le(&s->header, &le);
ret = bdrv_pwrite(s->bs->file->bs, 0, &le, sizeof(le));
ret = bdrv_pwrite(s->bs->file, 0, &le, sizeof(le));
if (ret != sizeof(le)) {
return ret;
}
@@ -123,7 +119,7 @@ static void qed_write_header_read_cb(void *opaque, int ret)
/* Update header */
qed_header_cpu_to_le(&s->header, (QEDHeader *)write_header_cb->buf);
bdrv_aio_writev(s->bs->file->bs, 0, &write_header_cb->qiov,
bdrv_aio_writev(s->bs->file, 0, &write_header_cb->qiov,
write_header_cb->nsectors, qed_write_header_cb,
write_header_cb);
}
@@ -156,7 +152,7 @@ static void qed_write_header(BDRVQEDState *s, BlockCompletionFunc cb,
write_header_cb->iov.iov_len = len;
qemu_iovec_init_external(&write_header_cb->qiov, &write_header_cb->iov, 1);
bdrv_aio_readv(s->bs->file->bs, 0, &write_header_cb->qiov, nsectors,
bdrv_aio_readv(s->bs->file, 0, &write_header_cb->qiov, nsectors,
qed_write_header_read_cb, write_header_cb);
}
@@ -348,7 +344,7 @@ static void qed_start_need_check_timer(BDRVQEDState *s)
* migration.
*/
timer_mod(s->need_check_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
NANOSECONDS_PER_SECOND * QED_NEED_CHECK_TIMEOUT);
get_ticks_per_sec() * QED_NEED_CHECK_TIMEOUT);
}
/* It's okay to call this multiple times or when no timer is started */
@@ -358,6 +354,12 @@ static void qed_cancel_need_check_timer(BDRVQEDState *s)
timer_del(s->need_check_timer);
}
static void bdrv_qed_rebind(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
s->bs = bs;
}
static void bdrv_qed_detach_aio_context(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
@@ -390,7 +392,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
s->bs = bs;
QSIMPLEQ_INIT(&s->allocating_write_reqs);
ret = bdrv_pread(bs->file->bs, 0, &le_header, sizeof(le_header));
ret = bdrv_pread(bs->file, 0, &le_header, sizeof(le_header));
if (ret < 0) {
return ret;
}
@@ -402,8 +404,11 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
if (s->header.features & ~QED_FEATURE_MASK) {
/* image uses unsupported feature bits */
error_setg(errp, "Unsupported QED features: %" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
char buf[64];
snprintf(buf, sizeof(buf), "%" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bdrv_get_device_name(bs), "QED", buf);
return -ENOTSUP;
}
if (!qed_is_cluster_size_valid(s->header.cluster_size)) {
@@ -411,7 +416,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Round down file size to the last cluster */
file_size = bdrv_getlength(bs->file->bs);
file_size = bdrv_getlength(bs->file);
if (file_size < 0) {
return file_size;
}
@@ -431,14 +436,9 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
s->table_nelems = (s->header.cluster_size * s->header.table_size) /
sizeof(uint64_t);
s->l2_shift = ctz32(s->header.cluster_size);
s->l2_shift = ffs(s->header.cluster_size) - 1;
s->l2_mask = s->table_nelems - 1;
s->l1_shift = s->l2_shift + ctz32(s->table_nelems);
/* Header size calculation must not overflow uint32_t */
if (s->header.header_size > UINT32_MAX / s->header.cluster_size) {
return -EINVAL;
}
s->l1_shift = s->l2_shift + ffs(s->table_nelems) - 1;
if ((s->header.features & QED_F_BACKING_FILE)) {
if ((uint64_t)s->header.backing_filename_offset +
@@ -447,7 +447,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
return -EINVAL;
}
ret = qed_read_string(bs->file->bs, s->header.backing_filename_offset,
ret = qed_read_string(bs->file, s->header.backing_filename_offset,
s->header.backing_filename_size, bs->backing_file,
sizeof(bs->backing_file));
if (ret < 0) {
@@ -466,7 +466,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
* feature is no longer valid.
*/
if ((s->header.autoclear_features & ~QED_AUTOCLEAR_FEATURE_MASK) != 0 &&
!bdrv_is_read_only(bs->file->bs) && !(flags & BDRV_O_INACTIVE)) {
!bdrv_is_read_only(bs->file) && !(flags & BDRV_O_INCOMING)) {
s->header.autoclear_features &= QED_AUTOCLEAR_FEATURE_MASK;
ret = qed_write_header_sync(s);
@@ -475,7 +475,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
}
/* From here on only known autoclear feature bits are valid */
bdrv_flush(bs->file->bs);
bdrv_flush(bs->file);
}
s->l1_table = qed_alloc_table(s);
@@ -493,8 +493,8 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
* potentially inconsistent images to be opened read-only. This can
* aid data recovery from an otherwise inconsistent image.
*/
if (!bdrv_is_read_only(bs->file->bs) &&
!(flags & BDRV_O_INACTIVE)) {
if (!bdrv_is_read_only(bs->file) &&
!(flags & BDRV_O_INCOMING)) {
BdrvCheckResult result = {0};
ret = qed_check(s, &result, true);
@@ -536,7 +536,7 @@ static void bdrv_qed_close(BlockDriverState *bs)
bdrv_qed_detach_aio_context(bs);
/* Ensure writes reach stable storage */
bdrv_flush(bs->file->bs);
bdrv_flush(bs->file);
/* Clean shutdown, no check required on next open */
if (s->header.features & QED_F_NEED_CHECK) {
@@ -568,7 +568,7 @@ static int qed_create(const char *filename, uint32_t cluster_size,
size_t l1_size = header.cluster_size * header.table_size;
Error *local_err = NULL;
int ret = 0;
BlockBackend *blk;
BlockDriverState *bs;
ret = bdrv_create_file(filename, opts, &local_err);
if (ret < 0) {
@@ -576,17 +576,17 @@ static int qed_create(const char *filename, uint32_t cluster_size,
return ret;
}
blk = blk_new_open(filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, &local_err);
if (blk == NULL) {
bs = NULL;
ret = bdrv_open(&bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_PROTOCOL, NULL,
&local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return -EIO;
return ret;
}
blk_set_allow_write_beyond_eof(blk, true);
/* File must start empty and grow, check truncate is supported */
ret = blk_truncate(blk, 0);
ret = bdrv_truncate(bs, 0);
if (ret < 0) {
goto out;
}
@@ -602,18 +602,18 @@ static int qed_create(const char *filename, uint32_t cluster_size,
}
qed_header_cpu_to_le(&header, &le_header);
ret = blk_pwrite(blk, 0, &le_header, sizeof(le_header), 0);
ret = bdrv_pwrite(bs, 0, &le_header, sizeof(le_header));
if (ret < 0) {
goto out;
}
ret = blk_pwrite(blk, sizeof(le_header), backing_file,
header.backing_filename_size, 0);
ret = bdrv_pwrite(bs, sizeof(le_header), backing_file,
header.backing_filename_size);
if (ret < 0) {
goto out;
}
l1_table = g_malloc0(l1_size);
ret = blk_pwrite(blk, header.l1_table_offset, l1_table, l1_size, 0);
ret = bdrv_pwrite(bs, header.l1_table_offset, l1_table, l1_size);
if (ret < 0) {
goto out;
}
@@ -621,7 +621,7 @@ static int qed_create(const char *filename, uint32_t cluster_size,
ret = 0; /* success */
out:
g_free(l1_table);
blk_unref(blk);
bdrv_unref(bs);
return ret;
}
@@ -681,7 +681,6 @@ typedef struct {
uint64_t pos;
int64_t status;
int *pnum;
BlockDriverState **file;
} QEDIsAllocatedCB;
static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t len)
@@ -693,7 +692,6 @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
case QED_CLUSTER_FOUND:
offset |= qed_offset_into_cluster(s, cb->pos);
cb->status = BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | offset;
*cb->file = cb->bs->file->bs;
break;
case QED_CLUSTER_ZERO:
cb->status = BDRV_BLOCK_ZERO;
@@ -715,8 +713,7 @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
static int64_t coroutine_fn bdrv_qed_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum,
BlockDriverState **file)
int nb_sectors, int *pnum)
{
BDRVQEDState *s = bs->opaque;
size_t len = (size_t)nb_sectors * BDRV_SECTOR_SIZE;
@@ -725,7 +722,6 @@ static int64_t coroutine_fn bdrv_qed_co_get_block_status(BlockDriverState *bs,
.pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE,
.status = BDRV_BLOCK_OFFSET_MASK,
.pnum = pnum,
.file = file,
};
QEDRequest request = { .l2_table = NULL };
@@ -771,8 +767,8 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* If there is a backing file, get its length. Treat the absence of a
* backing file like a zero length backing file.
*/
if (s->bs->backing) {
int64_t l = bdrv_getlength(s->bs->backing->bs);
if (s->bs->backing_hd) {
int64_t l = bdrv_getlength(s->bs->backing_hd);
if (l < 0) {
cb(opaque, l);
return;
@@ -801,7 +797,7 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
qemu_iovec_concat(*backing_qiov, qiov, 0, size);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING_AIO);
bdrv_aio_readv(s->bs->backing->bs, pos / BDRV_SECTOR_SIZE,
bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
*backing_qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
}
@@ -838,7 +834,7 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
}
BLKDBG_EVENT(s->bs->file, BLKDBG_COW_WRITE);
bdrv_aio_writev(s->bs->file->bs, copy_cb->offset / BDRV_SECTOR_SIZE,
bdrv_aio_writev(s->bs->file, copy_cb->offset / BDRV_SECTOR_SIZE,
&copy_cb->qiov, copy_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_copy_from_backing_file_cb, copy_cb);
}
@@ -1054,7 +1050,7 @@ static void qed_aio_write_flush_before_l2_update(void *opaque, int ret)
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
if (!bdrv_aio_flush(s->bs->file->bs, qed_aio_write_l2_update_cb, opaque)) {
if (!bdrv_aio_flush(s->bs->file, qed_aio_write_l2_update_cb, opaque)) {
qed_aio_complete(acb, -EIO);
}
}
@@ -1080,7 +1076,7 @@ static void qed_aio_write_main(void *opaque, int ret)
if (acb->find_cluster_ret == QED_CLUSTER_FOUND) {
next_fn = qed_aio_next_io;
} else {
if (s->bs->backing) {
if (s->bs->backing_hd) {
next_fn = qed_aio_write_flush_before_l2_update;
} else {
next_fn = qed_aio_write_l2_update_cb;
@@ -1088,7 +1084,7 @@ static void qed_aio_write_main(void *opaque, int ret)
}
BLKDBG_EVENT(s->bs->file, BLKDBG_WRITE_AIO);
bdrv_aio_writev(s->bs->file->bs, offset / BDRV_SECTOR_SIZE,
bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
next_fn, acb);
}
@@ -1138,7 +1134,7 @@ static void qed_aio_write_prefill(void *opaque, int ret)
static bool qed_should_set_need_check(BDRVQEDState *s)
{
/* The flush before L2 update path ensures consistency */
if (s->bs->backing) {
if (s->bs->backing_hd) {
return false;
}
@@ -1320,7 +1316,7 @@ static void qed_aio_read_data(void *opaque, int ret,
}
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
bdrv_aio_readv(bs->file->bs, offset / BDRV_SECTOR_SIZE,
bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
qed_aio_next_io, acb);
return;
@@ -1442,7 +1438,7 @@ static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
struct iovec iov;
/* Refuse if there are untouched backing file sectors */
if (bs->backing) {
if (bs->backing_hd) {
if (qed_offset_into_cluster(s, sector_num * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
@@ -1579,7 +1575,7 @@ static int bdrv_qed_change_backing_file(BlockDriverState *bs,
}
/* Write new header */
ret = bdrv_pwrite_sync(bs->file->bs, 0, buffer, buffer_len);
ret = bdrv_pwrite_sync(bs->file, 0, buffer, buffer_len);
g_free(buffer);
if (ret == 0) {
memcpy(&s->header, &new_header, sizeof(new_header));
@@ -1595,11 +1591,18 @@ static void bdrv_qed_invalidate_cache(BlockDriverState *bs, Error **errp)
bdrv_qed_close(bs);
bdrv_invalidate_cache(bs->file, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
memset(s, 0, sizeof(BDRVQEDState));
ret = bdrv_qed_open(bs, NULL, bs->open_flags, &local_err);
if (local_err) {
error_propagate(errp, local_err);
error_prepend(errp, "Could not reopen qed layer: ");
error_setg(errp, "Could not reopen qed layer: %s",
error_get_pretty(local_err));
error_free(local_err);
return;
} else if (ret < 0) {
error_setg_errno(errp, -ret, "Could not reopen qed layer");
@@ -1656,6 +1659,7 @@ static BlockDriver bdrv_qed = {
.supports_backing = true,
.bdrv_probe = bdrv_qed_probe,
.bdrv_rebind = bdrv_qed_rebind,
.bdrv_open = bdrv_qed_open,
.bdrv_close = bdrv_qed_close,
.bdrv_reopen_prepare = bdrv_qed_reopen_prepare,

View File

@@ -16,7 +16,6 @@
#define BLOCK_QED_H
#include "block/block_int.h"
#include "qemu/cutils.h"
/* The layout of a QED file is as follows:
*
@@ -134,6 +133,7 @@ typedef struct QEDAIOCB {
int bh_ret; /* final return status for completion bh */
QSIMPLEQ_ENTRY(QEDAIOCB) next; /* next request */
int flags; /* QED_AIOCB_* bits ORed together */
bool *finished; /* signal for cancel completion */
uint64_t end_pos; /* request end on block device, in bytes */
/* User scatter-gather list */

View File

@@ -13,18 +13,16 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu/cutils.h"
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qlist.h"
#include "qapi/qmp/qstring.h"
#include "qapi-event.h"
#include "crypto/hash.h"
#define HASH_LENGTH 32
@@ -35,7 +33,7 @@
/* This union holds a vote hash value */
typedef union QuorumVoteValue {
uint8_t h[HASH_LENGTH]; /* SHA-256 hash */
char h[HASH_LENGTH]; /* SHA-256 hash */
int64_t l; /* simpler 64 bits hash */
} QuorumVoteValue;
@@ -66,11 +64,8 @@ typedef struct QuorumVotes {
/* the following structure holds the state of one quorum instance */
typedef struct BDRVQuorumState {
BdrvChild **children; /* children BlockDriverStates */
BlockDriverState **bs; /* children BlockDriverStates */
int num_children; /* children count */
unsigned next_child_index; /* the index of the next child that should
* be added
*/
int threshold; /* if less than threshold children reads gave the
* same result a quorum error occurs.
*/
@@ -219,21 +214,22 @@ static QuorumAIOCB *quorum_aio_get(BDRVQuorumState *s,
return acb;
}
static void quorum_report_bad(QuorumOpType type, uint64_t sector_num,
int nb_sectors, char *node_name, int ret)
static void quorum_report_bad(QuorumAIOCB *acb, char *node_name, int ret)
{
const char *msg = NULL;
if (ret < 0) {
msg = strerror(-ret);
}
qapi_event_send_quorum_report_bad(type, !!msg, msg, node_name,
sector_num, nb_sectors, &error_abort);
qapi_event_send_quorum_report_bad(!!msg, msg, node_name,
acb->sector_num, acb->nb_sectors, &error_abort);
}
static void quorum_report_failure(QuorumAIOCB *acb)
{
const char *reference = bdrv_get_device_or_node_name(acb->common.bs);
const char *reference = bdrv_get_device_name(acb->common.bs)[0] ?
bdrv_get_device_name(acb->common.bs) :
acb->common.bs->node_name;
qapi_event_send_quorum_failure(reference, acb->sector_num,
acb->nb_sectors, &error_abort);
}
@@ -290,19 +286,9 @@ static void quorum_aio_cb(void *opaque, int ret)
BDRVQuorumState *s = acb->common.bs->opaque;
bool rewrite = false;
if (ret == 0) {
acb->success_count++;
} else {
QuorumOpType type;
type = acb->is_read ? QUORUM_OP_TYPE_READ : QUORUM_OP_TYPE_WRITE;
quorum_report_bad(type, acb->sector_num, acb->nb_sectors,
sacb->aiocb->bs->node_name, ret);
}
if (acb->is_read && s->read_pattern == QUORUM_READ_PATTERN_FIFO) {
/* We try to read next child in FIFO order if we fail to read */
if (ret < 0 && (acb->child_iter + 1) < s->num_children) {
acb->child_iter++;
if (ret < 0 && ++acb->child_iter < s->num_children) {
read_fifo_child(acb);
return;
}
@@ -317,6 +303,11 @@ static void quorum_aio_cb(void *opaque, int ret)
sacb->ret = ret;
acb->count++;
if (ret == 0) {
acb->success_count++;
} else {
quorum_report_bad(acb, sacb->aiocb->bs->node_name, ret);
}
assert(acb->count <= s->num_children);
assert(acb->success_count <= s->num_children);
if (acb->count < s->num_children) {
@@ -348,9 +339,7 @@ static void quorum_report_bad_versions(BDRVQuorumState *s,
continue;
}
QLIST_FOREACH(item, &version->items, next) {
quorum_report_bad(QUORUM_OP_TYPE_READ, acb->sector_num,
acb->nb_sectors,
s->children[item->index]->bs->node_name, 0);
quorum_report_bad(acb, s->bs[item->index]->node_name, 0);
}
}
}
@@ -383,9 +372,8 @@ static bool quorum_rewrite_bad_versions(BDRVQuorumState *s, QuorumAIOCB *acb,
continue;
}
QLIST_FOREACH(item, &version->items, next) {
bdrv_aio_writev(s->children[item->index]->bs, acb->sector_num,
acb->qiov, acb->nb_sectors, quorum_rewrite_aio_cb,
acb);
bdrv_aio_writev(s->bs[item->index], acb->sector_num, acb->qiov,
acb->nb_sectors, quorum_rewrite_aio_cb, acb);
}
}
@@ -442,21 +430,25 @@ static void quorum_free_vote_list(QuorumVotes *votes)
static int quorum_compute_hash(QuorumAIOCB *acb, int i, QuorumVoteValue *hash)
{
int j, ret;
gnutls_hash_hd_t dig;
QEMUIOVector *qiov = &acb->qcrs[i].qiov;
size_t len = sizeof(hash->h);
uint8_t *data = hash->h;
/* XXX - would be nice if we could pass in the Error **
* and propagate that back, but this quorum code is
* restricted to just errno values currently */
if (qcrypto_hash_bytesv(QCRYPTO_HASH_ALG_SHA256,
qiov->iov, qiov->niov,
&data, &len,
NULL) < 0) {
return -EINVAL;
ret = gnutls_hash_init(&dig, GNUTLS_DIG_SHA256);
if (ret < 0) {
return ret;
}
return 0;
for (j = 0; j < qiov->niov; j++) {
ret = gnutls_hash(dig, qiov->iov[j].iov_base, qiov->iov[j].iov_len);
if (ret < 0) {
break;
}
}
gnutls_hash_deinit(dig, (void *) hash);
return ret;
}
static QuorumVoteVersion *quorum_get_vote_winner(QuorumVotes *votes)
@@ -654,15 +646,14 @@ static BlockAIOCB *read_quorum_children(QuorumAIOCB *acb)
int i;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].buf = qemu_blockalign(s->children[i]->bs, acb->qiov->size);
acb->qcrs[i].buf = qemu_blockalign(s->bs[i], acb->qiov->size);
qemu_iovec_init(&acb->qcrs[i].qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->qcrs[i].qiov, acb->qiov, acb->qcrs[i].buf);
}
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].aiocb = bdrv_aio_readv(s->children[i]->bs, acb->sector_num,
&acb->qcrs[i].qiov, acb->nb_sectors,
quorum_aio_cb, &acb->qcrs[i]);
bdrv_aio_readv(s->bs[i], acb->sector_num, &acb->qcrs[i].qiov,
acb->nb_sectors, quorum_aio_cb, &acb->qcrs[i]);
}
return &acb->common;
@@ -672,15 +663,14 @@ static BlockAIOCB *read_fifo_child(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
acb->qcrs[acb->child_iter].buf =
qemu_blockalign(s->children[acb->child_iter]->bs, acb->qiov->size);
acb->qcrs[acb->child_iter].buf = qemu_blockalign(s->bs[acb->child_iter],
acb->qiov->size);
qemu_iovec_init(&acb->qcrs[acb->child_iter].qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->qcrs[acb->child_iter].qiov, acb->qiov,
acb->qcrs[acb->child_iter].buf);
acb->qcrs[acb->child_iter].aiocb =
bdrv_aio_readv(s->children[acb->child_iter]->bs, acb->sector_num,
&acb->qcrs[acb->child_iter].qiov, acb->nb_sectors,
quorum_aio_cb, &acb->qcrs[acb->child_iter]);
bdrv_aio_readv(s->bs[acb->child_iter], acb->sector_num,
&acb->qcrs[acb->child_iter].qiov, acb->nb_sectors,
quorum_aio_cb, &acb->qcrs[acb->child_iter]);
return &acb->common;
}
@@ -719,8 +709,8 @@ static BlockAIOCB *quorum_aio_writev(BlockDriverState *bs,
int i;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].aiocb = bdrv_aio_writev(s->children[i]->bs, sector_num,
qiov, nb_sectors, &quorum_aio_cb,
acb->qcrs[i].aiocb = bdrv_aio_writev(s->bs[i], sector_num, qiov,
nb_sectors, &quorum_aio_cb,
&acb->qcrs[i]);
}
@@ -734,12 +724,12 @@ static int64_t quorum_getlength(BlockDriverState *bs)
int i;
/* check that all file have the same length */
result = bdrv_getlength(s->children[0]->bs);
result = bdrv_getlength(s->bs[0]);
if (result < 0) {
return result;
}
for (i = 1; i < s->num_children; i++) {
int64_t value = bdrv_getlength(s->children[i]->bs);
int64_t value = bdrv_getlength(s->bs[i]);
if (value < 0) {
return value;
}
@@ -751,6 +741,21 @@ static int64_t quorum_getlength(BlockDriverState *bs)
return result;
}
static void quorum_invalidate_cache(BlockDriverState *bs, Error **errp)
{
BDRVQuorumState *s = bs->opaque;
Error *local_err = NULL;
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_invalidate_cache(s->bs[i], &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
}
static coroutine_fn int quorum_co_flush(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
@@ -759,30 +764,19 @@ static coroutine_fn int quorum_co_flush(BlockDriverState *bs)
QuorumVoteValue result_value;
int i;
int result = 0;
int success_count = 0;
QLIST_INIT(&error_votes.vote_list);
error_votes.compare = quorum_64bits_compare;
for (i = 0; i < s->num_children; i++) {
result = bdrv_co_flush(s->children[i]->bs);
if (result) {
quorum_report_bad(QUORUM_OP_TYPE_FLUSH, 0,
bdrv_nb_sectors(s->children[i]->bs),
s->children[i]->bs->node_name, result);
result_value.l = result;
quorum_count_vote(&error_votes, &result_value, i);
} else {
success_count++;
}
result = bdrv_co_flush(s->bs[i]);
result_value.l = result;
quorum_count_vote(&error_votes, &result_value, i);
}
if (success_count >= s->threshold) {
result = 0;
} else {
winner = quorum_get_vote_winner(&error_votes);
result = winner->value.l;
}
winner = quorum_get_vote_winner(&error_votes);
result = winner->value.l;
quorum_free_vote_list(&error_votes);
return result;
@@ -795,7 +789,7 @@ static bool quorum_recurse_is_first_non_filter(BlockDriverState *bs,
int i;
for (i = 0; i < s->num_children; i++) {
bool perm = bdrv_recurse_is_first_non_filter(s->children[i]->bs,
bool perm = bdrv_recurse_is_first_non_filter(s->bs[i],
candidate);
if (perm) {
return true;
@@ -809,8 +803,8 @@ static int quorum_valid_threshold(int threshold, int num_children, Error **errp)
{
if (threshold < 1) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"vote-threshold", "value >= 1");
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"vote-threshold", "value >= 1");
return -ERANGE;
}
@@ -859,7 +853,7 @@ static int parse_read_pattern(const char *opt)
return QUORUM_READ_PATTERN_QUORUM;
}
for (i = 0; i < QUORUM_READ_PATTERN__MAX; i++) {
for (i = 0; i < QUORUM_READ_PATTERN_MAX; i++) {
if (!strcmp(opt, QuorumReadPattern_lookup[i])) {
return i;
}
@@ -875,21 +869,28 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
Error *local_err = NULL;
QemuOpts *opts = NULL;
bool *opened;
QDict *sub = NULL;
QList *list = NULL;
const QListEntry *lentry;
int i;
int ret = 0;
qdict_flatten(options);
qdict_extract_subqdict(options, &sub, "children.");
qdict_array_split(sub, &list);
/* count how many different children are present */
s->num_children = qdict_array_entries(options, "children.");
if (s->num_children < 0) {
error_setg(&local_err, "Option children is not a valid array");
if (qdict_size(sub)) {
error_setg(&local_err, "Invalid option children.%s",
qdict_first(sub)->key);
ret = -EINVAL;
goto exit;
}
if (s->num_children < 1) {
/* count how many different children are present */
s->num_children = qlist_size(list);
if (s->num_children < 2) {
error_setg(&local_err,
"Number of provided children must be 1 or more");
"Number of provided children must be greater than 1");
ret = -EINVAL;
goto exit;
}
@@ -902,12 +903,6 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
}
s->threshold = qemu_opt_get_number(opts, QUORUM_OPT_VOTE_THRESHOLD, 0);
/* and validate it against s->num_children */
ret = quorum_valid_threshold(s->threshold, s->num_children, &local_err);
if (ret < 0) {
goto exit;
}
ret = parse_read_pattern(qemu_opt_get(opts, QUORUM_OPT_READ_PATTERN));
if (ret < 0) {
error_setg(&local_err, "Please set read-pattern as fifo or quorum");
@@ -916,6 +911,12 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
s->read_pattern = ret;
if (s->read_pattern == QUORUM_READ_PATTERN_QUORUM) {
/* and validate it against s->num_children */
ret = quorum_valid_threshold(s->threshold, s->num_children, &local_err);
if (ret < 0) {
goto exit;
}
/* is the driver in blkverify mode */
if (qemu_opt_get_bool(opts, QUORUM_OPT_BLKVERIFY, false) &&
s->num_children == 2 && s->threshold == 2) {
@@ -935,25 +936,43 @@ static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
}
}
/* allocate the children array */
s->children = g_new0(BdrvChild *, s->num_children);
/* allocate the children BlockDriverState array */
s->bs = g_new0(BlockDriverState *, s->num_children);
opened = g_new0(bool, s->num_children);
for (i = 0; i < s->num_children; i++) {
char indexstr[32];
ret = snprintf(indexstr, 32, "children.%d", i);
assert(ret < 32);
for (i = 0, lentry = qlist_first(list); lentry;
lentry = qlist_next(lentry), i++) {
QDict *d;
QString *string;
s->children[i] = bdrv_open_child(NULL, options, indexstr, bs,
&child_format, false, &local_err);
if (local_err) {
ret = -EINVAL;
goto close_exit;
switch (qobject_type(lentry->value))
{
/* List of options */
case QTYPE_QDICT:
d = qobject_to_qdict(lentry->value);
QINCREF(d);
ret = bdrv_open(&s->bs[i], NULL, NULL, d, flags, NULL,
&local_err);
break;
/* QMP reference */
case QTYPE_QSTRING:
string = qobject_to_qstring(lentry->value);
ret = bdrv_open(&s->bs[i], NULL, qstring_get_str(string), NULL,
flags, NULL, &local_err);
break;
default:
error_setg(&local_err, "Specification of child block device %i "
"is invalid", i);
ret = -EINVAL;
}
if (ret < 0) {
goto close_exit;
}
opened[i] = true;
}
s->next_child_index = s->num_children;
g_free(opened);
goto exit;
@@ -964,9 +983,9 @@ close_exit:
if (!opened[i]) {
continue;
}
bdrv_unref_child(bs, s->children[i]);
bdrv_unref(s->bs[i]);
}
g_free(s->children);
g_free(s->bs);
g_free(opened);
exit:
qemu_opts_del(opts);
@@ -974,6 +993,8 @@ exit:
if (local_err) {
error_propagate(errp, local_err);
}
QDECREF(list);
QDECREF(sub);
return ret;
}
@@ -983,79 +1004,34 @@ static void quorum_close(BlockDriverState *bs)
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_unref_child(bs, s->children[i]);
bdrv_unref(s->bs[i]);
}
g_free(s->children);
g_free(s->bs);
}
static void quorum_add_child(BlockDriverState *bs, BlockDriverState *child_bs,
Error **errp)
{
BDRVQuorumState *s = bs->opaque;
BdrvChild *child;
char indexstr[32];
int ret;
assert(s->num_children <= INT_MAX / sizeof(BdrvChild *));
if (s->num_children == INT_MAX / sizeof(BdrvChild *) ||
s->next_child_index == UINT_MAX) {
error_setg(errp, "Too many children");
return;
}
ret = snprintf(indexstr, 32, "children.%u", s->next_child_index);
if (ret < 0 || ret >= 32) {
error_setg(errp, "cannot generate child name");
return;
}
s->next_child_index++;
bdrv_drained_begin(bs);
/* We can safely add the child now */
bdrv_ref(child_bs);
child = bdrv_attach_child(bs, child_bs, indexstr, &child_format);
s->children = g_renew(BdrvChild *, s->children, s->num_children + 1);
s->children[s->num_children++] = child;
bdrv_drained_end(bs);
}
static void quorum_del_child(BlockDriverState *bs, BdrvChild *child,
Error **errp)
static void quorum_detach_aio_context(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
if (s->children[i] == child) {
break;
}
bdrv_detach_aio_context(s->bs[i]);
}
/* we have checked it in bdrv_del_child() */
assert(i < s->num_children);
if (s->num_children <= s->threshold) {
error_setg(errp,
"The number of children cannot be lower than the vote threshold %d",
s->threshold);
return;
}
bdrv_drained_begin(bs);
/* We can safely remove this child now */
memmove(&s->children[i], &s->children[i + 1],
(s->num_children - i - 1) * sizeof(BdrvChild *));
s->children = g_renew(BdrvChild *, s->children, --s->num_children);
bdrv_unref_child(bs, child);
bdrv_drained_end(bs);
}
static void quorum_refresh_filename(BlockDriverState *bs, QDict *options)
static void quorum_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_attach_aio_context(s->bs[i], new_context);
}
}
static void quorum_refresh_filename(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
QDict *opts;
@@ -1063,17 +1039,16 @@ static void quorum_refresh_filename(BlockDriverState *bs, QDict *options)
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_refresh_filename(s->children[i]->bs);
if (!s->children[i]->bs->full_open_options) {
bdrv_refresh_filename(s->bs[i]);
if (!s->bs[i]->full_open_options) {
return;
}
}
children = qlist_new();
for (i = 0; i < s->num_children; i++) {
QINCREF(s->children[i]->bs->full_open_options);
qlist_append_obj(children,
QOBJECT(s->children[i]->bs->full_open_options));
QINCREF(s->bs[i]->full_open_options);
qlist_append_obj(children, QOBJECT(s->bs[i]->full_open_options));
}
opts = qdict_new();
@@ -1081,9 +1056,9 @@ static void quorum_refresh_filename(BlockDriverState *bs, QDict *options)
qdict_put_obj(opts, QUORUM_OPT_VOTE_THRESHOLD,
QOBJECT(qint_from_int(s->threshold)));
qdict_put_obj(opts, QUORUM_OPT_BLKVERIFY,
QOBJECT(qbool_from_bool(s->is_blkverify)));
QOBJECT(qbool_from_int(s->is_blkverify)));
qdict_put_obj(opts, QUORUM_OPT_REWRITE,
QOBJECT(qbool_from_bool(s->rewrite_corrupted)));
QOBJECT(qbool_from_int(s->rewrite_corrupted)));
qdict_put_obj(opts, "children", QOBJECT(children));
bs->full_open_options = opts;
@@ -1105,9 +1080,10 @@ static BlockDriver bdrv_quorum = {
.bdrv_aio_readv = quorum_aio_readv,
.bdrv_aio_writev = quorum_aio_writev,
.bdrv_invalidate_cache = quorum_invalidate_cache,
.bdrv_add_child = quorum_add_child,
.bdrv_del_child = quorum_del_child,
.bdrv_detach_aio_context = quorum_detach_aio_context,
.bdrv_attach_aio_context = quorum_attach_aio_context,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = quorum_recurse_is_first_non_filter,
@@ -1115,10 +1091,6 @@ static BlockDriver bdrv_quorum = {
static void bdrv_quorum_init(void)
{
if (!qcrypto_hash_supports(QCRYPTO_HASH_ALG_SHA256)) {
/* SHA256 hash support is required for quorum device */
return;
}
bdrv_register(&bdrv_quorum);
}

View File

@@ -15,8 +15,6 @@
#ifndef QEMU_RAW_AIO_H
#define QEMU_RAW_AIO_H
#include "qemu/iov.h"
/* AIO request types */
#define QEMU_AIO_READ 0x0001
#define QEMU_AIO_WRITE 0x0002
@@ -35,16 +33,15 @@
/* linux-aio.c - Linux native implementation */
#ifdef CONFIG_LINUX_AIO
typedef struct LinuxAioState LinuxAioState;
LinuxAioState *laio_init(void);
void laio_cleanup(LinuxAioState *s);
BlockAIOCB *laio_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
void *laio_init(void);
void laio_cleanup(void *s);
BlockAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque, int type);
void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s);
void laio_detach_aio_context(void *s, AioContext *old_context);
void laio_attach_aio_context(void *s, AioContext *new_context);
void laio_io_plug(BlockDriverState *bs, void *aio_ctx);
int laio_io_unplug(BlockDriverState *bs, void *aio_ctx, bool unplug);
#endif
#ifdef _WIN32

File diff suppressed because it is too large.

View File

@@ -21,9 +21,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/cutils.h"
#include "qemu-common.h"
#include "qemu/timer.h"
#include "block/block_int.h"
#include "qemu/module.h"
@@ -31,7 +29,6 @@
#include "trace.h"
#include "block/thread-pool.h"
#include "qemu/iov.h"
#include "qapi/qmp/qstring.h"
#include <windows.h>
#include <winioctl.h>
@@ -104,7 +101,7 @@ static int aio_worker(void *arg)
switch (aiocb->aio_type & QEMU_AIO_TYPE_MASK) {
case QEMU_AIO_READ:
count = handle_aiocb_rw(aiocb);
if (count < aiocb->aio_nbytes) {
if (count < aiocb->aio_nbytes && aiocb->bs->growable) {
/* A short read means that we have reached EOF. Pad the buffer
* with zeros for bytes after EOF. */
iov_memset(aiocb->aio_iov, aiocb->aio_niov, count,
@@ -121,9 +118,9 @@ static int aio_worker(void *arg)
case QEMU_AIO_WRITE:
count = handle_aiocb_rw(aiocb);
if (count == aiocb->aio_nbytes) {
ret = 0;
count = 0;
} else {
ret = -EINVAL;
count = -EINVAL;
}
break;
case QEMU_AIO_FLUSH:
@@ -137,7 +134,7 @@ static int aio_worker(void *arg)
break;
}
g_free(aiocb);
g_slice_free(RawWin32AIOData, aiocb);
return ret;
}
@@ -145,7 +142,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque, int type)
{
RawWin32AIOData *acb = g_new(RawWin32AIOData, 1);
RawWin32AIOData *acb = g_slice_new(RawWin32AIOData);
ThreadPool *pool;
acb->bs = bs;

View File

@@ -26,9 +26,7 @@
* IN THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "block/block_int.h"
#include "qapi/error.h"
#include "qemu/option.h"
static QemuOptsList raw_create_opts = {
@@ -54,75 +52,21 @@ static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);
return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
}
static int coroutine_fn
raw_co_writev_flags(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
QEMUIOVector *qiov, int flags)
static int coroutine_fn raw_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
void *buf = NULL;
BlockDriver *drv;
QEMUIOVector local_qiov;
int ret;
if (bs->probed && sector_num == 0) {
/* As long as these conditions are true, we can't get partial writes to
* the probe buffer and can just directly check the request. */
QEMU_BUILD_BUG_ON(BLOCK_PROBE_BUF_SIZE != 512);
QEMU_BUILD_BUG_ON(BDRV_SECTOR_SIZE != 512);
if (nb_sectors == 0) {
/* qemu_iovec_to_buf() would fail, but we want to return success
* instead of -EINVAL in this case. */
return 0;
}
buf = qemu_try_blockalign(bs->file->bs, 512);
if (!buf) {
ret = -ENOMEM;
goto fail;
}
ret = qemu_iovec_to_buf(qiov, 0, buf, 512);
if (ret != 512) {
ret = -EINVAL;
goto fail;
}
drv = bdrv_probe_all(buf, 512, NULL);
if (drv != bs->drv) {
ret = -EPERM;
goto fail;
}
/* Use the checked buffer, a malicious guest might be overwriting its
* original buffer in the background. */
qemu_iovec_init(&local_qiov, qiov->niov + 1);
qemu_iovec_add(&local_qiov, buf, 512);
qemu_iovec_concat(&local_qiov, qiov, 512, qiov->size - 512);
qiov = &local_qiov;
}
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = bdrv_co_pwritev(bs->file->bs, sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE, qiov, flags);
fail:
if (qiov == &local_qiov) {
qemu_iovec_destroy(&local_qiov);
}
qemu_vfree(buf);
return ret;
return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
}
static int64_t coroutine_fn raw_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum,
BlockDriverState **file)
int nb_sectors, int *pnum)
{
*pnum = nb_sectors;
*file = bs->file->bs;
return BDRV_BLOCK_RAW | BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_DATA |
(sector_num << BDRV_SECTOR_BITS);
}
@@ -131,48 +75,58 @@ static int coroutine_fn raw_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BdrvRequestFlags flags)
{
return bdrv_co_write_zeroes(bs->file->bs, sector_num, nb_sectors, flags);
return bdrv_co_write_zeroes(bs->file, sector_num, nb_sectors, flags);
}
static int coroutine_fn raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
return bdrv_co_discard(bs->file->bs, sector_num, nb_sectors);
return bdrv_co_discard(bs->file, sector_num, nb_sectors);
}
static int64_t raw_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file->bs);
return bdrv_getlength(bs->file);
}
static int raw_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
return bdrv_get_info(bs->file->bs, bdi);
return bdrv_get_info(bs->file, bdi);
}
static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
{
bs->bl = bs->file->bs->bl;
bs->bl = bs->file->bl;
}
static int raw_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file->bs, offset);
return bdrv_truncate(bs->file, offset);
}
static int raw_is_inserted(BlockDriverState *bs)
{
return bdrv_is_inserted(bs->file);
}
static int raw_media_changed(BlockDriverState *bs)
{
return bdrv_media_changed(bs->file->bs);
return bdrv_media_changed(bs->file);
}
static void raw_eject(BlockDriverState *bs, bool eject_flag)
{
bdrv_eject(bs->file->bs, eject_flag);
bdrv_eject(bs->file, eject_flag);
}
static void raw_lock_medium(BlockDriverState *bs, bool locked)
{
bdrv_lock_medium(bs->file->bs, locked);
bdrv_lock_medium(bs->file, locked);
}
static int raw_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
{
return bdrv_ioctl(bs->file, req, buf);
}
static BlockAIOCB *raw_aio_ioctl(BlockDriverState *bs,
@@ -180,12 +134,12 @@ static BlockAIOCB *raw_aio_ioctl(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
return bdrv_aio_ioctl(bs->file->bs, req, buf, cb, opaque);
return bdrv_aio_ioctl(bs->file, req, buf, cb, opaque);
}
static int raw_has_zero_init(BlockDriverState *bs)
{
return bdrv_has_zero_init(bs->file->bs);
return bdrv_has_zero_init(bs->file);
}
static int raw_create(const char *filename, QemuOpts *opts, Error **errp)
@@ -203,21 +157,7 @@ static int raw_create(const char *filename, QemuOpts *opts, Error **errp)
static int raw_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
bs->sg = bs->file->bs->sg;
bs->supported_write_flags = BDRV_REQ_FUA;
bs->supported_zero_flags = BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP;
if (bs->probed && !bdrv_is_read_only(bs)) {
fprintf(stderr,
"WARNING: Image format was not specified for '%s' and probing "
"guessed raw.\n"
" Automatically detecting the format is dangerous for "
"raw images, write operations on block 0 will be restricted.\n"
" Specify the 'raw' format explicitly to remove the "
"restrictions.\n",
bs->file->bs->filename);
}
bs->sg = bs->file->sg;
return 0;
}
@@ -233,16 +173,6 @@ static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
return 1;
}
static int raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
{
return bdrv_probe_blocksizes(bs->file->bs, bsz);
}
static int raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
{
return bdrv_probe_geometry(bs->file->bs, geo);
}
BlockDriver bdrv_raw = {
.format_name = "raw",
.bdrv_probe = &raw_probe,
@@ -251,7 +181,7 @@ BlockDriver bdrv_raw = {
.bdrv_close = &raw_close,
.bdrv_create = &raw_create,
.bdrv_co_readv = &raw_co_readv,
.bdrv_co_writev_flags = &raw_co_writev_flags,
.bdrv_co_writev = &raw_co_writev,
.bdrv_co_write_zeroes = &raw_co_write_zeroes,
.bdrv_co_discard = &raw_co_discard,
.bdrv_co_get_block_status = &raw_co_get_block_status,
@@ -260,11 +190,11 @@ BlockDriver bdrv_raw = {
.has_variable_length = true,
.bdrv_get_info = &raw_get_info,
.bdrv_refresh_limits = &raw_refresh_limits,
.bdrv_probe_blocksizes = &raw_probe_blocksizes,
.bdrv_probe_geometry = &raw_probe_geometry,
.bdrv_is_inserted = &raw_is_inserted,
.bdrv_media_changed = &raw_media_changed,
.bdrv_eject = &raw_eject,
.bdrv_lock_medium = &raw_lock_medium,
.bdrv_ioctl = &raw_ioctl,
.bdrv_aio_ioctl = &raw_aio_ioctl,
.create_opts = &raw_create_opts,
.bdrv_has_zero_init = &raw_has_zero_init

View File

@@ -11,13 +11,11 @@
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu/osdep.h"
#include <inttypes.h>
#include "qapi/error.h"
#include "qemu-common.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "crypto/secret.h"
#include "qemu/cutils.h"
#include <rbd/librbd.h>
@@ -76,18 +74,25 @@ typedef struct RBDAIOCB {
QEMUIOVector *qiov;
char *bounce;
RBDAIOCmd cmd;
int64_t sector_num;
int error;
struct BDRVRBDState *s;
int status;
} RBDAIOCB;
typedef struct RADOSCB {
int rcbid;
RBDAIOCB *acb;
struct BDRVRBDState *s;
int done;
int64_t size;
char *buf;
int64_t ret;
} RADOSCB;
#define RBD_FD_READ 0
#define RBD_FD_WRITE 1
typedef struct BDRVRBDState {
rados_t cluster;
rados_ioctx_t io_ctx;
@@ -230,30 +235,7 @@ static char *qemu_rbd_parse_clientname(const char *conf, char *clientname)
return NULL;
}
static int qemu_rbd_set_auth(rados_t cluster, const char *secretid,
Error **errp)
{
if (secretid == 0) {
return 0;
}
gchar *secret = qcrypto_secret_lookup_as_base64(secretid,
errp);
if (!secret) {
return -1;
}
rados_conf_set(cluster, "key", secret);
g_free(secret);
return 0;
}
static int qemu_rbd_set_conf(rados_t cluster, const char *conf,
bool only_read_conf_file,
Error **errp)
static int qemu_rbd_set_conf(rados_t cluster, const char *conf, Error **errp)
{
char *p, *buf;
char name[RBD_MAX_CONF_NAME_SIZE];
@@ -285,18 +267,14 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf,
qemu_rbd_unescape(value);
if (strcmp(name, "conf") == 0) {
/* read the conf file alone, so it doesn't override more
specific settings for a particular device */
if (only_read_conf_file) {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_setg(errp, "error reading conf file %s", value);
break;
}
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_setg(errp, "error reading conf file %s", value);
break;
}
} else if (strcmp(name, "id") == 0) {
/* ignore, this is parsed by qemu_rbd_parse_clientname() */
} else if (!only_read_conf_file) {
} else {
ret = rados_conf_set(cluster, name, value);
if (ret < 0) {
error_setg(errp, "invalid conf option %s", name);
@@ -322,13 +300,10 @@ static int qemu_rbd_create(const char *filename, QemuOpts *opts, Error **errp)
char conf[RBD_MAX_CONF_SIZE];
char clientname_buf[RBD_MAX_CONF_SIZE];
char *clientname;
const char *secretid;
rados_t cluster;
rados_ioctx_t io_ctx;
int ret;
secretid = qemu_opt_get(opts, "password-secret");
if (qemu_rbd_parsename(filename, pool, sizeof(pool),
snap_buf, sizeof(snap_buf),
name, sizeof(name),
@@ -350,7 +325,7 @@ static int qemu_rbd_create(const char *filename, QemuOpts *opts, Error **errp)
error_setg(errp, "obj size too small");
return -EINVAL;
}
obj_order = ctz32(objsize);
obj_order = ffs(objsize) - 1;
}
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
@@ -362,25 +337,15 @@ static int qemu_rbd_create(const char *filename, QemuOpts *opts, Error **errp)
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(cluster, NULL);
} else if (conf[0] != '\0' &&
qemu_rbd_set_conf(cluster, conf, true, &local_err) < 0) {
rados_shutdown(cluster);
error_propagate(errp, local_err);
return -EIO;
}
if (conf[0] != '\0' &&
qemu_rbd_set_conf(cluster, conf, false, &local_err) < 0) {
qemu_rbd_set_conf(cluster, conf, &local_err) < 0) {
rados_shutdown(cluster);
error_propagate(errp, local_err);
return -EIO;
}
if (qemu_rbd_set_auth(cluster, secretid, errp) < 0) {
rados_shutdown(cluster);
return -EIO;
}
if (rados_connect(cluster) < 0) {
error_setg(errp, "error connecting");
rados_shutdown(cluster);
@@ -440,6 +405,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
acb->status = 0;
qemu_aio_unref(acb);
}
@@ -454,11 +420,6 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_STRING,
.help = "Specification of the rbd image",
},
{
.name = "password-secret",
.type = QEMU_OPT_STRING,
.help = "ID of secret providing the password",
},
{ /* end of list */ }
},
};
@@ -472,7 +433,6 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
char conf[RBD_MAX_CONF_SIZE];
char clientname_buf[RBD_MAX_CONF_SIZE];
char *clientname;
const char *secretid;
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
@@ -487,7 +447,6 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
}
filename = qemu_opt_get(opts, "filename");
secretid = qemu_opt_get(opts, "password-secret");
if (qemu_rbd_parsename(filename, pool, sizeof(pool),
snap_buf, sizeof(snap_buf),
@@ -500,7 +459,7 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
r = rados_create(&s->cluster, clientname);
if (r < 0) {
error_setg(errp, "error initializing");
error_setg(&local_err, "error initializing");
goto failed_opts;
}
@@ -509,28 +468,6 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
s->snap = g_strdup(snap_buf);
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
} else if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, true, errp);
if (r < 0) {
goto failed_shutdown;
}
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, false, errp);
if (r < 0) {
goto failed_shutdown;
}
}
if (qemu_rbd_set_auth(s->cluster, secretid, errp) < 0) {
r = -EIO;
goto failed_shutdown;
}
/*
* Fallback to more conservative semantics if setting cache
* options fails. Ignore errors from setting rbd_cache because the
@@ -544,21 +481,33 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
rados_conf_set(s->cluster, "rbd_cache", "true");
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf, errp);
if (r < 0) {
goto failed_shutdown;
}
}
r = rados_connect(s->cluster);
if (r < 0) {
error_setg(errp, "error connecting");
error_setg(&local_err, "error connecting");
goto failed_shutdown;
}
r = rados_ioctx_create(s->cluster, pool, &s->io_ctx);
if (r < 0) {
error_setg(errp, "error opening pool %s", pool);
error_setg(&local_err, "error opening pool %s", pool);
goto failed_shutdown;
}
r = rbd_open(s->io_ctx, s->name, &s->image, s->snap);
if (r < 0) {
error_setg(errp, "error reading header from %s", s->name);
error_setg(&local_err, "error reading header from %s", s->name);
goto failed_open;
}
@@ -672,6 +621,7 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
acb->error = 0;
acb->s = s;
acb->bh = NULL;
acb->status = -EINPROGRESS;
if (cmd == RBD_AIO_WRITE) {
qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
@@ -683,6 +633,7 @@ static BlockAIOCB *rbd_start_aio(BlockDriverState *bs,
size = nb_sectors * BDRV_SECTOR_SIZE;
rcb = g_new(RADOSCB, 1);
rcb->done = 0;
rcb->acb = acb;
rcb->buf = buf;
rcb->s = acb->s;
@@ -962,11 +913,6 @@ static QemuOptsList qemu_rbd_create_opts = {
.type = QEMU_OPT_SIZE,
.help = "RBD object size"
},
{
.name = "password-secret",
.type = QEMU_OPT_STRING,
.help = "ID of secret providing the password",
},
{ /* end of list */ }
}
};

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.