Compare commits

63 Commits

Author SHA1 Message Date
Michael Roth
ba014af39c Update VERSION for 1.7.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-03-03 16:30:51 -06:00
Alexander Graf
d689974b51 KVM: Use return value for error print
Commit 94ccff13 introduced a more verbose failure message and retry
operations on KVM VM creation. However, it ended up using a variable
for its failure message that had not been initialized yet.

Fix it to use the value it meant to set.

Cc: qemu-stable@nongnu.org
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 521f438e36)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-27 10:54:41 -06:00
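
For context, the bug class here can be shown with a minimal sketch (hypothetical helper and names, not the actual kvm-all.c code): the failure message has to use the value returned by the failing call, not a field that is only filled in on the success path.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* hypothetical stand-in for the KVM_CREATE_VM ioctl wrapper */
    static int create_vm(void) { return -EINTR; }

    int main(void)
    {
        int ret = create_vm();
        if (ret < 0) {
            /* report the value the failing call actually returned */
            fprintf(stderr, "ioctl(KVM_CREATE_VM) failed: %d %s\n",
                    -ret, strerror(-ret));
        }
        return 0;
    }
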
Christoffer Dall
e50218c269 hw/intc/arm_gic: Fix GIC_SET_LEVEL
The GIC_SET_LEVEL macro unfortunately overwrote the entire level
bitmask instead of just or'ing on the necessary bits, causing active
level PPIs on a core to clear PPIs on other cores.

Cc: qemu-stable@nongnu.org
Reported-by: Rob Herring <rob.herring@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1393031030-8692-1-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 6453fa998a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-27 09:38:42 -06:00
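
A minimal sketch of the macro bug (field names are illustrative, not the real arm_gic state layout): a "set" macro on a bitmask shared between cores must OR the new bits in rather than assign over the whole mask.

    /* wrong: clobbers level bits already set by other cores */
    #define SET_LEVEL_BAD(s, irq, cm)  ((s)->irq_level[irq] = (cm))

    /* right: only raises the bits for the requesting core(s) */
    #define SET_LEVEL_OK(s, irq, cm)   ((s)->irq_level[irq] |= (cm))
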
Peter Maydell
fa98e47a25 hw/arm/musicpal: Remove nonexistent CDTP2, CDTP3 registers
The ethernet device in the musicpal only has two tx queues,
but we modelled it with four CTDP registers, presumably a
cut and paste from the rx queue registers. Since the tx_queue[]
array is only 2 entries long this allowed a guest to overrun
this buffer. Remove the nonexistent registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1392737293-10073-1-git-send-email-peter.maydell@linaro.org
Acked-by: Jan Kiszka <jan.kiszka@web.de>
Cc: qemu-stable@nongnu.org
(cherry picked from commit cf143ad350)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-27 09:38:31 -06:00
Peter Maydell
ff51a1d589 hw/intc/exynos4210_combiner: Don't overrun output_irq array in init
The Exynos4210 combiner has IIC_NIRQ inputs and IIC_NGRP outputs;
use the correct constant in the loop initializing our output
sysbus IRQs so that we don't overrun the output_irq[] array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1392659611-8439-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Andreas Färber <afaerber@suse.de>
Cc: qemu-stable@nongnu.org
(cherry picked from commit fce0a82608)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-27 09:38:08 -06:00
Peter Maydell
5444df1581 hw/timer/arm_timer: Avoid array overrun for bad addresses
The integrator's timer read/write functions log an error for
bad addresses in guest accesses, but were falling through and
using an out of bounds array index rather than returning early.
Fix this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Message-id: 1392647854-8067-4-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
(cherry picked from commit cba933b225)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-27 09:37:58 -06:00
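
The control-flow issue is the classic "log, then fall through" pattern; a standalone sketch (register layout and names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_REGS 8u   /* illustrative register count */

    static uint32_t regs_read(const uint32_t *regs, unsigned offset)
    {
        unsigned idx = offset >> 2;
        if (idx >= NUM_REGS) {
            fprintf(stderr, "bad guest read at offset 0x%x\n", offset);
            return 0;          /* must return here, not fall through */
        }
        return regs[idx];      /* only reached for in-bounds indices */
    }
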
Peter Maydell
e498311693 hw/misc/arm_sysctl: Fix bad boundary check on mb clock accesses
Fix incorrect use of sizeof() rather than ARRAY_SIZE() to guard
accesses into the mb_clock[] array, which was allowing a malicious
guest to overwrite the end of the array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Message-id: 1392647854-8067-2-git-send-email-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
(cherry picked from commit ec1efab957)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-27 09:37:43 -06:00
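
The sizeof()/ARRAY_SIZE() mix-up is a generic C pitfall; a runnable sketch of the difference:

    #include <stdint.h>
    #include <stdio.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    int main(void)
    {
        uint32_t mb_clock[6];
        /* sizeof() is the size in bytes (24 here), not the element count (6);
         * using it as an index bound lets a guest write past the array. */
        printf("sizeof = %zu, ARRAY_SIZE = %zu\n",
               sizeof(mb_clock), ARRAY_SIZE(mb_clock));
        return 0;
    }
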
Markus Armbruster
4736fb34f7 qga: Fix memory allocation pasto
qmp_guest_file_seek() allocates memory for a GuestFileRead object
instead of the GuestFileSeek object it actually uses.  Harmless,
because the GuestFileRead is slightly larger.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit 10b7c5dd0d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-25 13:34:15 -06:00
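
A one-line sketch of how a type-checked allocator avoids this kind of copy-paste slip (GLib's g_new0(); the surrounding qemu-ga code is omitted):

    /* g_malloc0(sizeof(GuestFileRead)) pasted into the seek handler silently
     * allocates the wrong size; g_new0() ties the size to the pointer type. */
    GuestFileSeek *seek = g_new0(GuestFileSeek, 1);
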
Tomoki Sekiyama
6d0a48acd8 qga: vss-win32: Fix interference with snapshot deletion by other VSS request
When a VSS requester such as vshadow.exe or diskshadow.exe requests to
delete snapshots, the qemu-ga VSS provider's DeleteSnapshots() is also called
and returns E_NOTIMPL, which makes the deletion fail.
To avoid this issue, return S_OK and set values indicating that no snapshots
were deleted by the qemu-ga VSS provider.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Reviewed-by: Gal Hammer <ghammer@redhat.com>
Reviewed-by: Yan Vugenfirer <yvugenfi@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit d9e1f574cb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-25 13:34:03 -06:00
Tomoki Sekiyama
5e5d4fc68e qga: vss-win32: Fix interference with snapshot creation by other VSS requesters
When a VSS requester such as vshadow.exe or diskshadow.exe requests to
create disk snapshots, Windows may choose the qemu-ga VSS provider if it is
the only provider registered on the system. However, because it only provides
a function to freeze the filesystem, the snapshotting fails.

This patch adds a check into CQGAVssProvider::IsVolumeSupported() to reject
requests from other VSS requesters, so that the other provider is chosen.

The requester check is done by confirming that the event channels between
qemu-ga's requester and provider are established. To ensure that the events
are initialized when CQGAVssProvider::IsVolumeSupported() is called, the
initialization is moved earlier.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Reviewed-by: Gal Hammer <ghammer@redhat.com>
Reviewed-by: Yan Vugenfirer <yvugenfi@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit ff8adbcfdb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-25 13:33:54 -06:00
Tomoki Sekiyama
68e3bb1128 qga: vss-win32: Use NULL as an invalid pointer for OpenEvent and CreateEvent
The OpenEvent and CreateEvent WinAPI calls return NULL when they fail to
open/create event handles, not INVALID_HANDLE_VALUE (although their return
types are HANDLE).
This replaces INVALID_HANDLE_VALUE with NULL for event handles.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Reviewed-by: Gal Hammer <ghammer@redhat.com>
Reviewed-by: Yan Vugenfirer <yvugenfi@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit 4c1b8f1e83)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-25 13:33:47 -06:00
Paolo Bonzini
c885105bf3 adlib: fix patching of port I/O addresses
Commit 2b21fb5 (adlib: sort offsets in portio registration, 2013-08-14)
fixed the offsets in adlib_portio_list, but forgot the matching indices
in adlib_realizefn.

Reported at http://virtuallyfun.superglobalmegacorp.com/?p=3616 by
"neozeed".

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 7f0ba7bb43)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 14:15:35 -06:00
Huw Davies
2cd72adb1c tcg-arm: The shift count of op_rotl_i32 is in args[2] not args[1].
It's this that should be subtracted from 0x20 when converting to a right rotate.

Cc: qemu-stable@nongnu.org
Signed-off-by: Huw Davies <huw@codeweavers.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(cherry picked from commit 7a3a00979d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:40:04 -06:00
Paolo Bonzini
819ddf7d1f memory: fix limiting of translation at a page boundary
Commit 360e607 (address_space_translate: do not cross page boundaries,
2014-01-30) broke MMIO accesses in cases where the section is shorter
than the full register width.  This can happen for example with the
Bochs DISPI registers, which are 16 bits wide but have only a 1-byte
long MemoryRegion (if you write to the "second byte" of the register
your access is discarded; it doesn't write only to half of the register).

Restrict the action of commit 360e607 to direct RAM accesses.  This
is enough for Xen, since MMIO will not go through the mapcache.

Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit a87f39543a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:36:00 -06:00
Mark Cave-Ayland
ec6428b598 Update OpenBIOS images
Update OpenBIOS images to SVN r1246 built from submodule.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
(cherry picked from commit fbb9c590ca)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:41 -06:00
Stefan Weil
424388980d linux-user: Fix trampoline code for CRIS
__put_user can write bytes, words (2 bytes) or longwords (4 bytes).
Here obviously words should have been written, but bytes were written,
so values like 0x9c5f were truncated to 0x5f.

Fix this by changing retcode from uint8_t to uint16_t in
target_signal_frame and also in the unused rt_signal_frame.

This problem was reported by static code analysis (smatch).

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Acked-by: Riku Voipio <riku.voipio@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
(cherry picked from commit 8cfc114a2f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:41 -06:00
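
A sketch of why the element type matters (struct layout is illustrative): in linux-user, __put_user() derives the store width from the destination type, so a uint8_t destination truncates a 16-bit opcode like 0x9c5f to 0x5f.

    struct target_signal_frame_sketch {
        /* ... */
        uint16_t retcode[4];   /* was a uint8_t array: stores were truncated */
    };

    /* each store now writes a full 16-bit CRIS instruction word, e.g.:
     *   __put_user(0x9c5f, &frame->retcode[0]);
     */
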
Stefan Weil
6b579c8c53 i386: Add missing include file for QEMU_PACKED
Instead of packing BiosLinkerLoaderEntry, an unused global variable called
QEMU_PACKED was created (detected by smatch static code analysis).

Including qemu-common.h gets the right definition and also includes some
standard include files which now can be removed here.

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit c428c5a21c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:41 -06:00
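
Why a missing include shows up as a stray variable rather than a compile error, in a sketch (struct and field names are illustrative; QEMU_PACKED normally expands to __attribute__((packed))):

    #define QEMU_PACKED __attribute__((packed))

    /* with the macro defined, the attribute packs the struct type */
    struct loader_entry_sketch { char command; int arg; } QEMU_PACKED;

    /* without the definition, the same text is still valid C: it simply
     * declares a global variable named QEMU_PACKED of an unpacked struct
     * type, which is exactly what smatch flagged. */
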
thomas knych
47c6edce7a KVM: Retry KVM_CREATE_VM on EINTR
Upstreaming this change from Android (https://android-review.googlesource.com/54211).

On heavily loaded machines with many VM instances we see KVM_CREATE_VM
failing with EINTR on this path:

kvm_dev_ioctl_create_vm -> kvm_create_vm -> kvm_init_mmu_notifier -> mmu_notifier_register ->  do_mmu_notifier_register -> mm_take_all_locks

which checks whether any signals have been raised while it was acquiring locks
and returns EINTR.  Retrying the system call greatly improves reliability.

Cc: qemu-stable@nongnu.org
Signed-off-by: thomas knych <thomaswk@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 94ccff1338)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:41 -06:00
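
The retry loop described above, as a generic standalone sketch (raw ioctl() semantics rather than QEMU's kvm_ioctl() wrapper):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* keep retrying KVM_CREATE_VM while the kernel reports EINTR */
    static int create_vm_retry(int kvm_fd)
    {
        int ret;
        do {
            ret = ioctl(kvm_fd, KVM_CREATE_VM, 0UL);
        } while (ret == -1 && errno == EINTR);
        return ret;   /* VM fd on success, -1 with errno set otherwise */
    }
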
Eric Farman
a5221ee143 virtio-scsi: Prevent assertion on missed events
In some cases, an unplug can cause events to be dropped, which
leads to an assertion failure when preparing to notify the guest
kernel.

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 49fb65c7f9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Eric Farman
30a0fc3607 virtio-scsi: Cleanup of I/Os that never started
There is still a small window that occurs when a cancel I/O affects
an asynchronous I/O operation that hasn't started.  In other words,
when the residual data length equals the expected data length.

Today, the routine virtio_scsi_command_complete fails because the
VirtIOSCSIReq pointer (from the hba_private field in SCSIRequest)
was cleared earlier when virtio_scsi_complete_req was called by
the virtio_scsi_request_cancelled routine.  As a result, the
virtio_scsi_command_complete routine needs to simply return when
it is processing a SCSIRequest block that was marked canceled.

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e9c0f0f58a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Paolo Bonzini
ad0a6444ad scsi: Assign cancel_io vector for scsi_disk_emulate_ops
Some emulated disk operations (MODE SELECT, UNMAP, WRITE SAME)
can trigger asynchronous I/Os.  Provide the cancel_io callback
to ensure that AIOCBs are properly cleaned up.

Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Cc: qemu-stable@nongnu.org
[Tweak commit message. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 33325a53f1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Paolo Bonzini
6b7ed87665 scsi: Support TEST UNIT READY in the dummy LUN0
SeaBIOS waits for LUN0 to respond to the TEST UNIT READY command
in order to decide whether it should be part of the boot sequence.
If LUN0 does not respond to the command, boot is delayed by up
to 5 seconds.  This currently happens when there is no LUN0 on
a target.  Fix that by adding a trivial implementation of the
command.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 1cb27d9233)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Peter Maydell
b54720b5d6 block/curl: Implement the libcurl timer callback interface
libcurl versions 7.16.0 and later have a timer callback interface which
must be implemented in order for libcurl to make forward progress (it
will sometimes rely on being called back on the timeout if there are
no file descriptors registered). Implement the callback, and use a
QEMU AIO timer to ensure we prod libcurl again when it asks us to.

Based on Peter's original patch plus my fix to add curl_multi_timeout_do.
Should compile just fine even on older versions of libcurl.

I also tried copy-on-read and streaming:

    $ ./qemu-img create -f qcow2 -o \
         backing_file=http://download.fedoraproject.org/pub/fedora/linux/releases/20/Live/x86_64/Fedora-Live-Desktop-x86_64-20-1.iso \
         foo.qcow2 1G
    $ x86_64-softmmu/qemu-system-x86_64 \
         -drive if=none,file=foo.qcow2,copy-on-read=on,id=cd \
         -device ide-cd,drive=cd --enable-kvm -m 1024

Direct http usage is probably too slow, but with copy-on-read ultimately
the image does boot!

After some time, streaming gets canceled by an EIO, which needs further
investigation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 031fd1be56)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
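
The libcurl half of that interface, as a sketch (the arm/cancel helpers stand in for the QEMU AIO timer plumbing and are hypothetical):

    #include <curl/curl.h>

    /* hypothetical hooks into the embedder's timer machinery */
    void arm_backend_timer(void *opaque, long timeout_ms);
    void cancel_backend_timer(void *opaque);

    /* libcurl asks to be woken up after timeout_ms; -1 means cancel */
    static int timer_cb(CURLM *multi, long timeout_ms, void *opaque)
    {
        if (timeout_ms < 0) {
            cancel_backend_timer(opaque);
        } else {
            arm_backend_timer(opaque, timeout_ms);
        }
        return 0;
    }

    void register_timer_cb(CURLM *multi, void *opaque)
    {
        curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
        curl_multi_setopt(multi, CURLMOPT_TIMERDATA, opaque);
        /* when the timer fires, kick libcurl with:
         *   curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
         */
    }
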
Alex Williamson
c426a2da12 vfio-pci: Release all MSI-X vectors when disabled
We were relying on msix_unset_vector_notifiers() to release all the
vectors when we disable MSI-X, but this only happens when MSI-X is
still enabled on the device.  Perform further cleanup by releasing
any remaining vectors listed as in-use after this call.  This caused
a leak of IRQ routes on hotplug depending on how the guest OS prepared
the device for removal.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 3e40ba0faf)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Luiz Capitulino
15a14f2eeb migration: qmp_migrate(): keep working after syntax error
If a user or QMP client enters bad syntax for the migrate
command in QMP/HMP, then the migrate command will never succeed
from that point on.

For example, if you enter:

(qemu) migrate tcp;0:4444
migrate: Parameter 'uri' expects a valid migration protocol

Then the migrate command will always fail from now on:

(qemu) migrate tcp:0:4444
migrate: There's a migration process in progress

The problem is that qmp_migrate() sets the migration status to
MIG_STATE_SETUP and doesn't reset it on syntax error. This bug
was introduced by commit 29ae8a4133.

Reviewed-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit c950114286)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
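
The shape of the fix, sketched (enum and call names follow the 1.7-era migration.c, but treat them as illustrative): when URI validation fails after the state machine has already moved to SETUP, the state must be rolled back before returning.

    } else {
        error_set(errp, QERR_INVALID_PARAMETER_VALUE, "uri",
                  "a valid migration protocol");
        s->state = MIG_STATE_ERROR;   /* roll back from MIG_STATE_SETUP */
        return;
    }
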
Stefan Weil
88d08de7e5 mainstone: Fix duplicate array values for key 'space'
cgcc reported a duplicate initialisation. Mainstone includes a matrix
keyboard where two different positions map to 'space'.

QEMU uses the reversed mapping and does not map 'space' to two different
matrix positions.

Some other keys are either missing or might be mapped wrongly (cf. Linux
kernel code). Don't fix these until someone can test them with real
hardware, but add TODO comments.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 7dbc1158bc)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Corey Bryant
109b2439f0 seccomp: exit if seccomp_init() fails
This fixes a bug where we weren't exiting if seccomp_init() failed.

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
Acked-by: Eduardo Otubo <otubo@linux.vnet.ibm.com>
Acked-by: Paul Moore <pmoore@redhat.com>
(cherry picked from commit 2a13f99112)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
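
A standalone sketch of the check (libseccomp's seccomp_init() returns NULL on failure):

    #include <seccomp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
        if (ctx == NULL) {
            fprintf(stderr, "failed to initialize seccomp context\n");
            exit(1);               /* do not keep running unsandboxed */
        }
        /* seccomp_rule_add()/seccomp_load() would follow here */
        seccomp_release(ctx);
        return 0;
    }
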
Cornelia Huck
c2f6dc66bc s390x/kvm: Fix diagnose handling.
The instruction intercept handler for diagnose used only the displacement
when trying to calculate the function code. This is only correct for base
0, however; we need to perform a complete base/displacement address
calculation and use bits 48-63 as the function code.

Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 638129ff47)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
Laszlo Ersek
dc9e1e798c qemu_opts_parse(): always check return value
qemu_opts_parse() can always return NULL, even if the QemuOptsList.desc in
question would be trivial to satisfy (eg. because it's empty). For
example:

qemu_opts_parse()
  opts_parse()
    qemu_opts_create()
      id_wellformed()

In practice:

  $ .../qemu-system-x86_64 -acpitable id=3
  qemu-system-x86_64: -acpitable id=3: Parameter 'id' expects an identifier
  **
  ERROR:vl.c:3491:main: assertion failed: (opts != NULL)
  Aborted (core dumped)

  $ .../qemu-system-x86_64 -smbios id=3
  qemu-system-x86_64: -smbios id=3: Parameter 'id' expects an identifier
  Segmentation fault (core dumped)

I checked all qemu_opts_parse() invocations (and all drive_def()
invocations too, because it blindly forwards the former's retval). Only
the two above examples look problematic.

Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1385658779-7529-1-git-send-email-lersek@redhat.com
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
(cherry picked from commit f46e720a82)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
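
The defensive pattern the commit asks for, sketched with the 1.7-era three-argument qemu_opts_parse() (the option group shown is the -acpitable one from the example above):

    QemuOpts *opts = qemu_opts_parse(qemu_find_opts("acpi"), optarg, 1);
    if (!opts) {
        exit(1);   /* the parser already printed the error message */
    }
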
Peter Lieven
02e1c55ddd block/iscsi: use a bh to schedule co reentrance
This fixes a potential segfault and a performance regression.

If the coroutine is reentered directly in iscsi_co_generic_cb,
iscsi_process_{read,write} are interrupted and reentered at some
later point. On the one hand this could happen after an iscsi_close
where the iscsi context is already gone (segfault). On the other
hand it limits the number of callbacks processed in each
aio_dispatch to one (a potential performance regression).

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 8b9dfe9098)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:40 -06:00
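
A sketch of the deferral pattern (QEMU bottom-half API; the task struct is illustrative and the coroutine call uses the 1.7-era two-argument form): the library callback only schedules a bottom half, and the coroutine is reentered from the main loop.

    typedef struct SketchTask {     /* stands in for the real iscsi task struct */
        Coroutine *co;
        QEMUBH *bh;
    } SketchTask;

    static void reenter_bh(void *opaque)
    {
        SketchTask *task = opaque;
        qemu_bh_delete(task->bh);
        qemu_coroutine_enter(task->co, NULL);   /* safe: back in the main loop */
    }

    /* called from inside libiscsi's completion callback */
    static void generic_cb_sketch(SketchTask *task)
    {
        task->bh = qemu_bh_new(reenter_bh, task);
        qemu_bh_schedule(task->bh);   /* defer instead of reentering here */
    }
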
Michael S. Tsirkin
9692bad34d hpet: fix build with CONFIG_HPET off
Make hpet_find() inline so we don't need
to build hpet.c to check whether HPET is enabled.

Fixes link error with CONFIG_HPET off.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 142e0950cf)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:39 -06:00
Aurelien Jarno
6ec62b79e3 tcg/optimize: fix known-zero bits for right shift ops
32-bit versions of sar and shr ops should not propagate known-zero bits
from the unused 32 high bits. For sar it could even lead to wrong code
being generated.

Cc: qemu-stable@nongnu.org
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
(cherry picked from commit e46b225a31)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:39 -06:00
Brad
0e282aca86 Fix QEMU build on OpenBSD on x86 archs
This resolves the build issue with building the ROMs on OpenBSD on x86 archs.
As of OpenBSD 5.3 the compiler builds PIE binaries by default and thus the
whole OS/packages and so forth. The ROMs need to have PIE disabled.
Check in configure whether the compiler supports the flags for disabling
PIE, and if it does then use them for building the ROMs. This fixes the
following buildbot failure:

From the OpenBSD buildbots:
  Building optionrom/multiboot.img
ld: multiboot.o: relocation R_X86_64_16 can not be used when making a shared object; recompile with -fPIC

Signed-off-by: Brad Smith <brad@comstyle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 46eef33b89)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:39 -06:00
Petar Jovanovic
75b4b747a2 linux-user: create target_structs header to place ipc_perm and shmid_ds
Creating target_structs header in linux-user/$arch/ and making
target_ipc_perm and target_shmid_ds its first inhabitants.
The struct definitions may/should be further fine-tuned by arch maintainers.

Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
(cherry picked from commit 55a2b1631f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-21 00:34:39 -06:00
Petar Jovanovic
0bc4142e7f linux-user: pass correct parameter to do_shmctl()
Fix shmctl issue by passing correct parameter buf to do_shmctl().

Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
(cherry picked from commit a29267846a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 22:24:21 -06:00
Petar Jovanovic
b9cabc36a2 target-mips: fix 64-bit FPU config for user-mode emulation
The FR bit should be initialized to 1 for MIPS64, provided the bit is
writable and the CPU has an FPU unit. It should be initialized to zero
for MIPS32.
This fixes various MIPS32 issues with FPU instructions whose behaviour
defaulted to 64-bit FPU mode.

Signed-off-by: Petar Jovanovic <petar.jovanovic@imgtec.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 4d66261f71)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Gerd Hoffmann
03bc4f6628 piix: fix 32bit pci hole
Make the 32bit pci hole start at end of ram, so all possible address
space is covered.

We used to try and make addresses aligned so they are easier to cover
with MTRRs, but since they are cosmetic on KVM, this is probably not
worth worrying about.
Of course the firmware can use less than that.  Leaving space unused is
no problem, mapping pci bars outside the hole causes problems though.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit ddaaefb4dd)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Michael S. Tsirkin
8b6d92a565 pc: map PCI address space as catchall region for not mapped addresses
With a help of negative memory region priority PCI address space
is mapped underneath RAM regions effectively catching every access
to addresses not mapped by any other region.
It simplifies PCI address space mapping into system address space.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
(cherry picked from commit 83d08f2673)

*prereq for ddaaefb backport

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Marcel Apfelbaum
44c68b84ae exec: separate sections and nodes per address space
Every address space has its own nodes and sections, but
it uses the same global arrays of nodes/sections.

This limits the number of devices that can be attached
to the guest to 20-30 devices. It happens because:
 - The sections array is limited to 2^12 entries.
 - The main memory has at least 100 sections.
 - Each device address space is actually an alias to
   main memory, multiplying its number of nodes/sections.

Remove the limitation by using separate arrays of
nodes and sections for each address space.

Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 53cb28cbfe)

Conflicts:

	exec.c

*removed dependency on b35ba30

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Michael S. Tsirkin
4c3e00d83f exec: pass hw address to phys_page_find
callers always shift by target page bits so let's just do this
internally.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 97115a8d45)

*prereq for 53cb28c backport

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Michael S. Tsirkin
6a108c4802 exec: replace leaf with skip
In preparation for dynamic radix tree depth support, rename is_leaf
field to skip, telling us how many bits to skip to next level.
Set to 0 for leaf.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9736e55b78)

*prereq for 53cb28c backport

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Paolo Bonzini
e480a1b8ff split definitions for exec.c and translate-all.c radix trees
The exec.c and translate-all.c radix trees are quite different, and
the exec.c one in particular is not limited to the CPU---it can be
used also by devices that do DMA, and in that case the address space
is not limited to TARGET_PHYS_ADDR_SPACE_BITS bits.

We want to make exec.c's radix trees 64-bit wide.  As a first step,
stop sharing the constants between exec.c and translate-all.c.
exec.c gets P_L2_* constants, translate-all.c gets V_L2_*, for
consistency with the existing V_L1_* symbols.  Though actually
in the softmmu case translate-all.c is also indexed by physical
addresses...

This patch has no semantic change.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 03f4995781)

*prereq for 53cb28c backport

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Markus Armbruster
29b0fcc181 qdev-monitor: Avoid device_add crashing on non-device driver name
Watch this:

    $ upstream-qemu -nodefaults -S -display none -monitor stdio
    QEMU 1.7.50 monitor - type 'help' for more information
    (qemu) device_add rng-egd
    /work/armbru/qemu/qdev-monitor.c:491:qdev_device_add: Object 0x2089b00 is not an instance of type device
    Aborted (core dumped)

Crashes because "rng-egd" exists, but isn't a subtype of TYPE_DEVICE.
Broken in commit 18b6dad.

Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 061e84f7a4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
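
The shape of the fix, sketched with QOM calls (the error-reporting call is illustrative; qdev-monitor.c may report it differently): resolve the class by name, then verify it is a TYPE_DEVICE subtype before casting.

    ObjectClass *oc = object_class_by_name(driver);
    if (oc && !object_class_dynamic_cast(oc, TYPE_DEVICE)) {
        error_setg(errp, "'%s' is not a device type", driver);
        return NULL;
    }
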
Alexander Graf
b8fca09eec x86: only allow real mode to access 32bit without LMA
When we're running in non-64bit mode with qemu-system-x86_64 we can
still end up with virtual addresses that are above the 32bit boundary
if a segment offset is set up.

GNU Hurd does exactly that. It sets the segment offset to 0x80000000 and
puts its EIP value to 0x8xxxxxxx to access low memory.

This doesn't hit us when we enable paging, as there we just mask away the
unused bits. But with real mode, we assume that vaddr == paddr which is
wrong in this case. Real hardware wraps the virtual address around at the
32bit boundary. So let's do the same.

This fixes booting GNU Hurd in qemu-system-x86_64 for me.

Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 33dfdb56f2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
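
The wrap-around in one line (a sketch; the real change sits in the x86 virtual-to-physical translation path):

    /* seg.base + EIP can exceed 4 GiB; real hardware wraps at 32 bits */
    uint64_t linear = seg_base + eip;
    if (!long_mode_active) {
        linear &= 0xffffffffULL;   /* without LMA, vaddr == paddr only after masking */
    }
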
Paolo Bonzini
50a203c3b9 vl: add missing transition debug->finish_migrate
This fixes an abort if you invoke the "migrate" command while the
guest is being debugged.

Cc: qemu-stable@nongnu.org
Cc: lcapitulino@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit eca01d3a93)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Matthew Garrett
f227ed1842 migration: Fix rate limit
The migration thread appears to want to allow writeout to occur at full
speed rather than being rate limited during completion of state saving,
but sets the limit to INT_MAX when xfer_limit is INT64_MAX. This causes
problems if there's more than 2GB of state left to save at this point. It
probably ought to just be INT64_MAX instead.

Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 40596834c0)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
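
The truncation in one line (qemu_file_set_rate_limit() takes an int64_t, so the sentinel should be INT64_MAX):

    /* wrong: truncates the "unlimited" sentinel to 2^31 - 1 bytes */
    qemu_file_set_rate_limit(s->file, INT_MAX);

    /* right: keep the full 64-bit "no limit" value */
    qemu_file_set_rate_limit(s->file, INT64_MAX);
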
Peter Crosthwaite
2dc7975300 qom: Split out object and class caches
The object-cast and class-cast caches cannot be shared because class
caching is conditional on the target type not being an interface and
object caching is unconditional. This leads to a bug when a class cast
to an interface follows an object cast to the same interface type:

FooObject = FOO(obj);
FooClass = FOO_GET_CLASS(obj);

Where TYPE_FOO is an interface. The first (object) cast will be
successful and cache the casting result (i.e. TYPE_FOO will be cached).
The second (class) cast will then check the shared cast cache
and register a hit. The issue is, when a class cast hits in the cache
it just returns a pointer cast of the input class (i.e. the concrete
class).

When casting to an interface, the cast itself must return the
interface class, not the concrete class. The implementation of class
cast caching already ensures that the returned cast result is only
a pointer cast before caching. The object cast logic however does
not have this check.

Resolve by just splitting the object and class caches.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Nathan Rossi <nathan.rossi@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
(cherry picked from commit 0ab4c94c84)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Marcel Apfelbaum
8fa58fe910 memory.c: bugfix - ref counting mismatch in memory_region_find
'address_space_get_flatview' gets a reference to a FlatView.
If the flatview lookup fails, the code returns without
"unreferencing" the view.

Cc: qemu-stable@nongnu.org

Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6307d974f9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:18 -06:00
Gerd Hoffmann
97f74de48c intel-hda: fix position buffer
Fix position buffer updates to use the correct stream offset.

Without this patch both IN (record) and OUT (playback) streams
will update the IN buffer positions.  The linux kernel notices
and complains:
  hda-intel: Invalid position buffer, using LPIB read method instead.

The bug may also lead to glitches when recording and playing
at the same time:
  https://bugzilla.redhat.com/show_bug.cgi?id=947785

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit d58ce68a45)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:17 -06:00
Paolo Bonzini
30a08ab4e1 scsi-disk: fix VERIFY emulation
VERIFY emulation was completely botched (and remained botched through
all the refactorings).  The command must be emulated both in check-medium
mode (BYTCHK=00, which we implement by doing nothing) and in check-bytes
mode (which we do not implement yet).  Unlike WRITE AND VERIFY (which we
treat simply as WRITE with FUA bit set), VERIFY cannot be handled like
READ.  In fact the device is _receiving_ data for VERIFY, not _sending_
it like READ.

Cc: qemu-stable@nongnu.org
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d97e773081)

Conflicts:

	hw/scsi/scsi-disk.c

*fixed up WRITE_SAME_* conflicts due to 84f94a9a not being in 1.7.0

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:17 -06:00
Paolo Bonzini
df3e347891 scsi-bus: fix transfer length and direction for VERIFY command
The amount of bytes to transfer depends on the BYTCHK field.
If any data is transferred, it is sent to the device.

Cc: qemu-stable@nongnu.org
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d12ad44cc4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:59:17 -06:00
Paolo Bonzini
810766d9dd virtio-pci: add device_unplugged callback
This fixes a crash in hot-unplug of virtio-pci devices behind a PCIe
switch.  The crash happens because the ioeventfd is still set when the
child is destroyed (destruction happens in postorder).  Then the proxy
tries to unset the ioeventfd, but the virtqueue structure that holds the
EventNotifier has been trashed in the meanwhile.  kvm_set_ioeventfd_pio
does not expect failure and aborts.

The fix is simply to move parts of uninitialization to a new
device_unplugged callback, which is called before the child is destroyed.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 06a1307379)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
3220207c27 virtio-rng: switch exit callback to VirtioDeviceClass
This ensures hot-unplug is handled properly by the proxy, and avoids
leaking bus_name which is freed by virtio_device_exit.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 7bb6edb0e3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
def56d28cf virtio-balloon: switch exit callback to VirtioDeviceClass
This ensures hot-unplug is handled properly by the proxy, and avoids
leaking bus_name which is freed by virtio_device_exit.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit baa61b9870)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
478f1f6ccf virtio-scsi: switch exit callback to VirtioDeviceClass
This ensures hot-unplug is handled properly by the proxy, and avoids
leaking bus_name which is freed by virtio_device_exit.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e3c9d76acc)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
8f08550ee2 virtio-net: switch exit callback to VirtioDeviceClass
This ensures hot-unplug is handled properly by the proxy, and avoids
leaking bus_name which is freed by virtio_device_exit.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 3786cff5eb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
e6c007056c virtio-serial: switch exit callback to VirtioDeviceClass
This ensures hot-unplug is handled properly by the proxy, and avoids
leaking bus_name which is freed by virtio_device_exit.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0e86c13fe2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
e84e23de35 virtio-blk: switch exit callback to VirtioDeviceClass
This ensures hot-unplug is handled properly by the proxy, and avoids
leaking bus_name which is freed by virtio_device_exit.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 40dfc16f5f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
40699a469e virtio-bus: cleanup plug/unplug interface
Right now we have these pairs:

- virtio_bus_plug_device/virtio_bus_destroy_device.  The first
  takes a VirtIODevice, the second takes a VirtioBusState

- device_plugged/device_unplug callbacks in the VirtioBusClass
  (here it's just the naming that is inconsistent)

- virtio_bus_destroy_device is not called by anyone (and since
  it calls qdev_free, it would be called by the proxies---but
  then the callback is useless since the proxies can do whatever
  they want before calling virtio_bus_destroy_device)

And there is a k->init but no k->exit, hence virtio_device_exit is
overwritten by subclasses (except virtio-9p).  This cleans it up by:

- renaming the device_unplug callback to device_unplugged

- renaming virtio_bus_plug_device to virtio_bus_device_plugged,
  matching the callback name

- renaming virtio_bus_destroy_device to virtio_bus_device_unplugged,
  removing the qdev_free, making it take a VirtIODevice and calling it
  from virtio_device_exit

- adding a k->exit callback

virtio_device_exit is still overwritten, the next patches will fix that.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 5e96f5d2f8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
cbf23fdf21 virtio-pci: remove vdev field
The vdev field is complicated to synchronize.  Just access the
BusState's list of children.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a3fc66d9fd)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:15 -06:00
Paolo Bonzini
a9b9ca7e0e virtio-ccw: remove vdev field
The vdev field is complicated to synchronize.  Just access the
BusState's list of children.

Cc: qemu-stable@nongnu.org
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit f24a684073)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:14 -06:00
Paolo Bonzini
d765275bb1 virtio-bus: remove vdev field
The vdev field is complicated to synchronize.  Just access the
BusState's list of children.

Cc: qemu-stable@nongnu.org
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 06d3dff072)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:14 -06:00
Paolo Bonzini
f47542925e virtio-ccw: move virtio_ccw_stop_ioeventfd to virtio_ccw_busdev_unplug
Similar to the PCI bug that prompted these patches, virtio-ccw will
segfault after the reworking of hotplug/hot-unplug.  Prepare for
this by moving virtio_ccw_stop_ioeventfd to before the freeing
of the proxy device.

A better place for this could be the device_unplugged callback
for the virtio-ccw bus.  However, we do not yet have a callback
that works: this patch avoids the problem while leaving the tree
bisectable.

Cc: qemu-stable@nongnu.org
Reported-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Suggested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0b81c1ef5c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-02-20 21:36:14 -06:00
1229 changed files with 98631 additions and 154188 deletions

.gitignore (155 changed lines)

@@ -1,75 +1,68 @@
/config-devices.*
/config-all-devices.*
/config-all-disas.*
/config-host.*
/config-target.*
/config.status
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-ust-provider.h
/trace/generated-ust.c
/libcacard/trace/generated-tracers.c
config-devices.*
config-all-devices.*
config-all-disas.*
config-host.*
config-target.*
trace/generated-tracers.h
trace/generated-tracers.c
trace/generated-tracers-dtrace.h
trace/generated-tracers.dtrace
trace/generated-events.h
trace/generated-events.c
libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
*-softmmu
*-darwin-user
*-linux-user
*-bsd-user
libdis*
libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qmp-commands.h
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
/qemu-img-cmds.h
/qemu-io
/qemu-ga
/qemu-bridge-helper
/qemu-monitor.texi
/qmp-commands.txt
/vscclient
/test-bitops
/test-coroutine
/test-int128
/test-opts-visitor
/test-qmp-input-visitor
/test-qmp-output-visitor
/test-string-input-visitor
/test-string-output-visitor
/test-visitor-serialization
/fsdev/virtfs-proxy-helper
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
/.gdbinit
linux-headers/asm
qapi-generated
qapi-types.[ch]
qapi-visit.[ch]
qmp-commands.h
qmp-marshal.c
qemu-doc.html
qemu-tech.html
qemu-doc.info
qemu-tech.info
qemu.1
qemu.pod
qemu-img.1
qemu-img.pod
qemu-img
qemu-nbd
qemu-nbd.8
qemu-nbd.pod
qemu-options.def
qemu-options.texi
qemu-img-cmds.texi
qemu-img-cmds.h
qemu-io
qemu-ga
qemu-bridge-helper
qemu-monitor.texi
vscclient
qmp-commands.txt
test-bitops
test-coroutine
test-int128
test-opts-visitor
test-qmp-input-visitor
test-qmp-output-visitor
test-string-input-visitor
test-string-output-visitor
test-visitor-serialization
fsdev/virtfs-proxy-helper
fsdev/virtfs-proxy-helper.1
fsdev/virtfs-proxy-helper.pod
.gdbinit
*.a
*.aux
*.cp
*.dvi
*.exe
*.dll
*.so
*.mo
*.fn
*.ky
*.log
@@ -83,7 +76,7 @@ libuser
*.tp
*.vr
*.d
!/scripts/qemu-guest-agent/fsfreeze-hook.d
!scripts/qemu-guest-agent/fsfreeze-hook.d
*.o
*.lo
*.la
@@ -96,22 +89,22 @@ libuser
*.gcda
*.gcno
patches
/pc-bios/bios-pq/status
/pc-bios/vgabios-pq/status
/pc-bios/optionrom/linuxboot.asm
/pc-bios/optionrom/linuxboot.bin
/pc-bios/optionrom/linuxboot.raw
/pc-bios/optionrom/linuxboot.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
/pc-bios/optionrom/multiboot.img
/pc-bios/optionrom/kvmvapic.asm
/pc-bios/optionrom/kvmvapic.bin
/pc-bios/optionrom/kvmvapic.raw
/pc-bios/optionrom/kvmvapic.img
/pc-bios/s390-ccw/s390-ccw.elf
/pc-bios/s390-ccw/s390-ccw.img
pc-bios/bios-pq/status
pc-bios/vgabios-pq/status
pc-bios/optionrom/linuxboot.asm
pc-bios/optionrom/linuxboot.bin
pc-bios/optionrom/linuxboot.raw
pc-bios/optionrom/linuxboot.img
pc-bios/optionrom/multiboot.asm
pc-bios/optionrom/multiboot.bin
pc-bios/optionrom/multiboot.raw
pc-bios/optionrom/multiboot.img
pc-bios/optionrom/kvmvapic.asm
pc-bios/optionrom/kvmvapic.bin
pc-bios/optionrom/kvmvapic.raw
pc-bios/optionrom/kvmvapic.img
pc-bios/s390-ccw/s390-ccw.elf
pc-bios/s390-ccw/s390-ccw.img
.stgit-*
cscope.*
tags

.gitmodules (3 changed lines)

@@ -13,9 +13,6 @@
[submodule "roms/openbios"]
path = roms/openbios
url = git://git.qemu-project.org/openbios.git
[submodule "roms/openhackware"]
path = roms/openhackware
url = git://git.qemu-project.org/openhackware.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = git://github.com/rth7680/qemu-palcode.git

.travis.yml

@@ -4,12 +4,6 @@ python:
compiler:
- gcc
- clang
notifications:
irc:
channels:
- "irc.oftc.net#qemu"
on_success: change
on_failure: always
env:
global:
- TEST_CMD="make check"
@@ -20,23 +14,22 @@ env:
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user
- TARGETS=aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu
- TARGETS=i386-softmmu,x86_64-softmmu
- TARGETS=lm32-softmmu
- TARGETS=m68k-softmmu
- TARGETS=microblaze-softmmu,microblazeel-softmmu
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=moxie-softmmu
- TARGETS=or32-softmmu,
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu
- TARGETS=s390x-softmmu
- TARGETS=sh4-softmmu,sh4eb-softmmu
- TARGETS=sparc-softmmu,sparc64-softmmu
- TARGETS=unicore32-softmmu
- TARGETS=xtensa-softmmu,xtensaeb-softmmu
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user
- TARGETS=cris-softmmu
- TARGETS=i386-softmmu,x86_64-softmmu
- TARGETS=lm32-softmmu
- TARGETS=m68k-softmmu
- TARGETS=microblaze-softmmu,microblazeel-softmmu
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=moxie-softmmu
- TARGETS=or32-softmmu,
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu
- TARGETS=s390x-softmmu
- TARGETS=sh4-softmmu,sh4eb-softmmu
- TARGETS=sparc-softmmu,sparc64-softmmu
- TARGETS=unicore32-softmmu
- TARGETS=xtensa-softmmu,xtensaeb-softmmu
before_install:
- git submodule update --init --recursive
- sudo apt-get update -qq
@@ -52,10 +45,6 @@ matrix:
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
@@ -75,7 +64,8 @@ matrix:
EXTRA_CONFIG="--enable-trace-backend=ftrace"
TEST_CMD=""
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backend=ust"
compiler: gcc
# This disabled make check for the ftrace backend which needs more setting up
# Currently broken on 12.04 due to mis-packaged liburcu and changed API, will be pulled.
#- env: TARGETS=i386-softmmu,x86_64-softmmu
# EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
# EXTRA_CONFIG="--enable-trace-backend=ust"

MAINTAINERS

@@ -158,6 +158,7 @@ Guest CPU Cores (KVM):
----------------------
Overall
M: Gleb Natapov <gleb@redhat.com>
M: Paolo Bonzini <pbonzini@redhat.com>
L: kvm@vger.kernel.org
S: Supported
@@ -175,14 +176,12 @@ S: Maintained
F: target-ppc/kvm.c
S390
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: target-s390x/kvm.c
F: hw/intc/s390_flic.[hc]
X86
M: Gleb Natapov <gleb@redhat.com>
M: Marcelo Tosatti <mtosatti@redhat.com>
L: kvm@vger.kernel.org
S: Supported
@@ -220,13 +219,6 @@ F: *win32*
ARM Machines
------------
Allwinner-a10
M: Li Guang <lig.fnst@cn.fujitsu.com>
S: Maintained
F: hw/*/allwinner-a10*
F: include/hw/*/allwinner-a10*
F: hw/arm/cubieboard.c
Exynos
M: Evgeny Voevodin <e.voevodin@samsung.com>
M: Maksim Kozlov <m.kozlov@samsung.com>
@@ -241,12 +233,6 @@ S: Supported
F: hw/arm/highbank.c
F: hw/net/xgmac.c
Canon DIGIC
M: Antony Pavlov <antonynpavlov@gmail.com>
S: Maintained
F: include/hw/arm/digic.h
F: hw/*/digic*
Gumstix
M: qemu-devel@nongnu.org
S: Orphan
@@ -496,13 +482,10 @@ F: hw/s390x/s390-*.c
S390 Virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Alexander Graf <agraf@suse.de>
S: Supported
F: hw/s390x/s390-virtio-ccw.c
F: hw/s390x/css.[hc]
F: hw/s390x/sclp*.[hc]
F: hw/s390x/ipl*.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
UniCore32 Machines
@@ -517,23 +500,9 @@ X86 Machines
------------
PC
M: Anthony Liguori <aliguori@amazon.com>
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: include/hw/i386/
F: hw/i386/
F: hw/pci-host/piix.c
F: hw/pci-host/q35.c
F: hw/pci-host/pam.c
F: include/hw/pci-host/q35.h
F: include/hw/pci-host/pam.h
F: hw/isa/piix4.c
F: hw/isa/lpc_ich9.c
F: hw/i2c/smbus_ich9.c
F: hw/acpi/piix4.c
F: hw/acpi/ich9.c
F: include/hw/acpi/ich9.h
F: include/hw/acpi/piix.h
F: hw/i386/pc.[ch]
F: hw/i386/pc_piix.c
Xtensa Machines
---------------
@@ -614,7 +583,6 @@ F: hw/*/*vhost*
virtio
M: Anthony Liguori <aliguori@amazon.com>
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: hw/*/virtio*
@@ -633,7 +601,6 @@ F: hw/block/virtio-blk.c
virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
S: Supported
F: hw/s390x/virtio-ccw.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
@@ -720,7 +687,6 @@ F: ui/
Cocoa graphics
M: Andreas Färber <andreas.faerber@web.de>
M: Peter Maydell <peter.maydell@linaro.org>
S: Odd Fixes
F: ui/cocoa.m
@@ -731,7 +697,7 @@ F: vl.c
Human Monitor (HMP)
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
S: Supported
F: monitor.c
F: hmp.c
F: hmp-commands.hx
@@ -744,14 +710,6 @@ S: Maintained
F: net/
T: git git://github.com/stefanha/qemu.git net
Netmap network backend
M: Luigi Rizzo <rizzo@iet.unipi.it>
M: Giuseppe Lettieri <g.lettieri@iet.unipi.it>
M: Vincenzo Maffione <v.maffione@gmail.com>
W: http://info.iet.unipi.it/~luigi/netmap/
S: Maintained
F: net/netmap.c
Network Block Device (NBD)
M: Paolo Bonzini <pbonzini@redhat.com>
S: Odd Fixes
@@ -763,7 +721,7 @@ T: git git://github.com/bonzini/qemu.git nbd-next
QAPI
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Michael Roth <mdroth@linux.vnet.ibm.com>
S: Maintained
S: Supported
F: qapi/
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
@@ -777,7 +735,7 @@ T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QMP
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
S: Supported
F: qmp.c
F: monitor.c
F: qmp-commands.hx
@@ -921,7 +879,6 @@ F: block/rbd.c
Sheepdog
M: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
M: Liu Yuan <namei.unix@gmail.com>
L: sheepdog@lists.wpkg.org
S: Supported
F: block/sheepdog.c
@@ -942,11 +899,6 @@ M: Peter Lieven <pl@kamp.de>
S: Supported
F: block/iscsi.c
NFS
M: Peter Lieven <pl@kamp.de>
S: Maintained
F: block/nfs.c
SSH
M: Richard W.M. Jones <rjones@redhat.com>
S: Supported

Makefile

@@ -57,11 +57,6 @@ GENERATED_HEADERS += trace/generated-tracers-dtrace.h
endif
GENERATED_SOURCES += trace/generated-tracers.c
ifeq ($(TRACE_BACKEND),ust)
GENERATED_HEADERS += trace/generated-ust-provider.h
GENERATED_SOURCES += trace/generated-ust.c
endif
# Don't try to regenerate Makefile or configure
# We don't generate any of them
Makefile: ;
@@ -127,29 +122,13 @@ defconfig:
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
ifeq ($(CONFIG_SMARTCARD_NSS),y)
include $(SRC_PATH)/libcacard/Makefile
endif
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all modules
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
@@ -159,7 +138,6 @@ qemu-options.def: $(SRC_PATH)/qemu-options.hx
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): $(block-obj-y)
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
@@ -209,9 +187,6 @@ Makefile: $(version-obj-y) $(version-lobj-y)
libqemustub.a: $(stub-obj-y)
libqemuutil.a: $(util-obj-y) qapi-types.o qapi-visit.o
block-modules = $(foreach o,$(block-obj-m),"$(basename $(subst /,-,$o))",) NULL
util/module.o-cflags = -D'CONFIG_BLOCK_MODULES=$(block-modules)'
######################################################################
qemu-img.o: qemu-img-cmds.h
@@ -265,7 +240,8 @@ clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
find . -name '*.[oda]' -type f -exec rm -f {} +
find . -name '*.l[oa]' -type f -exec rm -f {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
@@ -314,10 +290,10 @@ common de-ch es fo fr-ca hu ja mk nl-be pt sl tr \
bepo cz
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom \
@@ -373,12 +349,6 @@ install-datadir install-localstatedir
ifneq ($(TOOLS),)
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(CONFIG_MODULES),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_moddir)"
for s in $(patsubst %.mo,%$(DSOSUF),$(modules-m)); do \
$(INSTALL_PROG) $(STRIP_OPT) $$s "$(DESTDIR)$(qemu_moddir)/$${s//\//-}"; \
done
endif
ifneq ($(HELPERS-y),)
$(INSTALL_DIR) "$(DESTDIR)$(libexecdir)"
$(INSTALL_PROG) $(STRIP_OPT) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
@@ -396,7 +366,7 @@ endif
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
done
for d in $(TARGET_DIRS); do \
$(MAKE) $(SUBDIR_MAKEFLAGS) TARGET_DIR=$$d/ -C $$d $@ || exit 1 ; \
$(MAKE) -C $$d $@ || exit 1 ; \
done
# various test targets

Makefile.objs

@@ -19,8 +19,11 @@ block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
block-obj-m = block/
ifeq ($(CONFIG_VIRTIO)$(CONFIG_VIRTFS)$(CONFIG_PCI),yyy)
# Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
# only pull in the actual virtio-9p device if we also enabled virtio.
CONFIG_REALLY_VIRTFS=y
endif
######################################################################
# smartcard
@@ -38,9 +41,9 @@ libcacard-y += libcacard/vcardt.o
# single QEMU executable should support all CPUs and machines.
ifeq ($(CONFIG_SOFTMMU),y)
common-obj-y = blockdev.o blockdev-nbd.o block/
common-obj-y += iothread.o
common-obj-y = $(block-obj-y) blockdev.o blockdev-nbd.o block/
common-obj-y += net/
common-obj-y += readline.o
common-obj-y += qdev-monitor.o device-hotplug.o
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
@@ -48,8 +51,6 @@ common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration.o migration-tcp.o
common-obj-y += vmstate.o
common-obj-y += qemu-file.o
common-obj-$(CONFIG_RDMA) += migration-rdma.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += block-migration.o
@@ -109,3 +110,18 @@ version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/ qapi-types.o qapi-visit.o
qga-vss-dll-obj-y = qga/
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
QEMU_CFLAGS+=$(GLIB_CFLAGS)
nested-vars += \
stub-obj-y \
util-obj-y \
qga-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
common-obj-y
dummy := $(call unnest-vars)


@@ -130,6 +130,8 @@ else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
@@ -137,26 +139,13 @@ endif # CONFIG_SOFTMMU
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
dummy := $(call unnest-vars,,obj-y)
nested-vars += obj-y
# we are making another call to unnest-vars with different vars, protect obj-y,
# it can be overridden in subdir Makefile.objs
obj-y-save := $(obj-y)
block-obj-y :=
common-obj-y :=
# This resolves all nested paths, so it must come last
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
# Now restore obj-y
obj-y := $(obj-y-save)
all-obj-y = $(obj-y) $(common-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
all-obj-y = $(obj-y)
all-obj-y += $(addprefix ../, $(common-obj-y))
ifndef CONFIG_HAIKU
LIBS+=-lm


@@ -1 +1 @@
1.7.90
1.7.1


@@ -217,6 +217,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
/* early return if we only have the aio_notify() fd */
if (ctx->pollfds->len == 1) {
return progress;
}
/* wait until next event */
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,


@@ -161,6 +161,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
/* early return if we only have the aio_notify() fd */
if (count == 1) {
return progress;
}
/* wait until next event */
while (count > 0) {
int ret;


@@ -48,9 +48,7 @@
#include "qmp-commands.h"
#include "trace.h"
#include "exec/cpu-all.h"
#include "exec/ram_addr.h"
#include "hw/acpi/acpi.h"
#include "qemu/host-utils.h"
#ifdef DEBUG_ARCH_INIT
#define DPRINTF(fmt, ...) \
@@ -122,6 +120,7 @@ static void check_guest_throttling(void);
#define RAM_SAVE_FLAG_XBZRLE 0x40
/* 0x80 is reserved in migration.h start with 0x100 next */
static struct defconfig_file {
const char *filename;
/* Indicates it is a user config file (disabled by -no-user-config) */
@@ -132,7 +131,6 @@ static struct defconfig_file {
{ NULL }, /* end of list */
};
static const uint8_t ZERO_TARGET_PAGE[TARGET_PAGE_SIZE];
int qemu_read_default_config_files(bool userconfig)
{
@@ -164,63 +162,24 @@ static struct {
uint8_t *encoded_buf;
/* buffer for storing page content */
uint8_t *current_buf;
/* Cache for XBZRLE, Protected by lock. */
/* buffer used for XBZRLE decoding */
uint8_t *decoded_buf;
/* Cache for XBZRLE */
PageCache *cache;
QemuMutex lock;
} XBZRLE = {
.encoded_buf = NULL,
.current_buf = NULL,
.decoded_buf = NULL,
.cache = NULL,
};
/* buffer used for XBZRLE decoding */
static uint8_t *xbzrle_decoded_buf;
static void XBZRLE_cache_lock(void)
{
if (migrate_use_xbzrle())
qemu_mutex_lock(&XBZRLE.lock);
}
static void XBZRLE_cache_unlock(void)
{
if (migrate_use_xbzrle())
qemu_mutex_unlock(&XBZRLE.lock);
}
int64_t xbzrle_cache_resize(int64_t new_size)
{
PageCache *new_cache, *cache_to_free;
if (new_size < TARGET_PAGE_SIZE) {
return -1;
}
/* no need to lock, the current thread holds qemu big lock */
if (XBZRLE.cache != NULL) {
/* check XBZRLE.cache again later */
if (pow2floor(new_size) == migrate_xbzrle_cache_size()) {
return pow2floor(new_size);
}
new_cache = cache_init(new_size / TARGET_PAGE_SIZE,
TARGET_PAGE_SIZE);
if (!new_cache) {
DPRINTF("Error creating cache\n");
return -1;
}
XBZRLE_cache_lock();
/* the XBZRLE.cache may have been destroyed, check it again */
if (XBZRLE.cache != NULL) {
cache_to_free = XBZRLE.cache;
XBZRLE.cache = new_cache;
} else {
cache_to_free = new_cache;
}
XBZRLE_cache_unlock();
cache_fini(cache_to_free);
return cache_resize(XBZRLE.cache, new_size / TARGET_PAGE_SIZE) *
TARGET_PAGE_SIZE;
}
return pow2floor(new_size);
}
@@ -310,34 +269,6 @@ static size_t save_block_hdr(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
return size;
}
/* This is the last block that we have visited searching for dirty pages
*/
static RAMBlock *last_seen_block;
/* This is the last block from where we have sent data */
static RAMBlock *last_sent_block;
static ram_addr_t last_offset;
static unsigned long *migration_bitmap;
static uint64_t migration_dirty_pages;
static uint32_t last_version;
static bool ram_bulk_stage;
/* Update the xbzrle cache to reflect a page that's been sent as all 0.
* The important thing is that a stale (not-yet-0'd) page be replaced
* by the new data.
* As a bonus, if the page wasn't in the cache it gets added so that
* when a small write is made into the 0'd page it gets XBZRLE sent
*/
static void xbzrle_cache_zero_page(ram_addr_t current_addr)
{
if (ram_bulk_stage || !migrate_use_xbzrle()) {
return;
}
/* We don't care if this fails to allocate a new cache page
* as long as it updated an old one */
cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE);
}
#define ENCODING_FLAG_XBZRLE 0x1
static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
@@ -349,9 +280,7 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
if (!cache_is_cached(XBZRLE.cache, current_addr)) {
if (!last_stage) {
if (cache_insert(XBZRLE.cache, current_addr, current_data) == -1) {
return -1;
}
cache_insert(XBZRLE.cache, current_addr, current_data);
}
acct_info.xbzrle_cache_miss++;
return -1;
@@ -394,6 +323,18 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
return bytes_sent;
}
/* This is the last block that we have visited searching for dirty pages
*/
static RAMBlock *last_seen_block;
/* This is the last block from where we have sent data */
static RAMBlock *last_sent_block;
static ram_addr_t last_offset;
static unsigned long *migration_bitmap;
static uint64_t migration_dirty_pages;
static uint32_t last_version;
static bool ram_bulk_stage;
static inline
ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
ram_addr_t start)
@@ -418,10 +359,11 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
return (next - base) << TARGET_PAGE_BITS;
}
static inline bool migration_bitmap_set_dirty(ram_addr_t addr)
static inline bool migration_bitmap_set_dirty(MemoryRegion *mr,
ram_addr_t offset)
{
bool ret;
int nr = addr >> TARGET_PAGE_BITS;
int nr = (mr->ram_addr + offset) >> TARGET_PAGE_BITS;
ret = test_and_set_bit(nr, migration_bitmap);
@@ -431,47 +373,12 @@ static inline bool migration_bitmap_set_dirty(ram_addr_t addr)
return ret;
}
static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
{
ram_addr_t addr;
unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
/* start address is aligned at the start of a word? */
if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
int k;
int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
unsigned long *src = ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION];
for (k = page; k < page + nr; k++) {
if (src[k]) {
unsigned long new_dirty;
new_dirty = ~migration_bitmap[k];
migration_bitmap[k] |= src[k];
new_dirty &= src[k];
migration_dirty_pages += ctpopl(new_dirty);
src[k] = 0;
}
}
} else {
for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
if (cpu_physical_memory_get_dirty(start + addr,
TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
cpu_physical_memory_reset_dirty(start + addr,
TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION);
migration_bitmap_set_dirty(start + addr);
}
}
}
}
/* Needs iothread lock! */
static void migration_bitmap_sync(void)
{
RAMBlock *block;
ram_addr_t addr;
uint64_t num_dirty_pages_init = migration_dirty_pages;
MigrationState *s = migrate_get_current();
static int64_t start_time;
@@ -492,7 +399,13 @@ static void migration_bitmap_sync(void)
address_space_sync_dirty_bitmap(&address_space_memory);
QTAILQ_FOREACH(block, &ram_list.blocks, next) {
migration_bitmap_sync_range(block->mr->ram_addr, block->length);
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (memory_region_test_and_clear_dirty(block->mr,
addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
migration_bitmap_set_dirty(block->mr, addr);
}
}
}
trace_migration_bitmap_sync_end(migration_dirty_pages
- num_dirty_pages_init);
@@ -565,7 +478,6 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
} else {
int ret;
uint8_t *p;
bool send_async = true;
int cont = (block == last_sent_block) ?
RAM_SAVE_FLAG_CONTINUE : 0;
@@ -576,9 +488,6 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
ret = ram_control_save_page(f, block->offset,
offset, TARGET_PAGE_SIZE, &bytes_sent);
XBZRLE_cache_lock();
current_addr = block->offset + offset;
if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
if (ret != RAM_SAVE_CONTROL_DELAYED) {
if (bytes_sent > 0) {
@@ -593,40 +502,23 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
RAM_SAVE_FLAG_COMPRESS);
qemu_put_byte(f, 0);
bytes_sent++;
/* Must let xbzrle know, otherwise a previous (now 0'd) cached
* page would be stale
*/
xbzrle_cache_zero_page(current_addr);
} else if (!ram_bulk_stage && migrate_use_xbzrle()) {
current_addr = block->offset + offset;
bytes_sent = save_xbzrle_page(f, p, current_addr, block,
offset, cont, last_stage);
if (!last_stage) {
/* We must send exactly what's in the xbzrle cache
* even if the page wasn't xbzrle compressed, so that
* it's right next time.
*/
p = get_cached_data(XBZRLE.cache, current_addr);
/* Can't send this cached data async, since the cache page
* might get updated before it gets to the wire
*/
send_async = false;
}
}
/* XBZRLE overflow or normal page */
if (bytes_sent == -1) {
bytes_sent = save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_PAGE);
if (send_async) {
qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE);
} else {
qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
}
qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE);
bytes_sent += TARGET_PAGE_SIZE;
acct_info.norm_pages++;
}
XBZRLE_cache_unlock();
/* if page is unmodified, continue to the next */
if (bytes_sent > 0) {
last_sent_block = block;
@@ -680,12 +572,6 @@ uint64_t ram_bytes_total(void)
return total;
}
void free_xbzrle_decoded_buf(void)
{
g_free(xbzrle_decoded_buf);
xbzrle_decoded_buf = NULL;
}
static void migration_end(void)
{
if (migration_bitmap) {
@@ -694,17 +580,14 @@ static void migration_end(void)
migration_bitmap = NULL;
}
XBZRLE_cache_lock();
if (XBZRLE.cache) {
cache_fini(XBZRLE.cache);
g_free(XBZRLE.cache);
g_free(XBZRLE.encoded_buf);
g_free(XBZRLE.current_buf);
g_free(XBZRLE.decoded_buf);
XBZRLE.cache = NULL;
XBZRLE.encoded_buf = NULL;
XBZRLE.current_buf = NULL;
}
XBZRLE_cache_unlock();
}
static void ram_migration_cancel(void *opaque)
@@ -735,33 +618,15 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
dirty_rate_high_cnt = 0;
if (migrate_use_xbzrle()) {
qemu_mutex_lock_iothread();
XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
TARGET_PAGE_SIZE,
TARGET_PAGE_SIZE);
if (!XBZRLE.cache) {
qemu_mutex_unlock_iothread();
DPRINTF("Error creating cache\n");
return -1;
}
qemu_mutex_init(&XBZRLE.lock);
qemu_mutex_unlock_iothread();
/* We prefer not to abort if there is no memory */
XBZRLE.encoded_buf = g_try_malloc0(TARGET_PAGE_SIZE);
if (!XBZRLE.encoded_buf) {
DPRINTF("Error allocating encoded_buf\n");
return -1;
}
XBZRLE.current_buf = g_try_malloc(TARGET_PAGE_SIZE);
if (!XBZRLE.current_buf) {
DPRINTF("Error allocating current_buf\n");
g_free(XBZRLE.encoded_buf);
XBZRLE.encoded_buf = NULL;
return -1;
}
XBZRLE.encoded_buf = g_malloc0(TARGET_PAGE_SIZE);
XBZRLE.current_buf = g_malloc(TARGET_PAGE_SIZE);
acct_clear();
}
@@ -912,8 +777,8 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
unsigned int xh_len;
int xh_flags;
if (!xbzrle_decoded_buf) {
xbzrle_decoded_buf = g_malloc(TARGET_PAGE_SIZE);
if (!XBZRLE.decoded_buf) {
XBZRLE.decoded_buf = g_malloc(TARGET_PAGE_SIZE);
}
/* extract RLE header */
@@ -930,10 +795,10 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
return -1;
}
/* load data and decode */
qemu_get_buffer(f, xbzrle_decoded_buf, xh_len);
qemu_get_buffer(f, XBZRLE.decoded_buf, xh_len);
/* decode RLE */
ret = xbzrle_decode_buffer(xbzrle_decoded_buf, xh_len, host,
ret = xbzrle_decode_buffer(XBZRLE.decoded_buf, xh_len, host,
TARGET_PAGE_SIZE);
if (ret == -1) {
fprintf(stderr, "Failed to load XBZRLE page - decode error!\n");

async.c

@@ -214,7 +214,6 @@ aio_ctx_finalize(GSource *source)
thread_pool_free(ctx->thread_pool);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
@@ -251,12 +250,6 @@ static void aio_timerlist_notify(void *opaque)
aio_notify(opaque);
}
static void aio_rfifolock_cb(void *opaque)
{
/* Kick owner thread in case they are blocked in aio_poll() */
aio_notify(opaque);
}
AioContext *aio_context_new(void)
{
AioContext *ctx;
@@ -264,7 +257,6 @@ AioContext *aio_context_new(void)
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
event_notifier_init(&ctx->notifier, false);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
@@ -283,13 +275,3 @@ void aio_context_unref(AioContext *ctx)
{
g_source_unref(&ctx->source);
}
void aio_context_acquire(AioContext *ctx)
{
rfifolock_lock(&ctx->lock);
}
void aio_context_release(AioContext *ctx)
{
rfifolock_unlock(&ctx->lock);
}


@@ -95,7 +95,7 @@ static struct {
}
},
.period = { .hertz = 100 },
.period = { .hertz = 250 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,


@@ -547,11 +547,11 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
ss.rate = as->freq;
/*
* qemu audio tick runs at 100 Hz (by default), so processing
* data chunks worth 10 ms of sound should be a good fit.
* qemu audio tick runs at 250 Hz (by default), so processing
* data chunks worth 4 ms of sound should be a good fit.
*/
ba.tlength = pa_usec_to_bytes (10 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (5 * 1000, &ss);
ba.tlength = pa_usec_to_bytes (4 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (2 * 1000, &ss);
ba.maxlength = -1;
ba.prebuf = -1;
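
For reference, a rough stand-alone calculation of what the 4 ms tlength and 2 ms minreq figures in the hunk above amount to in bytes; this is an assumption for illustration only (usec_to_bytes and the 44.1 kHz S16 stereo figures are made up, not taken from the driver), while pa_usec_to_bytes() performs the equivalent conversion from the negotiated sample spec inside PulseAudio.

#include <stdio.h>
#include <stdint.h>

/* bytes = microseconds * (rate * channels * bytes-per-sample) / 1e6 */
static uint64_t usec_to_bytes(uint64_t usec, unsigned rate,
                              unsigned channels, unsigned sample_bytes)
{
    return usec * rate * channels * sample_bytes / 1000000;
}

int main(void)
{
    unsigned rate = 44100, channels = 2, sample_bytes = 2;  /* S16 stereo */
    /* A 250 Hz audio timer gives 1000 ms / 250 = 4 ms per tick. */
    printf("tlength (4 ms): %llu bytes\n",
           (unsigned long long)usec_to_bytes(4 * 1000, rate, channels, sample_bytes));
    printf("minreq  (2 ms): %llu bytes\n",
           (unsigned long long)usec_to_bytes(2 * 1000, rate, channels, sample_bytes));
    return 0;
}

Keeping minreq at half of tlength means the server requests more data well before the target buffer drains.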


@@ -25,17 +25,8 @@
#include "audio.h"
#include "audio_int.h"
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
#define LINE_OUT_SAMPLES (480 * 4)
#else
#define LINE_OUT_SAMPLES (256 * 4)
#endif
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
#define LINE_IN_SAMPLES (480 * 4)
#else
#define LINE_IN_SAMPLES (256 * 4)
#endif
#define LINE_IN_SAMPLES 1024
#define LINE_OUT_SAMPLES 1024
typedef struct SpiceRateCtl {
int64_t start_ticks;
@@ -120,11 +111,7 @@ static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
settings.freq = spice_server_get_best_playback_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_PLAYBACK_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_PLAYBACK_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
@@ -135,9 +122,6 @@ static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
out->sin.base.sif = &playback_sif.base;
qemu_spice_add_interface (&out->sin.base);
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
spice_server_set_playback_rate(&out->sin, settings.freq);
#endif
return 0;
}
@@ -248,11 +232,7 @@ static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
settings.freq = spice_server_get_best_record_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_RECORD_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_RECORD_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
@@ -263,9 +243,6 @@ static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
in->sin.base.sif = &record_sif.base;
qemu_spice_add_interface (&in->sin.base);
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
spice_server_set_record_rate(&in->sin, settings.freq);
#endif
return 0;
}


@@ -566,7 +566,7 @@ CharDriverState *chr_baum_init(void)
BaumDriverState *baum;
CharDriverState *chr;
brlapi_handle_t *handle;
#if defined(CONFIG_SDL) && SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
#ifdef CONFIG_SDL
SDL_SysWMinfo info;
#endif
int tty;
@@ -595,7 +595,7 @@ CharDriverState *chr_baum_init(void)
goto fail;
}
#if defined(CONFIG_SDL) && SDL_COMPILEDVERSION < SDL_VERSIONNUM(2, 0, 0)
#ifdef CONFIG_SDL
memset(&info, 0, sizeof(info));
SDL_VERSION(&info.version);
if (SDL_GetWMInfo(&info))


@@ -123,15 +123,15 @@ static void rng_random_init(Object *obj)
NULL);
s->filename = g_strdup("/dev/random");
s->fd = -1;
}
static void rng_random_finalize(Object *obj)
{
RndRandom *s = RNG_RANDOM(obj);
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
qemu_close(s->fd);
}


@@ -12,7 +12,6 @@
#include "sysemu/rng.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
void rng_backend_request_entropy(RngBackend *s, size_t size,
EntropyReceiveFunc *receive_entropy,
@@ -41,9 +40,9 @@ static bool rng_backend_prop_get_opened(Object *obj, Error **errp)
return s->opened;
}
static void rng_backend_complete(UserCreatable *uc, Error **errp)
void rng_backend_open(RngBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(uc), true, "opened", errp);
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
@@ -77,25 +76,13 @@ static void rng_backend_init(Object *obj)
NULL);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = rng_backend_complete;
}
static const TypeInfo rng_backend_info = {
.name = TYPE_RNG_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void register_types(void)


@@ -58,7 +58,6 @@ typedef struct BlkMigDevState {
/* Protected by block migration lock. */
unsigned long *aio_bitmap;
int64_t completed_sectors;
BdrvDirtyBitmap *dirty_bitmap;
} BlkMigDevState;
typedef struct BlkMigBlock {
@@ -310,21 +309,12 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
/* Called with iothread lock taken. */
static void set_dirty_tracking(void)
static void set_dirty_tracking(int enable)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE);
}
}
static void unset_dirty_tracking(void)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
bdrv_set_dirty_tracking(bmds->bs, enable ? BLOCK_SIZE : 0);
}
}
@@ -442,7 +432,7 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
} else {
blk_mig_unlock();
}
if (bdrv_get_dirty(bmds->bs, bmds->dirty_bitmap, sector)) {
if (bdrv_get_dirty(bmds->bs, sector)) {
if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - sector;
@@ -564,7 +554,7 @@ static int64_t get_remaining_dirty(void)
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
dirty += bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap);
dirty += bdrv_get_dirty_count(bmds->bs);
}
return dirty << BDRV_SECTOR_BITS;
@@ -579,7 +569,7 @@ static void blk_mig_cleanup(void)
bdrv_drain_all();
unset_dirty_tracking();
set_dirty_tracking(0);
blk_mig_lock();
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
@@ -614,7 +604,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
init_blk_migration(f);
/* start tracking dirty blocks */
set_dirty_tracking();
set_dirty_tracking(1);
qemu_mutex_unlock_iothread();
ret = flush_blks(f);
@@ -790,8 +780,7 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
}
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
ret = bdrv_write_zeroes(bs, addr, nr_sectors,
BDRV_REQ_MAY_UNMAP);
ret = bdrv_write_zeroes(bs, addr, nr_sectors);
} else {
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);

block.c

File diff suppressed because it is too large.


@@ -3,7 +3,6 @@ block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-c
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
@@ -11,9 +10,8 @@ block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
ifeq ($(CONFIG_POSIX),y)
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-y += nbd.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
@@ -25,15 +23,4 @@ common-obj-y += commit.o
common-obj-y += mirror.o
common-obj-y += backup.o
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
curl.o-libs := $(CURL_LIBS)
rbd.o-cflags := $(RBD_CFLAGS)
rbd.o-libs := $(RBD_LIBS)
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio
$(obj)/curl.o: QEMU_CFLAGS+=$(CURL_CFLAGS)


@@ -138,8 +138,7 @@ static int coroutine_fn backup_do_cow(BlockDriverState *bs,
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
start * BACKUP_SECTORS_PER_CLUSTER, n);
} else {
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
@@ -181,13 +180,8 @@ static int coroutine_fn backup_before_write_notify(
void *opaque)
{
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
return backup_do_cow(req->bs, req->sector_num, req->nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)


@@ -186,14 +186,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
@@ -279,33 +271,19 @@ static void remove_rule(BlkdebugRule *rule)
g_free(rule);
}
static int read_config(BDRVBlkdebugState *s, const char *filename,
QDict *options, Error **errp)
static int read_config(BDRVBlkdebugState *s, const char *filename)
{
FILE *f = NULL;
FILE *f;
int ret;
struct add_rule_data d;
Error *local_err = NULL;
if (filename) {
f = fopen(filename, "r");
if (f == NULL) {
error_setg_errno(errp, errno, "Could not read blkdebug config file");
return -errno;
}
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
error_setg(errp, "Could not parse blkdebug config file");
ret = -EINVAL;
goto fail;
}
f = fopen(filename, "r");
if (f == NULL) {
return -errno;
}
qemu_config_parse_qdict(options, config_groups, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
goto fail;
}
@@ -320,9 +298,7 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
fail:
qemu_opts_reset(&inject_error_opts);
qemu_opts_reset(&set_state_opts);
if (f) {
fclose(f);
}
fclose(f);
return ret;
}
@@ -334,9 +310,7 @@ static void blkdebug_parse_filename(const char *filename, QDict *options,
/* Parse the blkdebug: prefix */
if (!strstart(filename, "blkdebug:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
error_setg(errp, "File name string must start with 'blkdebug:'");
return;
}
@@ -372,11 +346,6 @@ static QemuOptsList runtime_opts = {
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{
.name = "align",
.type = QEMU_OPT_SIZE,
.help = "Required alignment in bytes",
},
{ /* end of list */ }
},
};
@@ -387,53 +356,46 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
BDRVBlkdebugState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *config;
uint64_t align;
const char *filename, *config;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
goto fail;
}
/* Read rules from config file or command line options */
/* Read rules from config file */
config = qemu_opt_get(opts, "config");
ret = read_config(s, config, options, errp);
if (ret) {
goto out;
if (config) {
ret = read_config(s, config);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not read blkdebug config file");
goto fail;
}
}
/* Set initial state */
s->state = 1;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
filename = qemu_opt_get(opts, "x-image");
if (filename == NULL) {
error_setg(errp, "Could not retrieve image file name");
ret = -EINVAL;
goto fail;
}
/* Set request alignment */
align = qemu_opt_get_size(opts, "align", bs->request_alignment);
if (align > 0 && align < INT_MAX && !(align & (align - 1))) {
bs->request_alignment = align;
} else {
error_setg(errp, "Invalid alignment");
ret = -EINVAL;
goto fail_unref;
ret = bdrv_file_open(&bs->file, filename, NULL, flags, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
}
ret = 0;
goto out;
fail_unref:
bdrv_unref(bs->file);
out:
fail:
qemu_opts_del(opts);
return ret;
}
@@ -632,9 +594,9 @@ static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *next;
BlkdebugSuspendedReq *r;
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, next) {
QLIST_FOREACH(r, &s->suspended_reqs, next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
return 0;
@@ -643,31 +605,6 @@ static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
return -ENOENT;
}
static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *r_next;
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
remove_rule(rule);
ret = 0;
}
}
}
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, r_next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
ret = 0;
}
}
return ret;
}
static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
{
@@ -702,8 +639,6 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_debug_event = blkdebug_debug_event,
.bdrv_debug_breakpoint = blkdebug_debug_breakpoint,
.bdrv_debug_remove_breakpoint
= blkdebug_debug_remove_breakpoint,
.bdrv_debug_resume = blkdebug_debug_resume,
.bdrv_debug_is_suspended = blkdebug_debug_is_suspended,
};
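
The "align" validation in the blkdebug_open() hunk above is the usual single-bit test for powers of two; a tiny stand-alone sketch follows (illustrative only — is_valid_alignment is a made-up name, not a QEMU helper).

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* A value is a power of two exactly when it has a single bit set,
 * i.e. x > 0 and (x & (x - 1)) == 0. */
static bool is_valid_alignment(uint64_t align)
{
    return align > 0 && (align & (align - 1)) == 0;
}

int main(void)
{
    uint64_t samples[] = { 512, 4096, 0, 1000 };
    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        printf("%llu -> %s\n", (unsigned long long)samples[i],
               is_valid_alignment(samples[i]) ? "ok" : "invalid");
    }
    return 0;
}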


@@ -78,9 +78,7 @@ static void blkverify_parse_filename(const char *filename, QDict *options,
/* Parse the blkverify: prefix */
if (!strstart(filename, "blkverify:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
error_setg(errp, "File name string must start with 'blkverify:'");
return;
}
@@ -124,31 +122,44 @@ static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
BDRVBlkverifyState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *filename, *raw;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
/* Open the raw file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
/* Parse the raw image filename */
raw = qemu_opt_get(opts, "x-raw");
if (raw == NULL) {
error_setg(errp, "Could not retrieve raw image filename");
ret = -EINVAL;
goto fail;
}
ret = bdrv_file_open(&bs->file, raw, NULL, flags, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
}
/* Open the test file */
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", flags, false, &local_err);
filename = qemu_opt_get(opts, "x-image");
if (filename == NULL) {
error_setg(errp, "Could not retrieve test image filename");
ret = -EINVAL;
goto fail;
}
s->test_file = bdrv_new("");
ret = bdrv_open(s->test_file, filename, NULL, flags, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
bdrv_unref(s->test_file);
s->test_file = NULL;
goto fail;
}
@@ -173,6 +184,110 @@ static int64_t blkverify_getlength(BlockDriverState *bs)
return bdrv_getlength(s->test_file);
}
/**
* Check that I/O vector contents are identical
*
* @a: I/O vector
* @b: I/O vector
* @ret: Offset of the first mismatching byte, or -1 if the vectors match
*/
static ssize_t blkverify_iovec_compare(QEMUIOVector *a, QEMUIOVector *b)
{
int i;
ssize_t offset = 0;
assert(a->niov == b->niov);
for (i = 0; i < a->niov; i++) {
size_t len = 0;
uint8_t *p = (uint8_t *)a->iov[i].iov_base;
uint8_t *q = (uint8_t *)b->iov[i].iov_base;
assert(a->iov[i].iov_len == b->iov[i].iov_len);
while (len < a->iov[i].iov_len && *p++ == *q++) {
len++;
}
offset += len;
if (len != a->iov[i].iov_len) {
return offset;
}
}
return -1;
}
typedef struct {
int src_index;
struct iovec *src_iov;
void *dest_base;
} IOVectorSortElem;
static int sortelem_cmp_src_base(const void *a, const void *b)
{
const IOVectorSortElem *elem_a = a;
const IOVectorSortElem *elem_b = b;
/* Don't overflow */
if (elem_a->src_iov->iov_base < elem_b->src_iov->iov_base) {
return -1;
} else if (elem_a->src_iov->iov_base > elem_b->src_iov->iov_base) {
return 1;
} else {
return 0;
}
}
static int sortelem_cmp_src_index(const void *a, const void *b)
{
const IOVectorSortElem *elem_a = a;
const IOVectorSortElem *elem_b = b;
return elem_a->src_index - elem_b->src_index;
}
/**
* Copy contents of I/O vector
*
* The relative relationships of overlapping iovecs are preserved. This is
* necessary to ensure identical semantics in the cloned I/O vector.
*/
static void blkverify_iovec_clone(QEMUIOVector *dest, const QEMUIOVector *src,
void *buf)
{
IOVectorSortElem sortelems[src->niov];
void *last_end;
int i;
/* Sort source iovecs by base address */
for (i = 0; i < src->niov; i++) {
sortelems[i].src_index = i;
sortelems[i].src_iov = &src->iov[i];
}
qsort(sortelems, src->niov, sizeof(sortelems[0]), sortelem_cmp_src_base);
/* Allocate buffer space taking into account overlapping iovecs */
last_end = NULL;
for (i = 0; i < src->niov; i++) {
struct iovec *cur = sortelems[i].src_iov;
ptrdiff_t rewind = 0;
/* Detect overlap */
if (last_end && last_end > cur->iov_base) {
rewind = last_end - cur->iov_base;
}
sortelems[i].dest_base = buf - rewind;
buf += cur->iov_len - MIN(rewind, cur->iov_len);
last_end = MAX(cur->iov_base + cur->iov_len, last_end);
}
/* Sort by source iovec index and build destination iovec */
qsort(sortelems, src->niov, sizeof(sortelems[0]), sortelem_cmp_src_index);
for (i = 0; i < src->niov; i++) {
qemu_iovec_add(dest, sortelems[i].dest_base, src->iov[i].iov_len);
}
}
static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
@@ -236,7 +351,7 @@ static void blkverify_aio_cb(void *opaque, int ret)
static void blkverify_verify_readv(BlkverifyAIOCB *acb)
{
ssize_t offset = qemu_iovec_compare(acb->qiov, &acb->raw_qiov);
ssize_t offset = blkverify_iovec_compare(acb->qiov, &acb->raw_qiov);
if (offset != -1) {
blkverify_err(acb, "contents mismatch in sector %" PRId64,
acb->sector_num + (int64_t)(offset / BDRV_SECTOR_SIZE));
@@ -254,7 +369,7 @@ static BlockDriverAIOCB *blkverify_aio_readv(BlockDriverState *bs,
acb->verify = blkverify_verify_readv;
acb->buf = qemu_blockalign(bs->file, qiov->size);
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
blkverify_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
@@ -288,20 +403,6 @@ static BlockDriverAIOCB *blkverify_aio_flush(BlockDriverState *bs,
return bdrv_aio_flush(s->test_file, cb, opaque);
}
static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate)
{
BDRVBlkverifyState *s = bs->opaque;
bool perm = bdrv_recurse_is_first_non_filter(bs->file, candidate);
if (perm) {
return true;
}
return bdrv_recurse_is_first_non_filter(s->test_file, candidate);
}
static BlockDriver bdrv_blkverify = {
.format_name = "blkverify",
.protocol_name = "blkverify",
@@ -316,8 +417,7 @@ static BlockDriver bdrv_blkverify = {
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = blkverify_recurse_is_first_non_filter,
.bdrv_check_ext_snapshot = bdrv_check_ext_snapshot_forbidden,
};
static void bdrv_blkverify_init(void)


@@ -129,8 +129,7 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
strcmp(bochs.subtype, GROWING_TYPE) ||
((le32_to_cpu(bochs.version) != HEADER_VERSION) &&
(le32_to_cpu(bochs.version) != HEADER_V1))) {
error_setg(errp, "Image not in Bochs format");
return -EINVAL;
return -EMEDIUMTYPE;
}
if (le32_to_cpu(bochs.version) == HEADER_V1) {

View File

@@ -198,7 +198,13 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
return;
}
assert(top != bs);
/* Once we support top == active layer, remove this check */
if (top == bs) {
error_setg(errp,
"Top image as the active layer is currently unsupported");
return;
}
if (top == base) {
error_setg(errp, "Invalid files for merge: top and base are the same");
return;

View File

@@ -74,8 +74,7 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC) {
error_setg(errp, "Image not in COW format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
@@ -83,7 +82,7 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
char version[64];
snprintf(version, sizeof(version),
"COW version %d", cow_header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "cow", version);
ret = -ENOTSUP;
goto fail;
@@ -104,18 +103,40 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
static inline void cow_set_bits(uint8_t *bitmap, int start, int64_t nb_sectors)
/*
* XXX(hch): right now these functions are extremely inefficient.
* We should just read the whole bitmap we'll need in one go instead.
*/
static inline int cow_set_bit(BlockDriverState *bs, int64_t bitnum, bool *first)
{
int64_t bitnum = start, last = start + nb_sectors;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitnum + 8 <= last) {
bitmap[bitnum / 8] = 0xFF;
bitnum += 8;
continue;
}
bitmap[bitnum/8] |= (1 << (bitnum % 8));
bitnum++;
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
if (bitmap & (1 << (bitnum % 8))) {
return 0;
}
if (*first) {
ret = bdrv_flush(bs->file);
if (ret < 0) {
return ret;
}
*first = false;
}
bitmap |= (1 << (bitnum % 8));
ret = bdrv_pwrite(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return 0;
}
#define BITS_PER_BITMAP_SECTOR (512 * 8)
@@ -153,34 +174,18 @@ static int coroutine_fn cow_co_is_allocated(BlockDriverState *bs,
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int changed = 0, same = 0;
uint8_t bitmap[BDRV_SECTOR_SIZE];
int ret;
int changed;
do {
int ret;
uint8_t bitmap[BDRV_SECTOR_SIZE];
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
int sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
if (first) {
changed = cow_test_bit(bitnum, bitmap);
first = false;
}
same += cow_find_streak(bitmap, changed, bitnum, nb_sectors);
bitnum += sector_bits;
nb_sectors -= sector_bits;
offset += BDRV_SECTOR_SIZE;
} while (nb_sectors);
*num_same = same;
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
changed = cow_test_bit(bitnum, bitmap);
*num_same = cow_find_streak(bitmap, changed, bitnum, nb_sectors);
return changed;
}
@@ -199,52 +204,18 @@ static int64_t coroutine_fn cow_co_get_block_status(BlockDriverState *bs,
static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
int error = 0;
int i;
bool first = true;
int sector_bits;
for ( ; nb_sectors;
bitnum += sector_bits,
nb_sectors -= sector_bits,
offset += BDRV_SECTOR_SIZE) {
int ret, set;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
/* Skip over any already set bits */
set = cow_find_streak(bitmap, 1, bitnum, sector_bits);
bitnum += set;
sector_bits -= set;
nb_sectors -= set;
if (!sector_bits) {
continue;
}
if (first) {
ret = bdrv_flush(bs->file);
if (ret < 0) {
return ret;
}
first = false;
}
cow_set_bits(bitmap, bitnum, sector_bits);
ret = bdrv_pwrite(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
for (i = 0; i < nb_sectors; i++) {
error = cow_set_bit(bs, sector_num + i, &first);
if (error) {
break;
}
}
return 0;
return error;
}
static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
@@ -347,15 +318,15 @@ static int cow_create(const char *filename, QEMUOptionParameter *options,
ret = bdrv_create_file(filename, options, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qerror_report_err(local_err);
error_free(local_err);
return ret;
}
cow_bs = NULL;
ret = bdrv_open(&cow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
ret = bdrv_file_open(&cow_bs, filename, NULL, BDRV_O_RDWR, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qerror_report_err(local_err);
error_free(local_err);
return ret;
}
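
As a stand-alone illustration, assumed rather than copied from the driver (set_bits here is a made-up name mirroring the cow_set_bits() helper in the hunk above), setting a run of bits in a byte-addressed bitmap with the whole-byte fast path looks like this:

#include <stdio.h>
#include <stdint.h>

static void set_bits(uint8_t *bitmap, int64_t start, int64_t nb_bits)
{
    int64_t bitnum = start, last = start + nb_bits;
    while (bitnum < last) {
        if ((bitnum & 7) == 0 && bitnum + 8 <= last) {
            bitmap[bitnum / 8] = 0xFF;           /* whole byte at once */
            bitnum += 8;
            continue;
        }
        bitmap[bitnum / 8] |= 1 << (bitnum % 8); /* single bit */
        bitnum++;
    }
}

int main(void)
{
    uint8_t bitmap[4] = { 0 };
    set_bits(bitmap, 3, 14);                     /* bits 3..16 */
    for (int i = 0; i < 4; i++) {
        printf("%02x ", bitmap[i]);              /* f8 ff 01 00 */
    }
    printf("\n");
    return 0;
}

Batching whole bytes this way avoids the per-bit read-modify-write round trips that the XXX(hch) comment in the hunk calls out as inefficient.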


@@ -456,27 +456,30 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
static int inited = 0;
if (flags & BDRV_O_RDWR) {
error_setg(errp, "curl block device does not support writes");
qerror_report(ERROR_CLASS_GENERIC_ERROR,
"curl block device does not support writes");
return -EROFS;
}
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
goto out_noclean;
}
s->readahead_size = qemu_opt_get_size(opts, "readahead", READ_AHEAD_SIZE);
if ((s->readahead_size & 0x1ff) != 0) {
error_setg(errp, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512",
s->readahead_size);
fprintf(stderr, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512\n",
s->readahead_size);
goto out_noclean;
}
file = qemu_opt_get(opts, "url");
if (file == NULL) {
error_setg(errp, "curl block driver requires an 'url' option");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "curl block driver requires "
"an 'url' option");
goto out_noclean;
}

View File

@@ -3,26 +3,42 @@
*
* Copyright (C) 2012 Bharata B Rao <bharata@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
* Pipe handling mechanism in AIO implementation is derived from
* block/rbd.c. Hence,
*
* Copyright (C) 2010-2011 Christian Brunner <chb@muc.de>,
* Josh Durgin <josh.durgin@dreamhost.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include <glusterfs/api/glfs.h>
#include "block/block_int.h"
#include "qemu/sockets.h"
#include "qemu/uri.h"
typedef struct GlusterAIOCB {
BlockDriverAIOCB common;
int64_t size;
int ret;
bool *finished;
QEMUBH *bh;
Coroutine *coroutine;
} GlusterAIOCB;
typedef struct BDRVGlusterState {
struct glfs *glfs;
int fds[2];
struct glfs_fd *fd;
int event_reader_pos;
GlusterAIOCB *event_acb;
} BDRVGlusterState;
#define GLUSTER_FD_READ 0
#define GLUSTER_FD_WRITE 1
typedef struct GlusterConf {
char *server;
int port;
@@ -33,13 +49,11 @@ typedef struct GlusterConf {
static void qemu_gluster_gconf_free(GlusterConf *gconf)
{
if (gconf) {
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
static int parse_volume_options(GlusterConf *gconf, char *path)
@@ -117,7 +131,7 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
}
/* transport */
if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
if (!strcmp(uri->scheme, "gluster")) {
gconf->transport = g_strdup("tcp");
} else if (!strcmp(uri->scheme, "gluster+tcp")) {
gconf->transport = g_strdup("tcp");
@@ -153,7 +167,7 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
}
gconf->server = g_strdup(qp->p[0].value);
} else {
gconf->server = g_strdup(uri->server ? uri->server : "localhost");
gconf->server = g_strdup(uri->server);
gconf->port = uri->port;
}
@@ -165,8 +179,7 @@ out:
return ret;
}
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
Error **errp)
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename)
{
struct glfs *glfs = NULL;
int ret;
@@ -174,8 +187,8 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = qemu_gluster_parseuri(gconf, filename);
if (ret < 0) {
error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
error_report("Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
errno = -ret;
goto out;
}
@@ -202,11 +215,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = glfs_init(glfs);
if (ret) {
error_setg_errno(errp, errno,
"Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s", gconf->server,
gconf->port, gconf->volname, gconf->image,
gconf->transport);
error_report("Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s", gconf->server, gconf->port,
gconf->volname, gconf->image, gconf->transport);
goto out;
}
return glfs;
@@ -220,32 +231,46 @@ out:
return NULL;
}
static void qemu_gluster_complete_aio(void *opaque)
static void qemu_gluster_complete_aio(GlusterAIOCB *acb, BDRVGlusterState *s)
{
GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
int ret;
bool *finished = acb->finished;
BlockDriverCompletionFunc *cb = acb->common.cb;
void *opaque = acb->common.opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
qemu_coroutine_enter(acb->coroutine, NULL);
}
/*
* AIO callback routine called from GlusterFS thread.
*/
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
if (!ret || ret == acb->size) {
acb->ret = 0; /* Success */
} else if (ret < 0) {
acb->ret = ret; /* Read/Write failed */
if (!acb->ret || acb->ret == acb->size) {
ret = 0; /* Success */
} else if (acb->ret < 0) {
ret = acb->ret; /* Read/Write failed */
} else {
acb->ret = -EIO; /* Partial read/write - fail it */
ret = -EIO; /* Partial read/write - fail it */
}
acb->bh = qemu_bh_new(qemu_gluster_complete_aio, acb);
qemu_bh_schedule(acb->bh);
qemu_aio_release(acb);
cb(opaque, ret);
if (finished) {
*finished = true;
}
}
static void qemu_gluster_aio_event_reader(void *opaque)
{
BDRVGlusterState *s = opaque;
ssize_t ret;
do {
char *p = (char *)&s->event_acb;
ret = read(s->fds[GLUSTER_FD_READ], p + s->event_reader_pos,
sizeof(s->event_acb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_acb)) {
s->event_reader_pos = 0;
qemu_gluster_complete_aio(s->event_acb, s);
}
}
} while (ret < 0 && errno == EINTR);
}
/* TODO Convert to fine grained options */
@@ -262,57 +287,60 @@ static QemuOptsList runtime_opts = {
},
};
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
{
assert(open_flags != NULL);
*open_flags |= O_BINARY;
if (bdrv_flags & BDRV_O_RDWR) {
*open_flags |= O_RDWR;
} else {
*open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
*open_flags |= O_DIRECT;
}
}
static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
int bdrv_flags, Error **errp)
{
BDRVGlusterState *s = bs->opaque;
int open_flags = 0;
int open_flags = O_BINARY;
int ret = 0;
GlusterConf *gconf = g_malloc0(sizeof(GlusterConf));
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto out;
}
filename = qemu_opt_get(opts, "filename");
s->glfs = qemu_gluster_init(gconf, filename, errp);
s->glfs = qemu_gluster_init(gconf, filename);
if (!s->glfs) {
ret = -errno;
goto out;
}
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
if (bdrv_flags & BDRV_O_RDWR) {
open_flags |= O_RDWR;
} else {
open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
open_flags |= O_DIRECT;
}
s->fd = glfs_open(s->glfs, gconf->image, open_flags);
if (!s->fd) {
ret = -errno;
goto out;
}
ret = qemu_pipe(s->fds);
if (ret < 0) {
ret = -errno;
goto out;
}
fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
qemu_gluster_aio_event_reader, NULL, s);
out:
qemu_opts_del(opts);
qemu_gluster_gconf_free(gconf);
@@ -328,180 +356,24 @@ out:
return ret;
}
typedef struct BDRVGlusterReopenState {
struct glfs *glfs;
struct glfs_fd *fd;
} BDRVGlusterReopenState;
static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
int ret = 0;
BDRVGlusterReopenState *reop_s;
GlusterConf *gconf = NULL;
int open_flags = 0;
assert(state != NULL);
assert(state->bs != NULL);
state->opaque = g_malloc0(sizeof(BDRVGlusterReopenState));
reop_s = state->opaque;
qemu_gluster_parse_flags(state->flags, &open_flags);
gconf = g_malloc0(sizeof(GlusterConf));
reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
if (reop_s->glfs == NULL) {
ret = -errno;
goto exit;
}
reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
if (reop_s->fd == NULL) {
/* reops->glfs will be cleaned up in _abort */
ret = -errno;
goto exit;
}
exit:
/* state->opaque will be freed in either the _abort or _commit */
qemu_gluster_gconf_free(gconf);
return ret;
}
static void qemu_gluster_reopen_commit(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
BDRVGlusterState *s = state->bs->opaque;
/* close the old */
if (s->fd) {
glfs_close(s->fd);
}
if (s->glfs) {
glfs_fini(s->glfs);
}
/* use the newly opened image / connection */
s->fd = reop_s->fd;
s->glfs = reop_s->glfs;
g_free(state->opaque);
state->opaque = NULL;
return;
}
static void qemu_gluster_reopen_abort(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
if (reop_s == NULL) {
return;
}
if (reop_s->fd) {
glfs_close(reop_s->fd);
}
if (reop_s->glfs) {
glfs_fini(reop_s->glfs);
}
g_free(state->opaque);
state->opaque = NULL;
return;
}
#ifdef CONFIG_GLUSTERFS_ZEROFILL
static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
off_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static inline bool gluster_supports_zerofill(void)
{
return 1;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return glfs_zerofill(fd, offset, size);
}
#else
static inline bool gluster_supports_zerofill(void)
{
return 0;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return 0;
}
#endif
static int qemu_gluster_create(const char *filename,
QEMUOptionParameter *options, Error **errp)
{
struct glfs *glfs;
struct glfs_fd *fd;
int ret = 0;
int prealloc = 0;
int64_t total_size = 0;
GlusterConf *gconf = g_malloc0(sizeof(GlusterConf));
glfs = qemu_gluster_init(gconf, filename, errp);
glfs = qemu_gluster_init(gconf, filename);
if (!glfs) {
ret = -EINVAL;
ret = -errno;
goto out;
}
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
total_size = options->value.n / BDRV_SECTOR_SIZE;
} else if (!strcmp(options->name, BLOCK_OPT_PREALLOC)) {
if (!options->value.s || !strcmp(options->value.s, "off")) {
prealloc = 0;
} else if (!strcmp(options->value.s, "full") &&
gluster_supports_zerofill()) {
prealloc = 1;
} else {
error_setg(errp, "Invalid preallocation mode: '%s'"
" or GlusterFS doesn't support zerofill API",
options->value.s);
ret = -EINVAL;
goto out;
}
}
options++;
}
@@ -511,15 +383,9 @@ static int qemu_gluster_create(const char *filename,
if (!fd) {
ret = -errno;
} else {
if (!glfs_ftruncate(fd, total_size * BDRV_SECTOR_SIZE)) {
if (prealloc && qemu_gluster_zerofill(fd, 0,
total_size * BDRV_SECTOR_SIZE)) {
ret = -errno;
}
} else {
if (glfs_ftruncate(fd, total_size * BDRV_SECTOR_SIZE) != 0) {
ret = -errno;
}
if (glfs_close(fd) != 0) {
ret = -errno;
}
@@ -532,18 +398,58 @@ out:
return ret;
}
static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
static void qemu_gluster_aio_cancel(BlockDriverAIOCB *blockacb)
{
GlusterAIOCB *acb = (GlusterAIOCB *)blockacb;
bool finished = false;
acb->finished = &finished;
while (!finished) {
qemu_aio_wait();
}
}
static const AIOCBInfo gluster_aiocb_info = {
.aiocb_size = sizeof(GlusterAIOCB),
.cancel = qemu_gluster_aio_cancel,
};
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
BlockDriverState *bs = acb->common.bs;
BDRVGlusterState *s = bs->opaque;
int retval;
acb->ret = ret;
retval = qemu_write_full(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
if (retval != sizeof(acb)) {
/*
* Gluster AIO callback thread failed to notify the waiting
* QEMU thread about IO completion.
*/
error_report("Gluster AIO completion failed: %s", strerror(errno));
abort();
}
}
static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int write)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
size_t size;
off_t offset;
offset = sector_num * BDRV_SECTOR_SIZE;
size = nb_sectors * BDRV_SECTOR_SIZE;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
if (write) {
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
@@ -554,16 +460,13 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
}
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
qemu_aio_release(acb);
return NULL;
}
static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
@@ -579,68 +482,71 @@ static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
return 0;
}
static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
static BlockDriverAIOCB *qemu_gluster_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
return qemu_gluster_aio_rw(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
}
static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
static BlockDriverAIOCB *qemu_gluster_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
return qemu_gluster_aio_rw(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
}
static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
qemu_aio_release(acb);
return NULL;
}
#ifdef CONFIG_GLUSTERFS_DISCARD
static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
static BlockDriverAIOCB *qemu_gluster_aio_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BlockDriverCompletionFunc *cb,
void *opaque)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
size_t size;
off_t offset;
offset = sector_num * BDRV_SECTOR_SIZE;
size = nb_sectors * BDRV_SECTOR_SIZE;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
qemu_aio_release(acb);
return NULL;
}
#endif
@@ -675,6 +581,10 @@ static void qemu_gluster_close(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
close(s->fds[GLUSTER_FD_READ]);
close(s->fds[GLUSTER_FD_WRITE]);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
if (s->fd) {
glfs_close(s->fd);
s->fd = NULL;
@@ -694,11 +604,6 @@ static QEMUOptionParameter qemu_gluster_create_options[] = {
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_PREALLOC,
.type = OPT_STRING,
.help = "Preallocation mode (allowed values: off, full)"
},
{ NULL }
};
@@ -708,23 +613,17 @@ static BlockDriver bdrv_gluster = {
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
.bdrv_aio_discard = qemu_gluster_aio_discard,
#endif
.create_options = qemu_gluster_create_options,
};
@@ -735,23 +634,17 @@ static BlockDriver bdrv_gluster_tcp = {
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
.bdrv_aio_discard = qemu_gluster_aio_discard,
#endif
.create_options = qemu_gluster_create_options,
};
@@ -762,23 +655,17 @@ static BlockDriver bdrv_gluster_unix = {
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
.bdrv_aio_discard = qemu_gluster_aio_discard,
#endif
.create_options = qemu_gluster_create_options,
};
@@ -789,23 +676,17 @@ static BlockDriver bdrv_gluster_rdma = {
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
.bdrv_aio_discard = qemu_gluster_aio_discard,
#endif
.create_options = qemu_gluster_create_options,
};

File diff suppressed because it is too large


@@ -31,8 +31,7 @@ typedef struct MirrorBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *target;
BlockDriverState *base;
bool is_none_mode;
MirrorSyncMode mode;
BlockdevOnError on_source_error, on_target_error;
bool synced;
bool should_complete;
@@ -40,7 +39,6 @@ typedef struct MirrorBlockJob {
int64_t granularity;
size_t buf_size;
unsigned long *cow_bitmap;
BdrvDirtyBitmap *dirty_bitmap;
HBitmapIter hbi;
uint8_t *buf;
QSIMPLEQ_HEAD(, MirrorBuffer) buf_free;
@@ -96,7 +94,6 @@ static void mirror_iteration_done(MirrorOp *op, int ret)
bitmap_set(s->cow_bitmap, chunk_num, nb_chunks);
}
qemu_iovec_destroy(&op->qiov);
g_slice_free(MirrorOp, op);
qemu_coroutine_enter(s->common.co, NULL);
}
@@ -148,10 +145,9 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
s->sector_num = hbitmap_iter_next(&s->hbi);
if (s->sector_num < 0) {
bdrv_dirty_iter_init(source, s->dirty_bitmap, &s->hbi);
bdrv_dirty_iter_init(source, &s->hbi);
s->sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s,
bdrv_get_dirty_count(source, s->dirty_bitmap));
trace_mirror_restart_iter(s, bdrv_get_dirty_count(source));
assert(s->sector_num >= 0);
}
@@ -187,7 +183,7 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
do {
int added_sectors, added_chunks;
if (!bdrv_get_dirty(source, s->dirty_bitmap, next_sector) ||
if (!bdrv_get_dirty(source, next_sector) ||
test_bit(next_chunk, s->in_flight_bitmap)) {
assert(nb_sectors > 0);
break;
@@ -253,8 +249,7 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
/* Advance the HBitmapIter in parallel, so that we do not examine
* the same sector twice.
*/
if (next_sector > hbitmap_next_sector
&& bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
if (next_sector > hbitmap_next_sector && bdrv_get_dirty(source, next_sector)) {
hbitmap_next_sector = hbitmap_iter_next(&s->hbi);
}
@@ -337,9 +332,10 @@ static void coroutine_fn mirror_run(void *opaque)
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
mirror_free_init(s);
if (!s->is_none_mode) {
if (s->mode != MIRROR_SYNC_MODE_NONE) {
/* First part, loop on the sectors and initialize the dirty bitmap. */
BlockDriverState *base = s->base;
BlockDriverState *base;
base = s->mode == MIRROR_SYNC_MODE_FULL ? NULL : bs->backing_hd;
for (sector_num = 0; sector_num < end; ) {
int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
ret = bdrv_is_allocated_above(bs, base,
@@ -359,7 +355,7 @@ static void coroutine_fn mirror_run(void *opaque)
}
}
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
bdrv_dirty_iter_init(bs, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
for (;;) {
uint64_t delay_ns;
@@ -371,7 +367,7 @@ static void coroutine_fn mirror_run(void *opaque)
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that qemu_aio_flush() returns.
@@ -413,7 +409,7 @@ static void coroutine_fn mirror_run(void *opaque)
should_complete = s->should_complete ||
block_job_is_cancelled(&s->common);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
}
}
@@ -428,7 +424,7 @@ static void coroutine_fn mirror_run(void *opaque)
*/
trace_mirror_before_drain(s, cnt);
bdrv_drain_all();
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
}
ret = 0;
@@ -475,21 +471,15 @@ immediate_exit:
qemu_vfree(s->buf);
g_free(s->cow_bitmap);
g_free(s->in_flight_bitmap);
bdrv_release_dirty_bitmap(bs, s->dirty_bitmap);
bdrv_set_dirty_tracking(bs, 0);
bdrv_iostatus_disable(s->target);
if (s->should_complete && ret == 0) {
if (bdrv_get_flags(s->target) != bdrv_get_flags(s->common.bs)) {
bdrv_reopen(s->target, bdrv_get_flags(s->common.bs), NULL);
}
bdrv_swap(s->target, s->common.bs);
if (s->common.driver->job_type == BLOCK_JOB_TYPE_COMMIT) {
/* drop the bs loop chain formed by the swap: break the loop then
* trigger the unref from the top one */
BlockDriverState *p = s->base->backing_hd;
s->base->backing_hd = NULL;
bdrv_unref(p);
}
}
bdrv_close(s->target);
bdrv_unref(s->target);
block_job_completed(&s->common, ret);
}
@@ -520,6 +510,9 @@ static void mirror_complete(BlockJob *job, Error **errp)
ret = bdrv_open_backing_file(s->target, NULL, &local_err);
if (ret < 0) {
char backing_filename[PATH_MAX];
bdrv_get_full_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
error_propagate(errp, local_err);
return;
}
@@ -540,24 +533,12 @@ static const BlockJobDriver mirror_job_driver = {
.complete = mirror_complete,
};
static const BlockJobDriver commit_active_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.set_speed = mirror_set_speed,
.iostatus_reset = mirror_iostatus_reset,
.complete = mirror_complete,
};
static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity,
int64_t buf_size,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp,
const BlockJobDriver *driver,
bool is_none_mode, BlockDriverState *base)
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
MirrorBlockJob *s;
@@ -582,8 +563,7 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
return;
}
s = block_job_create(driver, bs, speed, cb, opaque, errp);
s = block_job_create(&mirror_job_driver, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
@@ -591,12 +571,11 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
s->on_source_error = on_source_error;
s->on_target_error = on_target_error;
s->target = target;
s->is_none_mode = is_none_mode;
s->base = base;
s->mode = mode;
s->granularity = granularity;
s->buf_size = MAX(buf_size, granularity);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity);
bdrv_set_dirty_tracking(bs, granularity);
bdrv_set_enable_write_cache(s->target, true);
bdrv_set_on_error(s->target, on_target_error, on_target_error);
bdrv_iostatus_enable(s->target);
@@ -604,80 +583,3 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
trace_mirror_start(bs, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
bool is_none_mode;
BlockDriverState *base;
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? bs->backing_hd : NULL;
mirror_start_job(bs, target, speed, granularity, buf_size,
on_source_error, on_target_error, cb, opaque, errp,
&mirror_job_driver, is_none_mode, base);
}
void commit_active_start(BlockDriverState *bs, BlockDriverState *base,
int64_t speed,
BlockdevOnError on_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
int64_t length, base_length;
int orig_base_flags;
int ret;
Error *local_err = NULL;
orig_base_flags = bdrv_get_flags(base);
if (bdrv_reopen(base, bs->open_flags, errp)) {
return;
}
length = bdrv_getlength(bs);
if (length < 0) {
error_setg_errno(errp, -length,
"Unable to determine length of %s", bs->filename);
goto error_restore_flags;
}
base_length = bdrv_getlength(base);
if (base_length < 0) {
error_setg_errno(errp, -base_length,
"Unable to determine length of %s", base->filename);
goto error_restore_flags;
}
if (length > base_length) {
ret = bdrv_truncate(base, length);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Top image %s is larger than base image %s, and "
"resize of base image failed",
bs->filename, base->filename);
goto error_restore_flags;
}
}
bdrv_ref(base);
mirror_start_job(bs, base, speed, 0, 0,
on_error, on_error, cb, opaque, &local_err,
&commit_active_job_driver, false, base);
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto error_restore_flags;
}
return;
error_restore_flags:
/* ignore error and errp for bdrv_reopen, because we want to propagate
* the original error */
bdrv_reopen(base, orig_base_flags, NULL);
return;
}


@@ -1,388 +0,0 @@
/*
* QEMU Block driver for NBD
*
* Copyright (C) 2008 Bull S.A.S.
* Author: Laurent Vivier <Laurent.Vivier@bull.net>
*
* Some parts:
* Copyright (C) 2007 Anthony Liguori <anthony@codemonkey.ws>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "nbd-client.h"
#include "qemu/sockets.h"
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
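Because XOR is self-inverse, the two macros above round-trip exactly: converting an index to a handle and back recovers the index, while handles minted against different BlockDriverState pointers stay distinct. A tiny illustrative check, not part of this file:
/* Illustrative only: (i ^ k) ^ k == i for k = (uint64_t)(intptr_t)bs. */
static inline void example_handle_roundtrip_check(BlockDriverState *bs)
{
    uint64_t i = 7;
    assert(HANDLE_TO_INDEX(bs, INDEX_TO_HANDLE(bs, i)) == i);
}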
static void nbd_recv_coroutines_enter_all(NbdClientSession *s)
{
int i;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_teardown_connection(NbdClientSession *client)
{
/* finish any pending coroutines */
shutdown(client->sock, 2);
nbd_recv_coroutines_enter_all(client);
qemu_aio_set_fd_handler(client->sock, NULL, NULL, NULL);
closesocket(client->sock);
client->sock = -1;
}
static void nbd_reply_ready(void *opaque)
{
NbdClientSession *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
nbd_teardown_connection(s);
}
static void nbd_restart_write(void *opaque)
{
NbdClientSession *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(NbdClientSession *s,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->sock, request);
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(NbdClientSession *s,
struct nbd_request *request, struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_start(NbdClientSession *s,
struct nbd_request *request)
{
int i;
/* Poor man semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_coroutine_end(NbdClientSession *s,
struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_co_readv_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_READ };
struct nbd_reply reply;
ssize_t ret;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, qiov, offset);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
static int nbd_co_writev_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_WRITE };
struct nbd_reply reply;
ssize_t ret;
if (!bdrv_enable_write_cache(client->bs) &&
(client->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
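The constant works out as follows: 2040 sectors of 512 bytes is 1,044,480 bytes, which stays below the roughly 1 MiB qemu-nbd per-request limit and is a multiple of 4096, so split requests remain 4 KiB aligned. An illustrative compile-time check of that arithmetic (C11 _Static_assert, not part of this file):
/* Illustrative sanity check of the comment above. */
_Static_assert(NBD_MAX_SECTORS * 512 == 1044480, "2040 sectors = 1,044,480 bytes");
_Static_assert(NBD_MAX_SECTORS * 512 < 1024 * 1024, "below the ~1 MiB qemu-nbd limit");
_Static_assert((NBD_MAX_SECTORS * 512) % 4096 == 0, "4 KiB aligned");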
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_flush(NbdClientSession *client)
{
struct nbd_request request = { .type = NBD_CMD_FLUSH };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors)
{
struct nbd_request request = { .type = NBD_CMD_TRIM };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
void nbd_client_session_close(NbdClientSession *client)
{
struct nbd_request request = {
.type = NBD_CMD_DISC,
.from = 0,
.len = 0
};
if (!client->bs) {
return;
}
if (client->sock == -1) {
return;
}
nbd_send_request(client->sock, &request);
nbd_teardown_connection(client);
client->bs = NULL;
}
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export)
{
int ret;
/* NBD handshake */
logout("session init %s\n", export);
qemu_set_block(sock);
ret = nbd_receive_negotiate(sock, export,
&client->nbdflags, &client->size,
&client->blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
qemu_co_mutex_init(&client->send_mutex);
qemu_co_mutex_init(&client->free_sema);
client->bs = bs;
client->sock = sock;
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qemu_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, client);
logout("Established connection with NBD server\n");
return 0;
}


@@ -1,50 +0,0 @@
#ifndef NBD_CLIENT_H
#define NBD_CLIENT_H
#include "qemu-common.h"
#include "block/nbd.h"
#include "block/block_int.h"
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
typedef struct NbdClientSession {
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
bool is_unix;
BlockDriverState *bs;
} NbdClientSession;
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export_name);
void nbd_client_session_close(NbdClientSession *client);
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors);
int nbd_client_session_co_flush(NbdClientSession *client);
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
#endif /* NBD_CLIENT_H */


@@ -26,7 +26,8 @@
* THE SOFTWARE.
*/
#include "block/nbd-client.h"
#include "qemu-common.h"
#include "block/nbd.h"
#include "qemu/uri.h"
#include "block/block_int.h"
#include "qemu/module.h"
@@ -39,9 +40,37 @@
#define EN_OPTSTR ":exportname="
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
typedef struct BDRVNBDState {
NbdClientSession client;
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
bool is_unix;
QemuOpts *socket_opts;
char *export_name; /* An NBD server may export several devices */
} BDRVNBDState;
static int nbd_parse_uri(const char *filename, QDict *options)
@@ -188,49 +217,195 @@ out:
g_free(file);
}
static void nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
static int nbd_config(BDRVNBDState *s, QDict *options)
{
Error *local_err = NULL;
if (qdict_haskey(options, "path") == qdict_haskey(options, "host")) {
if (qdict_haskey(options, "path")) {
error_setg(errp, "path and host may not be used at the same time.");
} else {
error_setg(errp, "one of path and host must be specified.");
if (qdict_haskey(options, "path")) {
if (qdict_haskey(options, "host")) {
qerror_report(ERROR_CLASS_GENERIC_ERROR, "path and host may not "
"be used at the same time.");
return -EINVAL;
}
return;
s->is_unix = true;
} else if (qdict_haskey(options, "host")) {
s->is_unix = false;
} else {
return -EINVAL;
}
s->client.is_unix = qdict_haskey(options, "path");
s->socket_opts = qemu_opts_create(&socket_optslist, NULL, 0,
&error_abort);
s->socket_opts = qemu_opts_create_nofail(&socket_optslist);
qemu_opts_absorb_qdict(s->socket_opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
return -EINVAL;
}
if (!qemu_opt_get(s->socket_opts, "port")) {
qemu_opt_set_number(s->socket_opts, "port", NBD_DEFAULT_PORT);
}
*export = g_strdup(qdict_get_try_str(options, "export"));
if (*export) {
s->export_name = g_strdup(qdict_get_try_str(options, "export"));
if (s->export_name) {
qdict_del(options, "export");
}
return 0;
}
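The path/host/export values consumed here normally come from nbd_parse_uri() above, which reduces an NBD URL to these options. Illustrative invocations, assuming the standard QEMU NBD URL syntax (host names, port and socket paths are placeholders):
qemu-system-x86_64 -drive file=nbd://nbd-server.example:10809/myexport
qemu-system-x86_64 -drive "file=nbd+unix:///myexport?socket=/tmp/nbd.sock"
The first form supplies host and port for a TCP connection, the second supplies a Unix socket path; both leave the export name in the "export" option handled at the end of nbd_config().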
static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
{
int i;
/* Poor man semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_reply_ready(void *opaque)
{
BDRVNBDState *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
static void nbd_restart_write(void *opaque)
{
BDRVNBDState *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->sock, request);
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_end(BDRVNBDState *s, struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_establish_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
int sock;
int ret;
off_t size;
size_t blocksize;
if (s->client.is_unix) {
sock = unix_connect_opts(s->socket_opts, errp, NULL, NULL);
if (s->is_unix) {
sock = unix_socket_outgoing(qemu_opt_get(s->socket_opts, "path"));
} else {
sock = inet_connect_opts(s->socket_opts, errp, NULL, NULL);
sock = tcp_socket_outgoing_opts(s->socket_opts);
if (sock >= 0) {
socket_set_nodelay(sock);
}
@@ -242,85 +417,226 @@ static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
return -errno;
}
return sock;
/* NBD handshake */
ret = nbd_receive_negotiate(sock, s->export_name, &s->nbdflags, &size,
&blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qemu_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, s);
s->sock = sock;
s->size = size;
s->blocksize = blocksize;
logout("Established connection with NBD server\n");
return 0;
}
static void nbd_teardown_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
request.type = NBD_CMD_DISC;
request.from = 0;
request.len = 0;
nbd_send_request(s->sock, &request);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
closesocket(s->sock);
}
static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVNBDState *s = bs->opaque;
char *export = NULL;
int result, sock;
Error *local_err = NULL;
int result;
qemu_co_mutex_init(&s->send_mutex);
qemu_co_mutex_init(&s->free_sema);
/* Pop the config into our state object. Exit if invalid. */
nbd_config(s, options, &export, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
result = nbd_config(s, options);
if (result != 0) {
return result;
}
/* establish TCP connection, return error if it fails
* TODO: Configurable retry-until-timeout behaviour.
*/
sock = nbd_establish_connection(bs, errp);
if (sock < 0) {
return sock;
}
result = nbd_establish_connection(bs);
/* NBD handshake */
result = nbd_client_session_init(&s->client, bs, sock, export);
g_free(export);
return result;
}
static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_READ;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, qiov, offset);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_WRITE;
if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_readv(&s->client, sector_num,
nb_sectors, qiov);
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_writev(&s->client, sector_num,
nb_sectors, qiov);
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_flush(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
return nbd_client_session_co_flush(&s->client);
if (!(s->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
request.type = NBD_CMD_FLUSH;
if (s->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
return nbd_client_session_co_discard(&s->client, sector_num,
nb_sectors);
if (!(s->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.type = NBD_CMD_TRIM;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
g_free(s->export_name);
qemu_opts_del(s->socket_opts);
nbd_client_session_close(&s->client);
nbd_teardown_connection(bs);
}
static int64_t nbd_getlength(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
return s->client.size;
return s->size;
}
static BlockDriver bdrv_nbd = {


@@ -1,439 +0,0 @@
/*
* QEMU Block driver for native access to files on NFS shares
*
* Copyright (c) 2014 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "config-host.h"
#include <poll.h>
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "trace.h"
#include "qemu/iov.h"
#include "qemu/uri.h"
#include "sysemu/sysemu.h"
#include <nfsc/libnfs.h>
typedef struct NFSClient {
struct nfs_context *context;
struct nfsfh *fh;
int events;
bool has_zero_init;
} NFSClient;
typedef struct NFSRPC {
int ret;
int complete;
QEMUIOVector *iov;
struct stat *st;
Coroutine *co;
QEMUBH *bh;
} NFSRPC;
static void nfs_process_read(void *arg);
static void nfs_process_write(void *arg);
static void nfs_set_events(NFSClient *client)
{
int ev = nfs_which_events(client->context);
if (ev != client->events) {
qemu_aio_set_fd_handler(nfs_get_fd(client->context),
(ev & POLLIN) ? nfs_process_read : NULL,
(ev & POLLOUT) ? nfs_process_write : NULL,
client);
}
client->events = ev;
}
static void nfs_process_read(void *arg)
{
NFSClient *client = arg;
nfs_service(client->context, POLLIN);
nfs_set_events(client);
}
static void nfs_process_write(void *arg)
{
NFSClient *client = arg;
nfs_service(client->context, POLLOUT);
nfs_set_events(client);
}
static void nfs_co_init_task(NFSClient *client, NFSRPC *task)
{
*task = (NFSRPC) {
.co = qemu_coroutine_self(),
};
}
static void nfs_co_generic_bh_cb(void *opaque)
{
NFSRPC *task = opaque;
qemu_bh_delete(task->bh);
qemu_coroutine_enter(task->co, NULL);
}
static void
nfs_co_generic_cb(int ret, struct nfs_context *nfs, void *data,
void *private_data)
{
NFSRPC *task = private_data;
task->complete = 1;
task->ret = ret;
if (task->ret > 0 && task->iov) {
if (task->ret <= task->iov->size) {
qemu_iovec_from_buf(task->iov, 0, data, task->ret);
} else {
task->ret = -EIO;
}
}
if (task->ret == 0 && task->st) {
memcpy(task->st, data, sizeof(struct stat));
}
if (task->co) {
task->bh = qemu_bh_new(nfs_co_generic_bh_cb, task);
qemu_bh_schedule(task->bh);
}
}
static int coroutine_fn nfs_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
NFSClient *client = bs->opaque;
NFSRPC task;
nfs_co_init_task(client, &task);
task.iov = iov;
if (nfs_pread_async(client->context, client->fh,
sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE,
nfs_co_generic_cb, &task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
if (task.ret < 0) {
return task.ret;
}
/* zero pad short reads */
if (task.ret < iov->size) {
qemu_iovec_memset(iov, task.ret, 0, iov->size - task.ret);
}
return 0;
}
static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
NFSClient *client = bs->opaque;
NFSRPC task;
char *buf = NULL;
nfs_co_init_task(client, &task);
buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
if (nfs_pwrite_async(client->context, client->fh,
sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE,
buf, nfs_co_generic_cb, &task) != 0) {
g_free(buf);
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
g_free(buf);
if (task.ret != nb_sectors * BDRV_SECTOR_SIZE) {
return task.ret < 0 ? task.ret : -EIO;
}
return 0;
}
static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
NFSRPC task;
nfs_co_init_task(client, &task);
if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
return task.ret;
}
/* TODO Convert to fine grained options */
static QemuOptsList runtime_opts = {
.name = "nfs",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "URL to the NFS file",
},
{ /* end of list */ }
},
};
static void nfs_client_close(NFSClient *client)
{
if (client->context) {
if (client->fh) {
nfs_close(client->context, client->fh);
}
qemu_aio_set_fd_handler(nfs_get_fd(client->context), NULL, NULL, NULL);
nfs_destroy_context(client->context);
}
memset(client, 0, sizeof(NFSClient));
}
static void nfs_file_close(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
nfs_client_close(client);
}
static int64_t nfs_client_open(NFSClient *client, const char *filename,
int flags, Error **errp)
{
int ret = -EINVAL, i;
struct stat st;
URI *uri;
QueryParams *qp = NULL;
char *file = NULL, *strp = NULL;
uri = uri_parse(filename);
if (!uri) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
strp = strrchr(uri->path, '/');
if (strp == NULL) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
file = g_strdup(strp);
*strp = 0;
client->context = nfs_init_context();
if (client->context == NULL) {
error_setg(errp, "Failed to init NFS context");
goto fail;
}
qp = query_params_parse(uri->query);
for (i = 0; i < qp->n; i++) {
if (!qp->p[i].value) {
error_setg(errp, "Value for NFS parameter expected: %s",
qp->p[i].name);
goto fail;
}
if (!strncmp(qp->p[i].name, "uid", 3)) {
nfs_set_uid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "gid", 3)) {
nfs_set_gid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "tcp-syncnt", 10)) {
nfs_set_tcp_syncnt(client->context, atoi(qp->p[i].value));
} else {
error_setg(errp, "Unknown NFS parameter name: %s",
qp->p[i].name);
goto fail;
}
}
ret = nfs_mount(client->context, uri->server, uri->path);
if (ret < 0) {
error_setg(errp, "Failed to mount nfs share: %s",
nfs_get_error(client->context));
goto fail;
}
if (flags & O_CREAT) {
ret = nfs_creat(client->context, file, 0600, &client->fh);
if (ret < 0) {
error_setg(errp, "Failed to create file: %s",
nfs_get_error(client->context));
goto fail;
}
} else {
ret = nfs_open(client->context, file, flags, &client->fh);
if (ret < 0) {
error_setg(errp, "Failed to open file : %s",
nfs_get_error(client->context));
goto fail;
}
}
ret = nfs_fstat(client->context, client->fh, &st);
if (ret < 0) {
error_setg(errp, "Failed to fstat file: %s",
nfs_get_error(client->context));
goto fail;
}
ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE);
client->has_zero_init = S_ISREG(st.st_mode);
goto out;
fail:
nfs_client_close(client);
out:
if (qp) {
query_params_free(qp);
}
uri_free(uri);
g_free(file);
return ret;
}
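Putting the pieces together, the filename parsed here is an NFS URL of the form nfs://<server>/<export>/<path>, optionally carrying the uid, gid and tcp-syncnt query parameters handled in the loop above. Illustrative usage (server name and paths are placeholders):
qemu-img create -f qcow2 nfs://nfs-server.example/export/images/disk.qcow2 10G
qemu-system-x86_64 -drive "file=nfs://nfs-server.example/export/images/disk.qcow2?uid=1000&gid=1000"
The first command exercises the O_CREAT path via nfs_file_create(); the second opens the image read/write through nfs_file_open().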
static int nfs_file_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp) {
NFSClient *client = bs->opaque;
int64_t ret;
QemuOpts *opts;
Error *local_err = NULL;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return -EINVAL;
}
ret = nfs_client_open(client, qemu_opt_get(opts, "filename"),
(flags & BDRV_O_RDWR) ? O_RDWR : O_RDONLY,
errp);
if (ret < 0) {
return ret;
}
bs->total_sectors = ret;
return 0;
}
static int nfs_file_create(const char *url, QEMUOptionParameter *options,
Error **errp)
{
int ret = 0;
int64_t total_size = 0;
NFSClient *client = g_malloc0(sizeof(NFSClient));
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, "size")) {
total_size = options->value.n;
}
options++;
}
ret = nfs_client_open(client, url, O_CREAT, errp);
if (ret < 0) {
goto out;
}
ret = nfs_ftruncate(client->context, client->fh, total_size);
nfs_client_close(client);
out:
g_free(client);
return ret;
}
static int nfs_has_zero_init(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
return client->has_zero_init;
}
static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
NFSRPC task = {0};
struct stat st;
task.st = &st;
if (nfs_fstat_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_aio_wait();
}
return (task.ret < 0 ? task.ret : st.st_blocks * st.st_blksize);
}
static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
{
NFSClient *client = bs->opaque;
return nfs_ftruncate(client->context, client->fh, offset);
}
static BlockDriver bdrv_nfs = {
.format_name = "nfs",
.protocol_name = "nfs",
.instance_size = sizeof(NFSClient),
.bdrv_needs_filename = true,
.bdrv_has_zero_init = nfs_has_zero_init,
.bdrv_get_allocated_file_size = nfs_get_allocated_file_size,
.bdrv_truncate = nfs_file_truncate,
.bdrv_file_open = nfs_file_open,
.bdrv_close = nfs_file_close,
.bdrv_create = nfs_file_create,
.bdrv_co_readv = nfs_co_readv,
.bdrv_co_writev = nfs_co_writev,
.bdrv_co_flush_to_disk = nfs_co_flush,
};
static void nfs_block_init(void)
{
bdrv_register(&bdrv_nfs);
}
block_init(nfs_block_init);


@@ -85,8 +85,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
if (memcmp(ph.magic, HEADER_MAGIC, 16) ||
(le32_to_cpu(ph.version) != HEADER_VERSION)) {
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}


@@ -29,60 +29,6 @@
#include "qapi/qmp-output-visitor.h"
#include "qapi/qmp/types.h"
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs)
{
BlockDeviceInfo *info = g_malloc0(sizeof(*info));
info->file = g_strdup(bs->filename);
info->ro = bs->read_only;
info->drv = g_strdup(bs->drv->format_name);
info->encrypted = bs->encrypted;
info->encryption_key_missing = bdrv_key_required(bs);
if (bs->node_name[0]) {
info->has_node_name = true;
info->node_name = g_strdup(bs->node_name);
}
if (bs->backing_file[0]) {
info->has_backing_file = true;
info->backing_file = g_strdup(bs->backing_file);
}
info->backing_file_depth = bdrv_get_backing_file_depth(bs);
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_get_config(&bs->throttle_state, &cfg);
info->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
info->iops = cfg.buckets[THROTTLE_OPS_TOTAL].avg;
info->iops_rd = cfg.buckets[THROTTLE_OPS_READ].avg;
info->iops_wr = cfg.buckets[THROTTLE_OPS_WRITE].avg;
info->has_bps_max = cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->bps_max = cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->has_bps_rd_max = cfg.buckets[THROTTLE_BPS_READ].max;
info->bps_rd_max = cfg.buckets[THROTTLE_BPS_READ].max;
info->has_bps_wr_max = cfg.buckets[THROTTLE_BPS_WRITE].max;
info->bps_wr_max = cfg.buckets[THROTTLE_BPS_WRITE].max;
info->has_iops_max = cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->iops_max = cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->has_iops_rd_max = cfg.buckets[THROTTLE_OPS_READ].max;
info->iops_rd_max = cfg.buckets[THROTTLE_OPS_READ].max;
info->has_iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->has_iops_size = cfg.op_size;
info->iops_size = cfg.op_size;
}
return info;
}
/*
* Returns 0 on success, with *p_list either set to describe snapshot
* information, or NULL because there are no snapshots. Returns -errno on
@@ -258,20 +204,76 @@ void bdrv_query_info(BlockDriverState *bs,
info->io_status = bs->iostatus;
}
if (!QLIST_EMPTY(&bs->dirty_bitmaps)) {
info->has_dirty_bitmaps = true;
info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
if (bs->dirty_bitmap) {
info->has_dirty = true;
info->dirty = g_malloc0(sizeof(*info->dirty));
info->dirty->count = bdrv_get_dirty_count(bs) * BDRV_SECTOR_SIZE;
info->dirty->granularity =
((int64_t) BDRV_SECTOR_SIZE << hbitmap_granularity(bs->dirty_bitmap));
}
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(bs);
info->inserted = g_malloc0(sizeof(*info->inserted));
info->inserted->file = g_strdup(bs->filename);
info->inserted->ro = bs->read_only;
info->inserted->drv = g_strdup(bs->drv->format_name);
info->inserted->encrypted = bs->encrypted;
info->inserted->encryption_key_missing = bdrv_key_required(bs);
if (bs->backing_file[0]) {
info->inserted->has_backing_file = true;
info->inserted->backing_file = g_strdup(bs->backing_file);
}
info->inserted->backing_file_depth = bdrv_get_backing_file_depth(bs);
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_get_config(&bs->throttle_state, &cfg);
info->inserted->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->inserted->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->inserted->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
info->inserted->iops = cfg.buckets[THROTTLE_OPS_TOTAL].avg;
info->inserted->iops_rd = cfg.buckets[THROTTLE_OPS_READ].avg;
info->inserted->iops_wr = cfg.buckets[THROTTLE_OPS_WRITE].avg;
info->inserted->has_bps_max =
cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->inserted->bps_max =
cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->inserted->has_bps_rd_max =
cfg.buckets[THROTTLE_BPS_READ].max;
info->inserted->bps_rd_max =
cfg.buckets[THROTTLE_BPS_READ].max;
info->inserted->has_bps_wr_max =
cfg.buckets[THROTTLE_BPS_WRITE].max;
info->inserted->bps_wr_max =
cfg.buckets[THROTTLE_BPS_WRITE].max;
info->inserted->has_iops_max =
cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->inserted->iops_max =
cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->inserted->has_iops_rd_max =
cfg.buckets[THROTTLE_OPS_READ].max;
info->inserted->iops_rd_max =
cfg.buckets[THROTTLE_OPS_READ].max;
info->inserted->has_iops_wr_max =
cfg.buckets[THROTTLE_OPS_WRITE].max;
info->inserted->iops_wr_max =
cfg.buckets[THROTTLE_OPS_WRITE].max;
info->inserted->has_iops_size = cfg.op_size;
info->inserted->iops_size = cfg.op_size;
}
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto err;
}
@@ -319,11 +321,6 @@ BlockStats *bdrv_query_stats(const BlockDriverState *bs)
s->parent = bdrv_query_stats(bs->file);
}
if (bs->backing_hd) {
s->has_backing = true;
s->backing = bdrv_query_stats(bs->backing_hd);
}
return s;
}
@@ -336,7 +333,7 @@ BlockInfoList *qmp_query_block(Error **errp)
while ((bs = bdrv_next(bs))) {
BlockInfoList *info = g_malloc0(sizeof(*info));
bdrv_query_info(bs, &info->value, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto err;
}


@@ -113,26 +113,23 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
be64_to_cpus(&header.l1_table_offset);
if (header.magic != QCOW_MAGIC) {
error_setg(errp, "Image not in qcow format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
if (header.version != QCOW_VERSION) {
char version[64];
snprintf(version, sizeof(version), "QCOW version %d", header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
ret = -ENOTSUP;
goto fail;
}
if (header.size <= 1 || header.cluster_bits < 9) {
error_setg(errp, "invalid value in qcow header");
ret = -EINVAL;
goto fail;
}
if (header.crypt_method > QCOW_CRYPT_AES) {
error_setg(errp, "invalid encryption method in qcow header");
ret = -EINVAL;
goto fail;
}
@@ -689,15 +686,15 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
ret = bdrv_create_file(filename, options, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qerror_report_err(local_err);
error_free(local_err);
return ret;
}
qcow_bs = NULL;
ret = bdrv_open(&qcow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
ret = bdrv_file_open(&qcow_bs, filename, NULL, BDRV_O_RDWR, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qerror_report_err(local_err);
error_free(local_err);
return ret;
}


@@ -380,10 +380,6 @@ static int coroutine_fn copy_sectors(BlockDriverState *bs,
BLKDBG_EVENT(bs->file, BLKDBG_COW_READ);
if (!bs->drv) {
return -ENOMEDIUM;
}
/* Call .bdrv_co_readv() directly instead of using the public block-layer
* interface. This avoids double I/O throttling and request tracking,
* which can lead to deadlock when block layer copy-on-read is enabled.
@@ -1186,7 +1182,7 @@ fail:
* Return 0 on success and -errno in error cases
*/
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *host_offset, QCowL2Meta **m)
int n_start, int n_end, int *num, uint64_t *host_offset, QCowL2Meta **m)
{
BDRVQcowState *s = bs->opaque;
uint64_t start, remaining;
@@ -1194,13 +1190,15 @@ int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
uint64_t cur_bytes;
int ret;
trace_qcow2_alloc_clusters_offset(qemu_coroutine_self(), offset, *num);
trace_qcow2_alloc_clusters_offset(qemu_coroutine_self(), offset,
n_start, n_end);
assert((offset & ~BDRV_SECTOR_MASK) == 0);
assert(n_start * BDRV_SECTOR_SIZE == offset_into_cluster(s, offset));
offset = start_of_cluster(s, offset);
again:
start = offset;
remaining = *num << BDRV_SECTOR_BITS;
start = offset + (n_start << BDRV_SECTOR_BITS);
remaining = (n_end - n_start) << BDRV_SECTOR_BITS;
cluster_offset = 0;
*host_offset = 0;
cur_bytes = 0;
@@ -1286,7 +1284,7 @@ again:
}
}
*num -= remaining >> BDRV_SECTOR_BITS;
*num = (n_end - n_start) - (remaining >> BDRV_SECTOR_BITS);
assert(*num > 0);
assert(*host_offset != 0);
@@ -1371,31 +1369,13 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
uint64_t old_offset;
old_offset = be64_to_cpu(l2_table[l2_index + i]);
/*
* Make sure that a discarded area reads back as zeroes for v3 images
* (we cannot do it for v2 without actually writing a zero-filled
* buffer). We can skip the operation if the cluster is already marked
* as zero, or if it's unallocated and we don't have a backing file.
*
* TODO We might want to use bdrv_get_block_status(bs) here, but we're
* holding s->lock, so that doesn't work today.
*/
if (old_offset & QCOW_OFLAG_ZERO) {
continue;
}
if ((old_offset & L2E_OFFSET_MASK) == 0 && !bs->backing_hd) {
if ((old_offset & L2E_OFFSET_MASK) == 0) {
continue;
}
/* First remove L2 entries */
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (s->qcow_version >= 3) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
} else {
l2_table[l2_index + i] = cpu_to_be64(0);
}
l2_table[l2_index + i] = cpu_to_be64(0);
/* Then decrease the refcount */
qcow2_free_any_clusters(bs, old_offset, 1, type);
@@ -1421,7 +1401,7 @@ int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
/* Round start up and end down */
offset = align_offset(offset, s->cluster_size);
end_offset = start_of_cluster(s, end_offset);
end_offset &= ~(s->cluster_size - 1);
if (offset > end_offset) {
return 0;
@@ -1633,7 +1613,7 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
}
ret = bdrv_write_zeroes(bs->file, offset / BDRV_SECTOR_SIZE,
s->cluster_sectors, 0);
s->cluster_sectors);
if (ret < 0) {
if (!preallocated) {
qcow2_free_clusters(bs, offset, s->cluster_size,

View File

@@ -96,8 +96,7 @@ static int get_refcount(BlockDriverState *bs, int64_t cluster_index)
refcount_table_index = cluster_index >> (s->cluster_bits - REFCOUNT_SHIFT);
if (refcount_table_index >= s->refcount_table_size)
return 0;
refcount_block_offset =
s->refcount_table[refcount_table_index] & REFT_OFFSET_MASK;
refcount_block_offset = s->refcount_table[refcount_table_index];
if (!refcount_block_offset)
return 0;
@@ -516,8 +515,8 @@ static int QEMU_WARN_UNUSED_RESULT update_refcount(BlockDriverState *bs,
s->l2_table_cache);
}
start = start_of_cluster(s, offset);
last = start_of_cluster(s, offset + length - 1);
start = offset & ~(s->cluster_size - 1);
last = (offset + length - 1) & ~(s->cluster_size - 1);
for(cluster_offset = start; cluster_offset <= last;
cluster_offset += s->cluster_size)
{
@@ -677,13 +676,7 @@ int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
BDRVQcowState *s = bs->opaque;
uint64_t cluster_index;
uint64_t old_free_cluster_index;
uint64_t i;
int refcount, ret;
assert(nb_clusters >= 0);
if (nb_clusters == 0) {
return 0;
}
int i, refcount, ret;
/* Check how many clusters there are free */
cluster_index = offset >> s->cluster_bits;
@@ -731,7 +724,7 @@ int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
}
redo:
free_in_cluster = s->cluster_size -
offset_into_cluster(s, s->free_byte_offset);
(s->free_byte_offset & (s->cluster_size - 1));
if (size <= free_in_cluster) {
/* enough space in current cluster */
offset = s->free_byte_offset;
@@ -739,7 +732,7 @@ int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
free_in_cluster -= size;
if (free_in_cluster == 0)
s->free_byte_offset = 0;
if (offset_into_cluster(s, offset) != 0)
if ((offset & (s->cluster_size - 1)) != 0)
qcow2_update_cluster_refcount(bs, offset >> s->cluster_bits, 1,
QCOW2_DISCARD_NEVER);
} else {
@@ -747,7 +740,7 @@ int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
if (offset < 0) {
return offset;
}
cluster_offset = start_of_cluster(s, s->free_byte_offset);
cluster_offset = s->free_byte_offset & ~(s->cluster_size - 1);
if ((cluster_offset + s->cluster_size) == offset) {
/* we are lucky: contiguous data */
offset = s->free_byte_offset;
@@ -1017,8 +1010,8 @@ static void inc_refcounts(BlockDriverState *bs,
if (size <= 0)
return;
start = start_of_cluster(s, offset);
last = start_of_cluster(s, offset + size - 1);
start = offset & ~(s->cluster_size - 1);
last = (offset + size - 1) & ~(s->cluster_size - 1);
for(cluster_offset = start; cluster_offset <= last;
cluster_offset += s->cluster_size) {
k = cluster_offset >> s->cluster_bits;
@@ -1129,7 +1122,7 @@ static int check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
offset, s->cluster_size);
/* Correct offsets are cluster aligned */
if (offset_into_cluster(s, offset)) {
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR offset=%" PRIx64 ": Cluster is not "
"properly aligned; L2 entry corrupted.\n", offset);
res->corruptions++;
@@ -1201,7 +1194,7 @@ static int check_refcounts_l1(BlockDriverState *bs,
l2_offset, s->cluster_size);
/* L2 tables are cluster aligned */
if (offset_into_cluster(s, l2_offset)) {
if (l2_offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR l2_offset=%" PRIx64 ": Table is not "
"cluster aligned; L1 entry corrupted\n", l2_offset);
res->corruptions++;
@@ -1430,7 +1423,7 @@ static int64_t realloc_refcount_block(BlockDriverState *bs, int reftable_index,
}
/* update refcount table */
assert(!offset_into_cluster(s, new_offset));
assert(!(new_offset & (s->cluster_size - 1)));
s->refcount_table[reftable_index] = new_offset;
ret = write_reftable_entry(bs, reftable_index);
if (ret < 0) {
@@ -1514,7 +1507,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
cluster = offset >> s->cluster_bits;
/* Refcount blocks are cluster aligned */
if (offset_into_cluster(s, offset)) {
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR refcount block %" PRId64 " is not "
"cluster aligned; refcount table entry corrupted\n", i);
res->corruptions++;
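
Several hunks in this file swap between the helper calls start_of_cluster(s, offset) / offset_into_cluster(s, offset) and the open-coded masks offset & ~(cluster_size - 1) / offset & (cluster_size - 1). A minimal standalone sketch (hypothetical offset and cluster size, not QEMU code) showing that the two spellings agree whenever cluster_size is a power of two:

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t cluster_size = 65536;                   /* 64 KiB, a power of two */
    uint64_t offset = 0x2345678;

    uint64_t start = offset & ~(cluster_size - 1);   /* start_of_cluster() form */
    uint64_t into  = offset & (cluster_size - 1);    /* offset_into_cluster() form */

    assert(start + into == offset);                  /* the two masks partition the offset */
    printf("start=0x%" PRIx64 " into=0x%" PRIx64 "\n", start, into);
    return 0;
}

Both spellings compute the same values; the helpers only give the arithmetic a name.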

View File

@@ -606,8 +606,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
s->nb_snapshots--;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Failed to remove snapshot from snapshot list");
error_setg(errp, "Failed to remove snapshot from snapshot list");
return ret;
}
@@ -625,7 +624,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
ret = qcow2_update_snapshot_refcount(bs, sn.l1_table_offset,
sn.l1_size, -1);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to free the cluster and L1 table");
error_setg(errp, "Failed to free the cluster and L1 table");
return ret;
}
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t),
@@ -634,8 +633,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Failed to update snapshot status in disk");
error_setg(errp, "Failed to update snapshot status in disk");
return ret;
}
@@ -677,10 +675,7 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
return s->nb_snapshots;
}
int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name)
{
int i, snapshot_index;
BDRVQcowState *s = bs->opaque;
@@ -692,10 +687,8 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
assert(bs->read_only);
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_and_name(bs, snapshot_id, name);
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_name);
if (snapshot_index < 0) {
error_setg(errp,
"Can't find snapshot");
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
@@ -706,7 +699,6 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
error_setg(errp, "Failed to read l1 table for snapshot");
g_free(new_l1_table);
return ret;
}

View File

@@ -449,7 +449,7 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
if (header.magic != QCOW_MAGIC) {
error_setg(errp, "Image is not in qcow2 format");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
if (header.version < 2 || header.version > 3) {
@@ -644,7 +644,7 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Clear unknown autoclear feature bits */
if (!bs->read_only && !(flags & BDRV_O_INCOMING) && s->autoclear_features) {
if (!bs->read_only && s->autoclear_features != 0) {
s->autoclear_features = 0;
ret = qcow2_update_header(bs);
if (ret < 0) {
@@ -657,7 +657,7 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
qemu_co_mutex_init(&s->lock);
/* Repair image if dirty */
if (!(flags & (BDRV_O_CHECK | BDRV_O_INCOMING)) && !bs->read_only &&
if (!(flags & BDRV_O_CHECK) && !bs->read_only &&
(s->incompatible_features & QCOW2_INCOMPAT_DIRTY)) {
BdrvCheckResult result = {0};
@@ -669,9 +669,9 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
}
/* Enable lazy_refcounts according to image and command line options */
opts = qemu_opts_create(&qcow2_runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&qcow2_runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
@@ -750,15 +750,6 @@ static int qcow2_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
static int qcow2_refresh_limits(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
bs->bl.write_zeroes_alignment = s->cluster_sectors;
return 0;
}
static int qcow2_set_key(BlockDriverState *bs, const char *key)
{
BDRVQcowState *s = bs->opaque;
@@ -1000,6 +991,7 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
int n_end;
int ret;
int cur_nr_sectors; /* number of sectors in current iteration */
uint64_t cluster_offset;
@@ -1023,16 +1015,14 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
trace_qcow2_writev_start_part(qemu_coroutine_self());
index_in_cluster = sector_num & (s->cluster_sectors - 1);
cur_nr_sectors = remaining_sectors;
n_end = index_in_cluster + remaining_sectors;
if (s->crypt_method &&
cur_nr_sectors >
QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors - index_in_cluster) {
cur_nr_sectors =
QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors - index_in_cluster;
n_end > QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors) {
n_end = QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors;
}
ret = qcow2_alloc_cluster_offset(bs, sector_num << 9,
&cur_nr_sectors, &cluster_offset, &l2meta);
index_in_cluster, n_end, &cur_nr_sectors, &cluster_offset, &l2meta);
if (ret < 0) {
goto fail;
}
@@ -1137,12 +1127,10 @@ static void qcow2_close(BlockDriverState *bs)
/* else pre-write overlap checks in cache_destroy may crash */
s->l1_table = NULL;
if (!(bs->open_flags & BDRV_O_INCOMING)) {
qcow2_cache_flush(bs, s->l2_table_cache);
qcow2_cache_flush(bs, s->refcount_block_cache);
qcow2_cache_flush(bs, s->l2_table_cache);
qcow2_cache_flush(bs, s->refcount_block_cache);
qcow2_mark_clean(bs);
}
qcow2_mark_clean(bs);
qcow2_cache_destroy(bs, s->l2_table_cache);
qcow2_cache_destroy(bs, s->refcount_block_cache);
@@ -1178,10 +1166,11 @@ static void qcow2_invalidate_cache(BlockDriverState *bs)
qcow2_close(bs);
bdrv_invalidate_cache(bs->file);
options = qdict_new();
qdict_put(options, QCOW2_OPT_LAZY_REFCOUNTS,
qbool_from_int(s->use_lazy_refcounts));
memset(s, 0, sizeof(BDRVQcowState));
options = qdict_clone_shallow(bs->options);
qcow2_open(bs, options, flags, NULL);
QDECREF(options);
@@ -1405,34 +1394,34 @@ static int preallocate(BlockDriverState *bs)
int ret;
QCowL2Meta *meta;
nb_sectors = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
nb_sectors = bdrv_getlength(bs) >> 9;
offset = 0;
while (nb_sectors) {
num = MIN(nb_sectors, INT_MAX >> BDRV_SECTOR_BITS);
ret = qcow2_alloc_cluster_offset(bs, offset, &num,
num = MIN(nb_sectors, INT_MAX >> 9);
ret = qcow2_alloc_cluster_offset(bs, offset, 0, num, &num,
&host_offset, &meta);
if (ret < 0) {
return ret;
}
if (meta != NULL) {
ret = qcow2_alloc_cluster_link_l2(bs, meta);
if (ret < 0) {
qcow2_free_any_clusters(bs, meta->alloc_offset,
meta->nb_clusters, QCOW2_DISCARD_NEVER);
return ret;
}
ret = qcow2_alloc_cluster_link_l2(bs, meta);
if (ret < 0) {
qcow2_free_any_clusters(bs, meta->alloc_offset, meta->nb_clusters,
QCOW2_DISCARD_NEVER);
return ret;
}
/* There are no dependent requests, but we need to remove our
* request from the list of in-flight requests */
/* There are no dependent requests, but we need to remove our request
* from the list of in-flight requests */
if (meta != NULL) {
QLIST_REMOVE(meta, next_in_flight);
}
/* TODO Preallocate data if requested */
nb_sectors -= num;
offset += num << BDRV_SECTOR_BITS;
offset += num << 9;
}
/*
@@ -1441,10 +1430,9 @@ static int preallocate(BlockDriverState *bs)
* EOF). Extend the image to the last allocated sector.
*/
if (host_offset != 0) {
uint8_t buf[BDRV_SECTOR_SIZE];
memset(buf, 0, BDRV_SECTOR_SIZE);
ret = bdrv_write(bs->file, (host_offset >> BDRV_SECTOR_BITS) + num - 1,
buf, 1);
uint8_t buf[512];
memset(buf, 0, 512);
ret = bdrv_write(bs->file, (host_offset >> 9) + num - 1, buf, 1);
if (ret < 0) {
return ret;
}
@@ -1483,7 +1471,7 @@ static int qcow2_create2(const char *filename, int64_t total_size,
* size for any qcow2 image.
*/
BlockDriverState* bs;
QCowHeader *header;
QCowHeader header;
uint8_t* refcount_table;
Error *local_err = NULL;
int ret;
@@ -1494,43 +1482,37 @@ static int qcow2_create2(const char *filename, int64_t total_size,
return ret;
}
bs = NULL;
ret = bdrv_open(&bs, filename, NULL, NULL, BDRV_O_RDWR | BDRV_O_PROTOCOL,
NULL, &local_err);
ret = bdrv_file_open(&bs, filename, NULL, BDRV_O_RDWR, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
/* Write the header */
QEMU_BUILD_BUG_ON((1 << MIN_CLUSTER_BITS) < sizeof(*header));
header = g_malloc0(cluster_size);
*header = (QCowHeader) {
.magic = cpu_to_be32(QCOW_MAGIC),
.version = cpu_to_be32(version),
.cluster_bits = cpu_to_be32(cluster_bits),
.size = cpu_to_be64(0),
.l1_table_offset = cpu_to_be64(0),
.l1_size = cpu_to_be32(0),
.refcount_table_offset = cpu_to_be64(cluster_size),
.refcount_table_clusters = cpu_to_be32(1),
.refcount_order = cpu_to_be32(3 + REFCOUNT_SHIFT),
.header_length = cpu_to_be32(sizeof(*header)),
};
memset(&header, 0, sizeof(header));
header.magic = cpu_to_be32(QCOW_MAGIC);
header.version = cpu_to_be32(version);
header.cluster_bits = cpu_to_be32(cluster_bits);
header.size = cpu_to_be64(0);
header.l1_table_offset = cpu_to_be64(0);
header.l1_size = cpu_to_be32(0);
header.refcount_table_offset = cpu_to_be64(cluster_size);
header.refcount_table_clusters = cpu_to_be32(1);
header.refcount_order = cpu_to_be32(3 + REFCOUNT_SHIFT);
header.header_length = cpu_to_be32(sizeof(header));
if (flags & BLOCK_FLAG_ENCRYPT) {
header->crypt_method = cpu_to_be32(QCOW_CRYPT_AES);
header.crypt_method = cpu_to_be32(QCOW_CRYPT_AES);
} else {
header->crypt_method = cpu_to_be32(QCOW_CRYPT_NONE);
header.crypt_method = cpu_to_be32(QCOW_CRYPT_NONE);
}
if (flags & BLOCK_FLAG_LAZY_REFCOUNTS) {
header->compatible_features |=
header.compatible_features |=
cpu_to_be64(QCOW2_COMPAT_LAZY_REFCOUNTS);
}
ret = bdrv_pwrite(bs, 0, header, cluster_size);
g_free(header);
ret = bdrv_pwrite(bs, 0, &header, sizeof(header));
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not write qcow2 header");
goto out;
@@ -1546,8 +1528,7 @@ static int qcow2_create2(const char *filename, int64_t total_size,
goto out;
}
bdrv_unref(bs);
bs = NULL;
bdrv_close(bs);
/*
* And now open the image and make it consistent first (i.e. increase the
@@ -1556,7 +1537,7 @@ static int qcow2_create2(const char *filename, int64_t total_size,
*/
BlockDriver* drv = bdrv_find_format("qcow2");
assert(drv != NULL);
ret = bdrv_open(&bs, filename, NULL, NULL,
ret = bdrv_open(bs, filename, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH, drv, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
@@ -1603,23 +1584,19 @@ static int qcow2_create2(const char *filename, int64_t total_size,
}
}
bdrv_unref(bs);
bs = NULL;
bdrv_close(bs);
/* Reopen the image without BDRV_O_NO_FLUSH to flush it before returning */
ret = bdrv_open(&bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_NO_BACKING,
drv, &local_err);
if (local_err) {
ret = bdrv_open(bs, filename, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB, drv, &local_err);
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto out;
}
ret = 0;
out:
if (bs) {
bdrv_unref(bs);
}
bdrv_unref(bs);
return ret;
}
@@ -1692,14 +1669,34 @@ static int qcow2_create(const char *filename, QEMUOptionParameter *options,
ret = qcow2_create2(filename, sectors, backing_file, backing_fmt, flags,
cluster_size, prealloc, options, version, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
}
static int qcow2_make_empty(BlockDriverState *bs)
{
#if 0
/* XXX: not correct */
BDRVQcowState *s = bs->opaque;
uint32_t l1_length = s->l1_size * sizeof(uint64_t);
int ret;
memset(s->l1_table, 0, l1_length);
if (bdrv_pwrite(bs->file, s->l1_table_offset, s->l1_table, l1_length) < 0)
return -1;
ret = bdrv_truncate(bs->file, s->l1_table_offset + l1_length);
if (ret < 0)
return ret;
l2_cache_reset(bs);
#endif
return 0;
}
static coroutine_fn int qcow2_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
int64_t sector_num, int nb_sectors)
{
int ret;
BDRVQcowState *s = bs->opaque;
@@ -1895,8 +1892,6 @@ static coroutine_fn int qcow2_co_flush_to_os(BlockDriverState *bs)
static int qcow2_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
BDRVQcowState *s = bs->opaque;
bdi->unallocated_blocks_are_zero = true;
bdi->can_write_zeroes_with_unmap = (s->qcow_version >= 3);
bdi->cluster_size = s->cluster_size;
bdi->vm_state_offset = qcow2_vm_state_offset(s);
return 0;
@@ -2241,6 +2236,7 @@ static BlockDriver bdrv_qcow2 = {
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_get_block_status = qcow2_co_get_block_status,
.bdrv_set_key = qcow2_set_key,
.bdrv_make_empty = qcow2_make_empty,
.bdrv_co_readv = qcow2_co_readv,
.bdrv_co_writev = qcow2_co_writev,
@@ -2264,7 +2260,6 @@ static BlockDriver bdrv_qcow2 = {
.bdrv_change_backing_file = qcow2_change_backing_file,
.bdrv_refresh_limits = qcow2_refresh_limits,
.bdrv_invalidate_cache = qcow2_invalidate_cache,
.create_options = qcow2_create_options,
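
In the qcow2_create2() hunk above, every header field passes through cpu_to_be32()/cpu_to_be64() before being written, because qcow2 stores its header big-endian on disk regardless of host byte order. A minimal standalone sketch of that conversion (plain C with hand-rolled byte shuffling instead of QEMU's helpers, values chosen for illustration):

#include <stdint.h>
#include <stdio.h>

/* Store a 32-bit value in big-endian order, as cpu_to_be32() does. */
static void store_be32(uint8_t *p, uint32_t v)
{
    p[0] = v >> 24;
    p[1] = v >> 16;
    p[2] = v >> 8;
    p[3] = v;
}

int main(void)
{
    uint8_t buf[4];

    store_be32(buf, 0x514649fbU);   /* the qcow2 magic, "QFI\xfb" */
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
    return 0;
}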

View File

@@ -340,11 +340,11 @@ typedef enum QCow2MetadataOverlap {
#define QCOW2_OL_ALL \
(QCOW2_OL_CACHED | QCOW2_OL_INACTIVE_L2)
#define L1E_OFFSET_MASK 0x00fffffffffffe00ULL
#define L2E_OFFSET_MASK 0x00fffffffffffe00ULL
#define L1E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_COMPRESSED_OFFSET_SIZE_MASK 0x3fffffffffffffffULL
#define REFT_OFFSET_MASK 0xfffffffffffffe00ULL
#define REFT_OFFSET_MASK 0xffffffffffffff00ULL
static inline int64_t start_of_cluster(BDRVQcowState *s, int64_t offset)
{
@@ -468,7 +468,7 @@ void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset);
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *host_offset, QCowL2Meta **m);
int n_start, int n_end, int *num, uint64_t *host_offset, QCowL2Meta **m);
uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size);
@@ -488,10 +488,7 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
const char *name,
Error **errp);
int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp);
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name);
void qcow2_free_snapshots(BlockDriverState *bs);
int qcow2_read_snapshots(BlockDriverState *bs);

View File

@@ -391,15 +391,14 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
qed_header_le_to_cpu(&le_header, &s->header);
if (s->header.magic != QED_MAGIC) {
error_setg(errp, "Image not in QED format");
return -EINVAL;
return -EMEDIUMTYPE;
}
if (s->header.features & ~QED_FEATURE_MASK) {
/* image uses unsupported feature bits */
char buf[64];
snprintf(buf, sizeof(buf), "%" PRIx64,
s->header.features & ~QED_FEATURE_MASK);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "QED", buf);
return -ENOTSUP;
}
@@ -507,15 +506,6 @@ out:
return ret;
}
static int bdrv_qed_refresh_limits(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
bs->bl.write_zeroes_alignment = s->header.cluster_size >> BDRV_SECTOR_BITS;
return 0;
}
/* We have nothing to do for QED reopen, stubs just return
* success */
static int bdrv_qed_reopen_prepare(BDRVReopenState *state,
@@ -546,8 +536,7 @@ static void bdrv_qed_close(BlockDriverState *bs)
static int qed_create(const char *filename, uint32_t cluster_size,
uint64_t image_size, uint32_t table_size,
const char *backing_file, const char *backing_fmt,
Error **errp)
const char *backing_file, const char *backing_fmt)
{
QEDHeader header = {
.magic = QED_MAGIC,
@@ -564,20 +553,20 @@ static int qed_create(const char *filename, uint32_t cluster_size,
size_t l1_size = header.cluster_size * header.table_size;
Error *local_err = NULL;
int ret = 0;
BlockDriverState *bs;
BlockDriverState *bs = NULL;
ret = bdrv_create_file(filename, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qerror_report_err(local_err);
error_free(local_err);
return ret;
}
bs = NULL;
ret = bdrv_open(&bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_PROTOCOL, NULL,
&local_err);
ret = bdrv_file_open(&bs, filename, NULL, BDRV_O_RDWR | BDRV_O_CACHE_WB,
&local_err);
if (ret < 0) {
error_propagate(errp, local_err);
qerror_report_err(local_err);
error_free(local_err);
return ret;
}
@@ -667,7 +656,7 @@ static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options,
}
return qed_create(filename, cluster_size, image_size, table_size,
backing_file, backing_fmt, errp);
backing_file, backing_fmt);
}
typedef struct {
@@ -733,6 +722,11 @@ static int64_t coroutine_fn bdrv_qed_co_get_block_status(BlockDriverState *bs,
return cb.status;
}
static int bdrv_qed_make_empty(BlockDriverState *bs)
{
return -ENOTSUP;
}
static BDRVQEDState *acb_to_s(QEDAIOCB *acb)
{
return acb->common.bs->opaque;
@@ -1403,8 +1397,7 @@ static void coroutine_fn qed_co_write_zeroes_cb(void *opaque, int ret)
static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors,
BdrvRequestFlags flags)
int nb_sectors)
{
BlockDriverAIOCB *blockacb;
BDRVQEDState *s = bs->opaque;
@@ -1481,8 +1474,6 @@ static int bdrv_qed_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
memset(bdi, 0, sizeof(*bdi));
bdi->cluster_size = s->header.cluster_size;
bdi->is_dirty = s->header.features & QED_F_NEED_CHECK;
bdi->unallocated_blocks_are_zero = true;
bdi->can_write_zeroes_with_unmap = true;
return 0;
}
@@ -1563,9 +1554,6 @@ static void bdrv_qed_invalidate_cache(BlockDriverState *bs)
BDRVQEDState *s = bs->opaque;
bdrv_qed_close(bs);
bdrv_invalidate_cache(bs->file);
memset(s, 0, sizeof(BDRVQEDState));
bdrv_qed_open(bs, NULL, bs->open_flags, NULL);
}
@@ -1617,13 +1605,13 @@ static BlockDriver bdrv_qed = {
.bdrv_create = bdrv_qed_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_get_block_status = bdrv_qed_co_get_block_status,
.bdrv_make_empty = bdrv_qed_make_empty,
.bdrv_aio_readv = bdrv_qed_aio_readv,
.bdrv_aio_writev = bdrv_qed_aio_writev,
.bdrv_co_write_zeroes = bdrv_qed_co_write_zeroes,
.bdrv_truncate = bdrv_qed_truncate,
.bdrv_getlength = bdrv_qed_getlength,
.bdrv_get_info = bdrv_qed_get_info,
.bdrv_refresh_limits = bdrv_qed_refresh_limits,
.bdrv_change_backing_file = bdrv_qed_change_backing_file,
.bdrv_invalidate_cache = bdrv_qed_invalidate_cache,
.bdrv_check = bdrv_qed_check,

View File

@@ -1,872 +0,0 @@
/*
* Quorum Block filter
*
* Copyright (C) 2012-2014 Nodalink, EURL.
*
* Author:
* Benoît Canet <benoit.canet@irqsave.net>
*
* Based on the design and code of blkverify.c (Copyright (C) 2010 IBM, Corp)
* and blkmirror.c (Copyright (C) 2011 Red Hat, Inc).
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#include "block/block_int.h"
#include "qapi/qmp/qjson.h"
#define HASH_LENGTH 32
#define QUORUM_OPT_VOTE_THRESHOLD "vote-threshold"
#define QUORUM_OPT_BLKVERIFY "blkverify"
/* This union holds a vote hash value */
typedef union QuorumVoteValue {
char h[HASH_LENGTH]; /* SHA-256 hash */
int64_t l; /* simpler 64 bits hash */
} QuorumVoteValue;
/* A vote item */
typedef struct QuorumVoteItem {
int index;
QLIST_ENTRY(QuorumVoteItem) next;
} QuorumVoteItem;
/* this structure is a vote version. A version is the set of votes sharing the
* same vote value.
* The set of votes will be tracked with the items field and its cardinality is
* vote_count.
*/
typedef struct QuorumVoteVersion {
QuorumVoteValue value;
int index;
int vote_count;
QLIST_HEAD(, QuorumVoteItem) items;
QLIST_ENTRY(QuorumVoteVersion) next;
} QuorumVoteVersion;
/* this structure holds a group of vote versions together */
typedef struct QuorumVotes {
QLIST_HEAD(, QuorumVoteVersion) vote_list;
bool (*compare)(QuorumVoteValue *a, QuorumVoteValue *b);
} QuorumVotes;
/* the following structure holds the state of one quorum instance */
typedef struct BDRVQuorumState {
BlockDriverState **bs; /* children BlockDriverStates */
int num_children; /* children count */
int threshold; /* if fewer than threshold children's reads returned the
* same result, a quorum error occurs.
*/
bool is_blkverify; /* true if the driver is in blkverify mode
* Writes are mirrored on two children devices.
* On reads the two children devices' contents are
* compared and if a difference is spotted its
* location is printed and the code aborts.
* It is useful to debug other block drivers by
* comparing them with a reference one.
*/
} BDRVQuorumState;
typedef struct QuorumAIOCB QuorumAIOCB;
/* Quorum will create one instance of the following structure per operation it
* performs on its children.
* So for each read/write operation coming from the upper layer there will be
* $children_count QuorumChildRequest.
*/
typedef struct QuorumChildRequest {
BlockDriverAIOCB *aiocb;
QEMUIOVector qiov;
uint8_t *buf;
int ret;
QuorumAIOCB *parent;
} QuorumChildRequest;
/* Quorum will use the following structure to track progress of each read/write
* operation received by the upper layer.
* This structure holds pointers to the QuorumChildRequest structure instances
* used to do operations on each child and track overall progress.
*/
struct QuorumAIOCB {
BlockDriverAIOCB common;
/* Request metadata */
uint64_t sector_num;
int nb_sectors;
QEMUIOVector *qiov; /* calling IOV */
QuorumChildRequest *qcrs; /* individual child requests */
int count; /* number of completed AIOCB */
int success_count; /* number of successfully completed AIOCB */
QuorumVotes votes;
bool is_read;
int vote_ret;
};
static void quorum_vote(QuorumAIOCB *acb);
static void quorum_aio_cancel(BlockDriverAIOCB *blockacb)
{
QuorumAIOCB *acb = container_of(blockacb, QuorumAIOCB, common);
BDRVQuorumState *s = acb->common.bs->opaque;
int i;
/* cancel all callbacks */
for (i = 0; i < s->num_children; i++) {
bdrv_aio_cancel(acb->qcrs[i].aiocb);
}
g_free(acb->qcrs);
qemu_aio_release(acb);
}
static AIOCBInfo quorum_aiocb_info = {
.aiocb_size = sizeof(QuorumAIOCB),
.cancel = quorum_aio_cancel,
};
static void quorum_aio_finalize(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
int i, ret = 0;
if (acb->vote_ret) {
ret = acb->vote_ret;
}
acb->common.cb(acb->common.opaque, ret);
if (acb->is_read) {
for (i = 0; i < s->num_children; i++) {
qemu_vfree(acb->qcrs[i].buf);
qemu_iovec_destroy(&acb->qcrs[i].qiov);
}
}
g_free(acb->qcrs);
qemu_aio_release(acb);
}
static bool quorum_sha256_compare(QuorumVoteValue *a, QuorumVoteValue *b)
{
return !memcmp(a->h, b->h, HASH_LENGTH);
}
static bool quorum_64bits_compare(QuorumVoteValue *a, QuorumVoteValue *b)
{
return a->l == b->l;
}
static QuorumAIOCB *quorum_aio_get(BDRVQuorumState *s,
BlockDriverState *bs,
QEMUIOVector *qiov,
uint64_t sector_num,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
QuorumAIOCB *acb = qemu_aio_get(&quorum_aiocb_info, bs, cb, opaque);
int i;
acb->common.bs->opaque = s;
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->qiov = qiov;
acb->qcrs = g_new0(QuorumChildRequest, s->num_children);
acb->count = 0;
acb->success_count = 0;
acb->votes.compare = quorum_sha256_compare;
QLIST_INIT(&acb->votes.vote_list);
acb->is_read = false;
acb->vote_ret = 0;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].buf = NULL;
acb->qcrs[i].ret = 0;
acb->qcrs[i].parent = acb;
}
return acb;
}
static void quorum_report_bad(QuorumAIOCB *acb, char *node_name, int ret)
{
QObject *data;
assert(node_name);
data = qobject_from_jsonf("{ 'node-name': %s"
", 'sector-num': %" PRId64
", 'sectors-count': %d }",
node_name, acb->sector_num, acb->nb_sectors);
if (ret < 0) {
QDict *dict = qobject_to_qdict(data);
qdict_put(dict, "error", qstring_from_str(strerror(-ret)));
}
monitor_protocol_event(QEVENT_QUORUM_REPORT_BAD, data);
qobject_decref(data);
}
static void quorum_report_failure(QuorumAIOCB *acb)
{
QObject *data;
const char *reference = acb->common.bs->device_name[0] ?
acb->common.bs->device_name :
acb->common.bs->node_name;
data = qobject_from_jsonf("{ 'reference': %s"
", 'sector-num': %" PRId64
", 'sectors-count': %d }",
reference, acb->sector_num, acb->nb_sectors);
monitor_protocol_event(QEVENT_QUORUM_FAILURE, data);
qobject_decref(data);
}
static int quorum_vote_error(QuorumAIOCB *acb);
static bool quorum_has_too_much_io_failed(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
if (acb->success_count < s->threshold) {
acb->vote_ret = quorum_vote_error(acb);
quorum_report_failure(acb);
return true;
}
return false;
}
static void quorum_aio_cb(void *opaque, int ret)
{
QuorumChildRequest *sacb = opaque;
QuorumAIOCB *acb = sacb->parent;
BDRVQuorumState *s = acb->common.bs->opaque;
sacb->ret = ret;
acb->count++;
if (ret == 0) {
acb->success_count++;
} else {
quorum_report_bad(acb, sacb->aiocb->bs->node_name, ret);
}
assert(acb->count <= s->num_children);
assert(acb->success_count <= s->num_children);
if (acb->count < s->num_children) {
return;
}
/* Do the vote on read */
if (acb->is_read) {
quorum_vote(acb);
} else {
quorum_has_too_much_io_failed(acb);
}
quorum_aio_finalize(acb);
}
static void quorum_report_bad_versions(BDRVQuorumState *s,
QuorumAIOCB *acb,
QuorumVoteValue *value)
{
QuorumVoteVersion *version;
QuorumVoteItem *item;
QLIST_FOREACH(version, &acb->votes.vote_list, next) {
if (acb->votes.compare(&version->value, value)) {
continue;
}
QLIST_FOREACH(item, &version->items, next) {
quorum_report_bad(acb, s->bs[item->index]->node_name, 0);
}
}
}
static void quorum_copy_qiov(QEMUIOVector *dest, QEMUIOVector *source)
{
int i;
assert(dest->niov == source->niov);
assert(dest->size == source->size);
for (i = 0; i < source->niov; i++) {
assert(dest->iov[i].iov_len == source->iov[i].iov_len);
memcpy(dest->iov[i].iov_base,
source->iov[i].iov_base,
source->iov[i].iov_len);
}
}
static void quorum_count_vote(QuorumVotes *votes,
QuorumVoteValue *value,
int index)
{
QuorumVoteVersion *v = NULL, *version = NULL;
QuorumVoteItem *item;
/* look if we have something with this hash */
QLIST_FOREACH(v, &votes->vote_list, next) {
if (votes->compare(&v->value, value)) {
version = v;
break;
}
}
/* It's a version not yet in the list; add it */
if (!version) {
version = g_new0(QuorumVoteVersion, 1);
QLIST_INIT(&version->items);
memcpy(&version->value, value, sizeof(version->value));
version->index = index;
version->vote_count = 0;
QLIST_INSERT_HEAD(&votes->vote_list, version, next);
}
version->vote_count++;
item = g_new0(QuorumVoteItem, 1);
item->index = index;
QLIST_INSERT_HEAD(&version->items, item, next);
}
static void quorum_free_vote_list(QuorumVotes *votes)
{
QuorumVoteVersion *version, *next_version;
QuorumVoteItem *item, *next_item;
QLIST_FOREACH_SAFE(version, &votes->vote_list, next, next_version) {
QLIST_REMOVE(version, next);
QLIST_FOREACH_SAFE(item, &version->items, next, next_item) {
QLIST_REMOVE(item, next);
g_free(item);
}
g_free(version);
}
}
static int quorum_compute_hash(QuorumAIOCB *acb, int i, QuorumVoteValue *hash)
{
int j, ret;
gnutls_hash_hd_t dig;
QEMUIOVector *qiov = &acb->qcrs[i].qiov;
ret = gnutls_hash_init(&dig, GNUTLS_DIG_SHA256);
if (ret < 0) {
return ret;
}
for (j = 0; j < qiov->niov; j++) {
ret = gnutls_hash(dig, qiov->iov[j].iov_base, qiov->iov[j].iov_len);
if (ret < 0) {
break;
}
}
gnutls_hash_deinit(dig, (void *) hash);
return ret;
}
static QuorumVoteVersion *quorum_get_vote_winner(QuorumVotes *votes)
{
int max = 0;
QuorumVoteVersion *candidate, *winner = NULL;
QLIST_FOREACH(candidate, &votes->vote_list, next) {
if (candidate->vote_count > max) {
max = candidate->vote_count;
winner = candidate;
}
}
return winner;
}
/* qemu_iovec_compare is handy for blkverify mode because it returns the first
* differing byte location. Yet it is handcoded to compare vectors one byte
* after another so it does not benefit from the libc SIMD optimizations.
* quorum_iovec_compare is written for speed and should be used in the non
* blkverify mode of quorum.
*/
static bool quorum_iovec_compare(QEMUIOVector *a, QEMUIOVector *b)
{
int i;
int result;
assert(a->niov == b->niov);
for (i = 0; i < a->niov; i++) {
assert(a->iov[i].iov_len == b->iov[i].iov_len);
result = memcmp(a->iov[i].iov_base,
b->iov[i].iov_base,
a->iov[i].iov_len);
if (result) {
return false;
}
}
return true;
}
static void GCC_FMT_ATTR(2, 3) quorum_err(QuorumAIOCB *acb,
const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
fprintf(stderr, "quorum: sector_num=%" PRId64 " nb_sectors=%d ",
acb->sector_num, acb->nb_sectors);
vfprintf(stderr, fmt, ap);
fprintf(stderr, "\n");
va_end(ap);
exit(1);
}
static bool quorum_compare(QuorumAIOCB *acb,
QEMUIOVector *a,
QEMUIOVector *b)
{
BDRVQuorumState *s = acb->common.bs->opaque;
ssize_t offset;
/* This driver will replace blkverify in this particular case */
if (s->is_blkverify) {
offset = qemu_iovec_compare(a, b);
if (offset != -1) {
quorum_err(acb, "contents mismatch in sector %" PRId64,
acb->sector_num +
(uint64_t)(offset / BDRV_SECTOR_SIZE));
}
return true;
}
return quorum_iovec_compare(a, b);
}
/* Do a vote to get the error code */
static int quorum_vote_error(QuorumAIOCB *acb)
{
BDRVQuorumState *s = acb->common.bs->opaque;
QuorumVoteVersion *winner = NULL;
QuorumVotes error_votes;
QuorumVoteValue result_value;
int i, ret = 0;
bool error = false;
QLIST_INIT(&error_votes.vote_list);
error_votes.compare = quorum_64bits_compare;
for (i = 0; i < s->num_children; i++) {
ret = acb->qcrs[i].ret;
if (ret) {
error = true;
result_value.l = ret;
quorum_count_vote(&error_votes, &result_value, i);
}
}
if (error) {
winner = quorum_get_vote_winner(&error_votes);
ret = winner->value.l;
}
quorum_free_vote_list(&error_votes);
return ret;
}
static void quorum_vote(QuorumAIOCB *acb)
{
bool quorum = true;
int i, j, ret;
QuorumVoteValue hash;
BDRVQuorumState *s = acb->common.bs->opaque;
QuorumVoteVersion *winner;
if (quorum_has_too_much_io_failed(acb)) {
return;
}
/* get the index of the first successful read */
for (i = 0; i < s->num_children; i++) {
if (!acb->qcrs[i].ret) {
break;
}
}
assert(i < s->num_children);
/* compare this read with all other successful reads stopping at quorum
* failure
*/
for (j = i + 1; j < s->num_children; j++) {
if (acb->qcrs[j].ret) {
continue;
}
quorum = quorum_compare(acb, &acb->qcrs[i].qiov, &acb->qcrs[j].qiov);
if (!quorum) {
break;
}
}
/* Every successful read agrees */
if (quorum) {
quorum_copy_qiov(acb->qiov, &acb->qcrs[i].qiov);
return;
}
/* compute hashes for each successful read, also store indexes */
for (i = 0; i < s->num_children; i++) {
if (acb->qcrs[i].ret) {
continue;
}
ret = quorum_compute_hash(acb, i, &hash);
/* if ever the hash computation failed */
if (ret < 0) {
acb->vote_ret = ret;
goto free_exit;
}
quorum_count_vote(&acb->votes, &hash, i);
}
/* vote to select the most represented version */
winner = quorum_get_vote_winner(&acb->votes);
/* if the winner count is smaller than threshold the read fails */
if (winner->vote_count < s->threshold) {
quorum_report_failure(acb);
acb->vote_ret = -EIO;
goto free_exit;
}
/* we have a winner: copy it */
quorum_copy_qiov(acb->qiov, &acb->qcrs[winner->index].qiov);
/* some versions are bad; print them */
quorum_report_bad_versions(s, acb, &winner->value);
free_exit:
/* free lists */
quorum_free_vote_list(&acb->votes);
}
static BlockDriverAIOCB *quorum_aio_readv(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
BDRVQuorumState *s = bs->opaque;
QuorumAIOCB *acb = quorum_aio_get(s, bs, qiov, sector_num,
nb_sectors, cb, opaque);
int i;
acb->is_read = true;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].buf = qemu_blockalign(s->bs[i], qiov->size);
qemu_iovec_init(&acb->qcrs[i].qiov, qiov->niov);
qemu_iovec_clone(&acb->qcrs[i].qiov, qiov, acb->qcrs[i].buf);
}
for (i = 0; i < s->num_children; i++) {
bdrv_aio_readv(s->bs[i], sector_num, &acb->qcrs[i].qiov, nb_sectors,
quorum_aio_cb, &acb->qcrs[i]);
}
return &acb->common;
}
static BlockDriverAIOCB *quorum_aio_writev(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
BDRVQuorumState *s = bs->opaque;
QuorumAIOCB *acb = quorum_aio_get(s, bs, qiov, sector_num, nb_sectors,
cb, opaque);
int i;
for (i = 0; i < s->num_children; i++) {
acb->qcrs[i].aiocb = bdrv_aio_writev(s->bs[i], sector_num, qiov,
nb_sectors, &quorum_aio_cb,
&acb->qcrs[i]);
}
return &acb->common;
}
static int64_t quorum_getlength(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
int64_t result;
int i;
/* check that all files have the same length */
result = bdrv_getlength(s->bs[0]);
if (result < 0) {
return result;
}
for (i = 1; i < s->num_children; i++) {
int64_t value = bdrv_getlength(s->bs[i]);
if (value < 0) {
return value;
}
if (value != result) {
return -EIO;
}
}
return result;
}
static void quorum_invalidate_cache(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_invalidate_cache(s->bs[i]);
}
}
static coroutine_fn int quorum_co_flush(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
QuorumVoteVersion *winner = NULL;
QuorumVotes error_votes;
QuorumVoteValue result_value;
int i;
int result = 0;
QLIST_INIT(&error_votes.vote_list);
error_votes.compare = quorum_64bits_compare;
for (i = 0; i < s->num_children; i++) {
result = bdrv_co_flush(s->bs[i]);
result_value.l = result;
quorum_count_vote(&error_votes, &result_value, i);
}
winner = quorum_get_vote_winner(&error_votes);
result = winner->value.l;
quorum_free_vote_list(&error_votes);
return result;
}
static bool quorum_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
bool perm = bdrv_recurse_is_first_non_filter(s->bs[i],
candidate);
if (perm) {
return true;
}
}
return false;
}
static int quorum_valid_threshold(int threshold, int num_children, Error **errp)
{
if (threshold < 1) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"vote-threshold", "value >= 1");
return -ERANGE;
}
if (threshold > num_children) {
error_setg(errp, "threshold may not exceed children count");
return -ERANGE;
}
return 0;
}
static QemuOptsList quorum_runtime_opts = {
.name = "quorum",
.head = QTAILQ_HEAD_INITIALIZER(quorum_runtime_opts.head),
.desc = {
{
.name = QUORUM_OPT_VOTE_THRESHOLD,
.type = QEMU_OPT_NUMBER,
.help = "The number of vote needed for reaching quorum",
},
{
.name = QUORUM_OPT_BLKVERIFY,
.type = QEMU_OPT_BOOL,
.help = "Trigger block verify mode if set",
},
{ /* end of list */ }
},
};
static int quorum_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVQuorumState *s = bs->opaque;
Error *local_err = NULL;
QemuOpts *opts;
bool *opened;
QDict *sub = NULL;
QList *list = NULL;
const QListEntry *lentry;
int i;
int ret = 0;
qdict_flatten(options);
qdict_extract_subqdict(options, &sub, "children.");
qdict_array_split(sub, &list);
if (qdict_size(sub)) {
error_setg(&local_err, "Invalid option children.%s",
qdict_first(sub)->key);
ret = -EINVAL;
goto exit;
}
/* count how many different children are present */
s->num_children = qlist_size(list);
if (s->num_children < 2) {
error_setg(&local_err,
"Number of provided children must be greater than 1");
ret = -EINVAL;
goto exit;
}
opts = qemu_opts_create(&quorum_runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (error_is_set(&local_err)) {
ret = -EINVAL;
goto exit;
}
s->threshold = qemu_opt_get_number(opts, QUORUM_OPT_VOTE_THRESHOLD, 0);
/* and validate it against s->num_children */
ret = quorum_valid_threshold(s->threshold, s->num_children, &local_err);
if (ret < 0) {
goto exit;
}
/* is the driver in blkverify mode */
if (qemu_opt_get_bool(opts, QUORUM_OPT_BLKVERIFY, false) &&
s->num_children == 2 && s->threshold == 2) {
s->is_blkverify = true;
} else if (qemu_opt_get_bool(opts, QUORUM_OPT_BLKVERIFY, false)) {
fprintf(stderr, "blkverify mode is set by setting blkverify=on "
"and using two files with vote_threshold=2\n");
}
/* allocate the children BlockDriverState array */
s->bs = g_new0(BlockDriverState *, s->num_children);
opened = g_new0(bool, s->num_children);
for (i = 0, lentry = qlist_first(list); lentry;
lentry = qlist_next(lentry), i++) {
QDict *d;
QString *string;
switch (qobject_type(lentry->value))
{
/* List of options */
case QTYPE_QDICT:
d = qobject_to_qdict(lentry->value);
QINCREF(d);
ret = bdrv_open(&s->bs[i], NULL, NULL, d, flags, NULL,
&local_err);
break;
/* QMP reference */
case QTYPE_QSTRING:
string = qobject_to_qstring(lentry->value);
ret = bdrv_open(&s->bs[i], NULL, qstring_get_str(string), NULL,
flags, NULL, &local_err);
break;
default:
error_setg(&local_err, "Specification of child block device %i "
"is invalid", i);
ret = -EINVAL;
}
if (ret < 0) {
goto close_exit;
}
opened[i] = true;
}
g_free(opened);
goto exit;
close_exit:
/* cleanup on error */
for (i = 0; i < s->num_children; i++) {
if (!opened[i]) {
continue;
}
bdrv_unref(s->bs[i]);
}
g_free(s->bs);
g_free(opened);
exit:
/* propagate error */
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
QDECREF(list);
QDECREF(sub);
return ret;
}
static void quorum_close(BlockDriverState *bs)
{
BDRVQuorumState *s = bs->opaque;
int i;
for (i = 0; i < s->num_children; i++) {
bdrv_unref(s->bs[i]);
}
g_free(s->bs);
}
static BlockDriver bdrv_quorum = {
.format_name = "quorum",
.protocol_name = "quorum",
.instance_size = sizeof(BDRVQuorumState),
.bdrv_file_open = quorum_open,
.bdrv_close = quorum_close,
.bdrv_co_flush_to_disk = quorum_co_flush,
.bdrv_getlength = quorum_getlength,
.bdrv_aio_readv = quorum_aio_readv,
.bdrv_aio_writev = quorum_aio_writev,
.bdrv_invalidate_cache = quorum_invalidate_cache,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = quorum_recurse_is_first_non_filter,
};
static void bdrv_quorum_init(void)
{
bdrv_register(&bdrv_quorum);
}
block_init(bdrv_quorum_init);
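
The comments in this driver describe the core rule: a read only succeeds when at least vote-threshold children return identical data, and the most-represented version wins. A stripped-down sketch of that vote (hypothetical buffers and threshold, none of the AIOCB or hashing machinery above):

#include <stdio.h>
#include <string.h>

#define NUM_CHILDREN 3
#define THRESHOLD    2

int main(void)
{
    /* Pretend these are the buffers returned by the three children. */
    const char *reads[NUM_CHILDREN] = { "data-A", "data-A", "data-B" };
    int best = -1, best_count = 0;

    for (int i = 0; i < NUM_CHILDREN; i++) {
        int count = 0;
        for (int j = 0; j < NUM_CHILDREN; j++) {
            if (strcmp(reads[i], reads[j]) == 0) {
                count++;                 /* votes agreeing with child i */
            }
        }
        if (count > best_count) {
            best_count = count;
            best = i;
        }
    }

    if (best_count >= THRESHOLD) {
        printf("quorum reached: returning child %d's data (%s)\n",
               best, reads[best]);
    } else {
        printf("quorum failure: no version has %d matching reads\n", THRESHOLD);
    }
    return 0;
}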

View File

@@ -21,10 +21,9 @@
#define QEMU_AIO_IOCTL 0x0004
#define QEMU_AIO_FLUSH 0x0008
#define QEMU_AIO_DISCARD 0x0010
#define QEMU_AIO_WRITE_ZEROES 0x0020
#define QEMU_AIO_TYPE_MASK \
(QEMU_AIO_READ|QEMU_AIO_WRITE|QEMU_AIO_IOCTL|QEMU_AIO_FLUSH| \
QEMU_AIO_DISCARD|QEMU_AIO_WRITE_ZEROES)
QEMU_AIO_DISCARD)
/* AIO flags */
#define QEMU_AIO_MISALIGNED 0x1000
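
The hunk above changes which operation bits make up QEMU_AIO_TYPE_MASK. A small sketch of how such a mask splits an aio_type word into its operation type and modifier flags (the READ/WRITE values below are assumed for illustration):

#include <stdio.h>

#define QEMU_AIO_READ        0x0001   /* assumed value, for illustration */
#define QEMU_AIO_WRITE       0x0002   /* assumed value, for illustration */
#define QEMU_AIO_IOCTL       0x0004
#define QEMU_AIO_FLUSH       0x0008
#define QEMU_AIO_DISCARD     0x0010
#define QEMU_AIO_TYPE_MASK \
    (QEMU_AIO_READ|QEMU_AIO_WRITE|QEMU_AIO_IOCTL|QEMU_AIO_FLUSH|QEMU_AIO_DISCARD)
#define QEMU_AIO_MISALIGNED  0x1000

int main(void)
{
    int aio_type = QEMU_AIO_DISCARD | QEMU_AIO_MISALIGNED;

    printf("operation: 0x%x\n", aio_type & QEMU_AIO_TYPE_MASK);    /* 0x10 */
    printf("modifiers: 0x%x\n", aio_type & ~QEMU_AIO_TYPE_MASK);   /* 0x1000 */
    return 0;
}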

View File

@@ -127,8 +127,6 @@ typedef struct BDRVRawState {
int fd;
int type;
int open_flags;
size_t buf_align;
#if defined(__linux__)
/* linux floppy specific */
int64_t fd_open_time;
@@ -141,11 +139,9 @@ typedef struct BDRVRawState {
void *aio_ctx;
#endif
#ifdef CONFIG_XFS
bool is_xfs:1;
bool is_xfs : 1;
#endif
bool has_discard:1;
bool has_write_zeroes:1;
bool discard_zeroes:1;
bool has_discard : 1;
} BDRVRawState;
typedef struct BDRVRawReopenState {
@@ -215,76 +211,6 @@ static int raw_normalize_devicepath(const char **filename)
}
#endif
static void raw_probe_alignment(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
char *buf;
unsigned int sector_size;
/* For /dev/sg devices the alignment is not really used.
With buffered I/O, we don't have any restrictions. */
if (bs->sg || !(s->open_flags & O_DIRECT)) {
bs->request_alignment = 1;
s->buf_align = 1;
return;
}
/* Try a few ioctls to get the right size */
bs->request_alignment = 0;
s->buf_align = 0;
#ifdef BLKSSZGET
if (ioctl(s->fd, BLKSSZGET, &sector_size) >= 0) {
bs->request_alignment = sector_size;
}
#endif
#ifdef DKIOCGETBLOCKSIZE
if (ioctl(s->fd, DKIOCGETBLOCKSIZE, &sector_size) >= 0) {
bs->request_alignment = sector_size;
}
#endif
#ifdef DIOCGSECTORSIZE
if (ioctl(s->fd, DIOCGSECTORSIZE, &sector_size) >= 0) {
bs->request_alignment = sector_size;
}
#endif
#ifdef CONFIG_XFS
if (s->is_xfs) {
struct dioattr da;
if (xfsctl(NULL, s->fd, XFS_IOC_DIOINFO, &da) >= 0) {
bs->request_alignment = da.d_miniosz;
/* The kernel returns wrong information for d_mem */
/* s->buf_align = da.d_mem; */
}
}
#endif
/* If we could not get the sizes so far, we can only guess them */
if (!s->buf_align) {
size_t align;
buf = qemu_memalign(MAX_BLOCKSIZE, 2 * MAX_BLOCKSIZE);
for (align = 512; align <= MAX_BLOCKSIZE; align <<= 1) {
if (pread(s->fd, buf + align, MAX_BLOCKSIZE, 0) >= 0) {
s->buf_align = align;
break;
}
}
qemu_vfree(buf);
}
if (!bs->request_alignment) {
size_t align;
buf = qemu_memalign(s->buf_align, MAX_BLOCKSIZE);
for (align = 512; align <= MAX_BLOCKSIZE; align <<= 1) {
if (pread(s->fd, buf, align, 0) >= 0) {
bs->request_alignment = align;
break;
}
}
qemu_vfree(buf);
}
}
static void raw_parse_flags(int bdrv_flags, int *open_flags)
{
assert(open_flags != NULL);
@@ -336,17 +262,6 @@ error:
}
#endif
static void raw_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The filename does not have to be prefixed by the protocol name, since
* "file" is the default protocol; therefore, the return value of this
* function call can be ignored. */
strstart(filename, "file:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static QemuOptsList raw_runtime_opts = {
.name = "raw",
.head = QTAILQ_HEAD_INITIALIZER(raw_runtime_opts.head),
@@ -368,11 +283,10 @@ static int raw_open_common(BlockDriverState *bs, QDict *options,
Error *local_err = NULL;
const char *filename;
int fd, ret;
struct stat st;
opts = qemu_opts_create(&raw_runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&raw_runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
@@ -409,38 +323,10 @@ static int raw_open_common(BlockDriverState *bs, QDict *options,
}
#endif
s->has_discard = true;
s->has_write_zeroes = true;
if (fstat(s->fd, &st) < 0) {
error_setg_errno(errp, errno, "Could not stat file");
goto fail;
}
if (S_ISREG(st.st_mode)) {
s->discard_zeroes = true;
}
if (S_ISBLK(st.st_mode)) {
#ifdef BLKDISCARDZEROES
unsigned int arg;
if (ioctl(s->fd, BLKDISCARDZEROES, &arg) == 0 && arg) {
s->discard_zeroes = true;
}
#endif
#ifdef __linux__
/* On Linux 3.10, BLKDISCARD leaves stale data in the page cache. Do
* not rely on the contents of discarded blocks unless using O_DIRECT.
* Same for BLKZEROOUT.
*/
if (!(bs->open_flags & BDRV_O_NOCACHE)) {
s->discard_zeroes = false;
s->has_write_zeroes = false;
}
#endif
}
s->has_discard = 1;
#ifdef CONFIG_XFS
if (platform_test_xfs_fd(s->fd)) {
s->is_xfs = true;
s->is_xfs = 1;
}
#endif
@@ -459,7 +345,7 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
s->type = FTYPE_FILE;
ret = raw_open_common(bs, options, flags, 0, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
@@ -546,6 +432,7 @@ static int raw_reopen_prepare(BDRVReopenState *state,
return ret;
}
static void raw_reopen_commit(BDRVReopenState *state)
{
BDRVRawReopenState *raw_s = state->opaque;
@@ -581,15 +468,23 @@ static void raw_reopen_abort(BDRVReopenState *state)
state->opaque = NULL;
}
static int raw_refresh_limits(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
raw_probe_alignment(bs);
bs->bl.opt_mem_alignment = s->buf_align;
return 0;
}
/* XXX: use host sector size if necessary with:
#ifdef DIOCGSECTORSIZE
{
unsigned int sectorsize = 512;
if (!ioctl(fd, DIOCGSECTORSIZE, &sectorsize) &&
sectorsize > bufsize)
bufsize = sectorsize;
}
#endif
#ifdef CONFIG_COCOA
uint32_t blockSize = 512;
if ( !ioctl( fd, DKIOCGETBLOCKSIZE, &blockSize ) && blockSize > bufsize) {
bufsize = blockSize;
}
#endif
*/
static ssize_t handle_aiocb_ioctl(RawPosixAIOData *aiocb)
{
@@ -780,23 +675,6 @@ static ssize_t handle_aiocb_rw(RawPosixAIOData *aiocb)
}
#ifdef CONFIG_XFS
static int xfs_write_zeroes(BDRVRawState *s, int64_t offset, uint64_t bytes)
{
struct xfs_flock64 fl;
memset(&fl, 0, sizeof(fl));
fl.l_whence = SEEK_SET;
fl.l_start = offset;
fl.l_len = bytes;
if (xfsctl(NULL, s->fd, XFS_IOC_ZERO_RANGE, &fl) < 0) {
DEBUG_BLOCK_PRINT("cannot write zero range (%s)\n", strerror(errno));
return -errno;
}
return 0;
}
static int xfs_discard(BDRVRawState *s, int64_t offset, uint64_t bytes)
{
struct xfs_flock64 fl;
@@ -815,49 +693,13 @@ static int xfs_discard(BDRVRawState *s, int64_t offset, uint64_t bytes)
}
#endif
static ssize_t handle_aiocb_write_zeroes(RawPosixAIOData *aiocb)
{
int ret = -EOPNOTSUPP;
BDRVRawState *s = aiocb->bs->opaque;
if (s->has_write_zeroes == 0) {
return -ENOTSUP;
}
if (aiocb->aio_type & QEMU_AIO_BLKDEV) {
#ifdef BLKZEROOUT
do {
uint64_t range[2] = { aiocb->aio_offset, aiocb->aio_nbytes };
if (ioctl(aiocb->aio_fildes, BLKZEROOUT, range) == 0) {
return 0;
}
} while (errno == EINTR);
ret = -errno;
#endif
} else {
#ifdef CONFIG_XFS
if (s->is_xfs) {
return xfs_write_zeroes(s, aiocb->aio_offset, aiocb->aio_nbytes);
}
#endif
}
if (ret == -ENODEV || ret == -ENOSYS || ret == -EOPNOTSUPP ||
ret == -ENOTTY) {
s->has_write_zeroes = false;
ret = -ENOTSUP;
}
return ret;
}
static ssize_t handle_aiocb_discard(RawPosixAIOData *aiocb)
{
int ret = -EOPNOTSUPP;
BDRVRawState *s = aiocb->bs->opaque;
if (!s->has_discard) {
return -ENOTSUP;
if (s->has_discard == 0) {
return 0;
}
if (aiocb->aio_type & QEMU_AIO_BLKDEV) {
@@ -892,8 +734,8 @@ static ssize_t handle_aiocb_discard(RawPosixAIOData *aiocb)
if (ret == -ENODEV || ret == -ENOSYS || ret == -EOPNOTSUPP ||
ret == -ENOTTY) {
s->has_discard = false;
ret = -ENOTSUP;
s->has_discard = 0;
ret = 0;
}
return ret;
}
@@ -935,9 +777,6 @@ static int aio_worker(void *arg)
case QEMU_AIO_DISCARD:
ret = handle_aiocb_discard(aiocb);
break;
case QEMU_AIO_WRITE_ZEROES:
ret = handle_aiocb_write_zeroes(aiocb);
break;
default:
fprintf(stderr, "invalid aio request (0x%x)\n", aiocb->aio_type);
ret = -EINVAL;
@@ -948,29 +787,6 @@ static int aio_worker(void *arg)
return ret;
}
static int paio_submit_co(BlockDriverState *bs, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
int type)
{
RawPosixAIOData *acb = g_slice_new(RawPosixAIOData);
ThreadPool *pool;
acb->bs = bs;
acb->aio_type = type;
acb->aio_fildes = fd;
if (qiov) {
acb->aio_iov = qiov->iov;
acb->aio_niov = qiov->niov;
}
acb->aio_nbytes = nb_sectors * 512;
acb->aio_offset = sector_num * 512;
trace_paio_submit_co(sector_num, nb_sectors, type);
pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
return thread_pool_submit_co(pool, aio_worker, acb);
}
static BlockDriverAIOCB *paio_submit(BlockDriverState *bs, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int type)
@@ -1241,8 +1057,6 @@ static int raw_create(const char *filename, QEMUOptionParameter *options,
int result = 0;
int64_t total_size = 0;
strstart(filename, "file:", &filename);
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
@@ -1385,31 +1199,6 @@ static coroutine_fn BlockDriverAIOCB *raw_aio_discard(BlockDriverState *bs,
cb, opaque, QEMU_AIO_DISCARD);
}
static int coroutine_fn raw_co_write_zeroes(
BlockDriverState *bs, int64_t sector_num,
int nb_sectors, BdrvRequestFlags flags)
{
BDRVRawState *s = bs->opaque;
if (!(flags & BDRV_REQ_MAY_UNMAP)) {
return paio_submit_co(bs, s->fd, sector_num, NULL, nb_sectors,
QEMU_AIO_WRITE_ZEROES);
} else if (s->discard_zeroes) {
return paio_submit_co(bs, s->fd, sector_num, NULL, nb_sectors,
QEMU_AIO_DISCARD);
}
return -ENOTSUP;
}
static int raw_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
BDRVRawState *s = bs->opaque;
bdi->unallocated_blocks_are_zero = s->discard_zeroes;
bdi->can_write_zeroes_with_unmap = s->discard_zeroes;
return 0;
}
static QEMUOptionParameter raw_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -1425,7 +1214,6 @@ static BlockDriver bdrv_file = {
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_probe = NULL, /* no probe for protocols */
.bdrv_parse_filename = raw_parse_filename,
.bdrv_file_open = raw_open,
.bdrv_reopen_prepare = raw_reopen_prepare,
.bdrv_reopen_commit = raw_reopen_commit,
@@ -1434,17 +1222,14 @@ static BlockDriver bdrv_file = {
.bdrv_create = raw_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_get_block_status = raw_co_get_block_status,
.bdrv_co_write_zeroes = raw_co_write_zeroes,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_aio_discard = raw_aio_discard,
.bdrv_refresh_limits = raw_refresh_limits,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
.bdrv_get_info = raw_get_info,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1561,15 +1346,6 @@ static int check_hdev_writable(BDRVRawState *s)
return 0;
}
static void hdev_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The prefix is optional, just as for "file". */
strstart(filename, "host_device:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
@@ -1620,7 +1396,7 @@ static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
ret = raw_open_common(bs, options, flags, 0, &local_err);
if (ret < 0) {
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
@@ -1749,26 +1525,6 @@ static coroutine_fn BlockDriverAIOCB *hdev_aio_discard(BlockDriverState *bs,
cb, opaque, QEMU_AIO_DISCARD|QEMU_AIO_BLKDEV);
}
static coroutine_fn int hdev_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
BDRVRawState *s = bs->opaque;
int rc;
rc = fd_open(bs);
if (rc < 0) {
return rc;
}
if (!(flags & BDRV_REQ_MAY_UNMAP)) {
return paio_submit_co(bs, s->fd, sector_num, NULL, nb_sectors,
QEMU_AIO_WRITE_ZEROES|QEMU_AIO_BLKDEV);
} else if (s->discard_zeroes) {
return paio_submit_co(bs, s->fd, sector_num, NULL, nb_sectors,
QEMU_AIO_DISCARD|QEMU_AIO_BLKDEV);
}
return -ENOTSUP;
}
static int hdev_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
@@ -1776,18 +1532,6 @@ static int hdev_create(const char *filename, QEMUOptionParameter *options,
int ret = 0;
struct stat stat_buf;
int64_t total_size = 0;
bool has_prefix;
/* This function is used by all three protocol block drivers and therefore
* any of these three prefixes may be given.
* The return value has to be stored somewhere, otherwise this is an error
* due to -Werror=unused-value. */
has_prefix =
strstart(filename, "host_device:", &filename) ||
strstart(filename, "host_cdrom:" , &filename) ||
strstart(filename, "host_floppy:", &filename);
(void)has_prefix;
/* Read out options */
while (options && options->name) {
@@ -1826,7 +1570,6 @@ static BlockDriver bdrv_host_device = {
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_probe_device = hdev_probe_device,
.bdrv_parse_filename = hdev_parse_filename,
.bdrv_file_open = hdev_open,
.bdrv_close = raw_close,
.bdrv_reopen_prepare = raw_reopen_prepare,
@@ -1834,17 +1577,14 @@ static BlockDriver bdrv_host_device = {
.bdrv_reopen_abort = raw_reopen_abort,
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_co_write_zeroes = hdev_co_write_zeroes,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_aio_discard = hdev_aio_discard,
.bdrv_refresh_limits = raw_refresh_limits,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
.bdrv_get_info = raw_get_info,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1856,15 +1596,6 @@ static BlockDriver bdrv_host_device = {
};
#ifdef __linux__
static void floppy_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The prefix is optional, just as for "file". */
strstart(filename, "host_floppy:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static int floppy_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
@@ -1877,7 +1608,7 @@ static int floppy_open(BlockDriverState *bs, QDict *options, int flags,
/* open will not fail even if no floppy is inserted, so add O_NONBLOCK */
ret = raw_open_common(bs, options, flags, O_NONBLOCK, &local_err);
if (ret) {
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
@@ -1970,7 +1701,6 @@ static BlockDriver bdrv_host_floppy = {
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_probe_device = floppy_probe_device,
.bdrv_parse_filename = floppy_parse_filename,
.bdrv_file_open = floppy_open,
.bdrv_close = raw_close,
.bdrv_reopen_prepare = raw_reopen_prepare,
@@ -1982,7 +1712,6 @@ static BlockDriver bdrv_host_floppy = {
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_refresh_limits = raw_refresh_limits,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
@@ -1995,20 +1724,7 @@ static BlockDriver bdrv_host_floppy = {
.bdrv_media_changed = floppy_media_changed,
.bdrv_eject = floppy_eject,
};
#endif
#if defined(__linux__) || defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
static void cdrom_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The prefix is optional, just as for "file". */
strstart(filename, "host_cdrom:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
#endif
#ifdef __linux__
static int cdrom_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
@@ -2020,7 +1736,7 @@ static int cdrom_open(BlockDriverState *bs, QDict *options, int flags,
/* open will not fail even if no CD is inserted, so add O_NONBLOCK */
ret = raw_open_common(bs, options, flags, O_NONBLOCK, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
@@ -2095,7 +1811,6 @@ static BlockDriver bdrv_host_cdrom = {
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_probe_device = cdrom_probe_device,
.bdrv_parse_filename = cdrom_parse_filename,
.bdrv_file_open = cdrom_open,
.bdrv_close = raw_close,
.bdrv_reopen_prepare = raw_reopen_prepare,
@@ -2107,7 +1822,6 @@ static BlockDriver bdrv_host_cdrom = {
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_refresh_limits = raw_refresh_limits,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
@@ -2138,7 +1852,7 @@ static int cdrom_open(BlockDriverState *bs, QDict *options, int flags,
ret = raw_open_common(bs, options, flags, 0, &local_err);
if (ret) {
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
@@ -2226,7 +1940,6 @@ static BlockDriver bdrv_host_cdrom = {
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_probe_device = cdrom_probe_device,
.bdrv_parse_filename = cdrom_parse_filename,
.bdrv_file_open = cdrom_open,
.bdrv_close = raw_close,
.bdrv_reopen_prepare = raw_reopen_prepare,
@@ -2238,7 +1951,6 @@ static BlockDriver bdrv_host_cdrom = {
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_refresh_limits = raw_refresh_limits,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,

View File

@@ -202,35 +202,6 @@ static int set_sparse(int fd)
NULL, 0, NULL, 0, &returned, NULL);
}
static void raw_probe_alignment(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
DWORD sectorsPerCluster, freeClusters, totalClusters, count;
DISK_GEOMETRY_EX dg;
BOOL status;
if (s->type == FTYPE_CD) {
bs->request_alignment = 2048;
return;
}
if (s->type == FTYPE_HARDDISK) {
status = DeviceIoControl(s->hfile, IOCTL_DISK_GET_DRIVE_GEOMETRY_EX,
NULL, 0, &dg, sizeof(dg), &count, NULL);
if (status != 0) {
bs->request_alignment = dg.Geometry.BytesPerSector;
return;
}
/* try GetDiskFreeSpace too */
}
if (s->drive_path[0]) {
GetDiskFreeSpace(s->drive_path, &sectorsPerCluster,
&dg.Geometry.BytesPerSector,
&freeClusters, &totalClusters);
bs->request_alignment = dg.Geometry.BytesPerSector;
}
}
static void raw_parse_flags(int flags, int *access_flags, DWORD *overlapped)
{
assert(access_flags != NULL);
@@ -251,17 +222,6 @@ static void raw_parse_flags(int flags, int *access_flags, DWORD *overlapped)
}
}
static void raw_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The filename does not have to be prefixed by the protocol name, since
* "file" is the default protocol; therefore, the return value of this
* function call can be ignored. */
strstart(filename, "file:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static QemuOptsList raw_runtime_opts = {
.name = "raw",
.head = QTAILQ_HEAD_INITIALIZER(raw_runtime_opts.head),
@@ -288,9 +248,9 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
s->type = FTYPE_FILE;
opts = qemu_opts_create(&raw_runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&raw_runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
@@ -309,17 +269,6 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
}
}
if (filename[0] && filename[1] == ':') {
snprintf(s->drive_path, sizeof(s->drive_path), "%c:\\", filename[0]);
} else if (filename[0] == '\\' && filename[1] == '\\') {
s->drive_path[0] = 0;
} else {
/* Relative path. */
char buf[MAX_PATH];
GetCurrentDirectory(MAX_PATH, buf);
snprintf(s->drive_path, sizeof(s->drive_path), "%c:\\", buf[0]);
}
s->hfile = CreateFile(filename, access_flags,
FILE_SHARE_READ, NULL,
OPEN_EXISTING, overlapped, NULL);
@@ -344,7 +293,6 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
s->aio = aio;
}
raw_probe_alignment(bs);
ret = 0;
fail:
qemu_opts_del(opts);
@@ -481,8 +429,6 @@ static int raw_create(const char *filename, QEMUOptionParameter *options,
int fd;
int64_t total_size = 0;
strstart(filename, "file:", &filename);
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
@@ -517,7 +463,6 @@ static BlockDriver bdrv_file = {
.protocol_name = "file",
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_parse_filename = raw_parse_filename,
.bdrv_file_open = raw_open,
.bdrv_close = raw_close,
.bdrv_create = raw_create,
@@ -593,15 +538,6 @@ static int hdev_probe_device(const char *filename)
return 0;
}
static void hdev_parse_filename(const char *filename, QDict *options,
Error **errp)
{
/* The prefix is optional, just as for "file". */
strstart(filename, "host_device:", &filename);
qdict_put_obj(options, "filename", QOBJECT(qstring_from_str(filename)));
}
static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
@@ -614,10 +550,9 @@ static int hdev_open(BlockDriverState *bs, QDict *options, int flags,
Error *local_err = NULL;
const char *filename;
QemuOpts *opts = qemu_opts_create(&raw_runtime_opts, NULL, 0,
&error_abort);
QemuOpts *opts = qemu_opts_create_nofail(&raw_runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto done;
@@ -672,7 +607,6 @@ static BlockDriver bdrv_host_device = {
.protocol_name = "host_device",
.instance_size = sizeof(BDRVRawState),
.bdrv_needs_filename = true,
.bdrv_parse_filename = hdev_parse_filename,
.bdrv_probe_device = hdev_probe_device,
.bdrv_file_open = hdev_open,
.bdrv_close = raw_close,

View File

@@ -68,10 +68,9 @@ static int64_t coroutine_fn raw_co_get_block_status(BlockDriverState *bs,
}
static int coroutine_fn raw_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BdrvRequestFlags flags)
int64_t sector_num, int nb_sectors)
{
return bdrv_co_write_zeroes(bs->file, sector_num, nb_sectors, flags);
return bdrv_co_write_zeroes(bs->file, sector_num, nb_sectors);
}
static int coroutine_fn raw_co_discard(BlockDriverState *bs,
@@ -90,12 +89,6 @@ static int raw_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return bdrv_get_info(bs->file, bdi);
}
static int raw_refresh_limits(BlockDriverState *bs)
{
bs->bl = bs->file->bl;
return 0;
}
static int raw_truncate(BlockDriverState *bs, int64_t offset)
{
return bdrv_truncate(bs->file, offset);
@@ -146,7 +139,7 @@ static int raw_create(const char *filename, QEMUOptionParameter *options,
int ret;
ret = bdrv_create_file(filename, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
}
return ret;
@@ -187,7 +180,6 @@ static BlockDriver bdrv_raw = {
.bdrv_getlength = &raw_getlength,
.has_variable_length = true,
.bdrv_get_info = &raw_get_info,
.bdrv_refresh_limits = &raw_refresh_limits,
.bdrv_is_inserted = &raw_is_inserted,
.bdrv_media_changed = &raw_media_changed,
.bdrv_eject = &raw_eject,

View File

@@ -95,13 +95,18 @@ typedef struct RADOSCB {
#define RBD_FD_WRITE 1
typedef struct BDRVRBDState {
int fds[2];
rados_t cluster;
rados_ioctx_t io_ctx;
rbd_image_t image;
char name[RBD_MAX_IMAGE_NAME_SIZE];
char *snap;
int event_reader_pos;
RADOSCB *event_rcb;
} BDRVRBDState;
static void rbd_aio_bh_cb(void *opaque);
static int qemu_rbd_next_tok(char *dst, int dst_len,
char *src, char delim,
const char *name,
@@ -364,8 +369,9 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options,
}
/*
* This aio completion is being called from rbd_finish_bh() and runs in qemu
* BH context.
* This aio completion is being called from qemu_rbd_aio_event_reader()
* and runs in qemu context. It schedules a bh, but just in case the aio
* was not cancelled before.
*/
static void qemu_rbd_complete_aio(RADOSCB *rcb)
{
@@ -395,19 +401,36 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->ret = r;
}
}
/* Note that acb->bh can be NULL in case where the aio was cancelled */
acb->bh = qemu_bh_new(rbd_aio_bh_cb, acb);
qemu_bh_schedule(acb->bh);
g_free(rcb);
}
if (acb->cmd == RBD_AIO_READ) {
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
acb->status = 0;
/*
* aio fd read handler. It runs in the qemu context and calls the
* completion handling of completed rados aio operations.
*/
static void qemu_rbd_aio_event_reader(void *opaque)
{
BDRVRBDState *s = opaque;
if (!acb->cancelled) {
qemu_aio_release(acb);
}
ssize_t ret;
do {
char *p = (char *)&s->event_rcb;
/* now read the rcb pointer that was sent from a non qemu thread */
ret = read(s->fds[RBD_FD_READ], p + s->event_reader_pos,
sizeof(s->event_rcb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_rcb)) {
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
}
}
} while (ret < 0 && errno == EINTR);
}
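The handler above is one half of a self-pipe hand-off: librbd completion callbacks run in a foreign thread, so they write the RADOSCB pointer into a pipe and the QEMU main loop reads it back out here. A standalone sketch of the pattern (illustration only; none of these names come from the patch, and notify_fd is assumed to have been set up with pipe()):

    #include <errno.h>
    #include <unistd.h>

    struct job { int result; };

    static int notify_fd[2];                 /* [0] read end, [1] write end */

    /* worker thread: hand the pointer itself to the main loop */
    static int send_completion(struct job *j)
    {
        ssize_t n;
        do {
            n = write(notify_fd[1], &j, sizeof(j));
        } while (n < 0 && errno == EINTR);
        return n == (ssize_t)sizeof(j) ? 0 : -1;
    }

    /* main loop: called when notify_fd[0] becomes readable */
    static struct job *recv_completion(void)
    {
        struct job *j = NULL;
        ssize_t n;
        do {
            n = read(notify_fd[0], &j, sizeof(j));
        } while (n < 0 && errno == EINTR);
        return n == (ssize_t)sizeof(j) ? j : NULL;
    }

The real reader additionally copes with short reads by accumulating into event_reader_pos; the sketch assumes the pointer-sized transfer is atomic, which holds for pipe writes below PIPE_BUF.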
/* TODO Convert to fine grained options */
@@ -438,9 +461,9 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
const char *filename;
int r;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
qemu_opts_del(opts);
@@ -515,9 +538,23 @@ static int qemu_rbd_open(BlockDriverState *bs, QDict *options, int flags,
bs->read_only = (s->snap != NULL);
s->event_reader_pos = 0;
r = qemu_pipe(s->fds);
if (r < 0) {
error_report("error opening eventfd");
goto failed;
}
fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
NULL, s);
qemu_opts_del(opts);
return 0;
failed:
rbd_close(s->image);
failed_open:
rados_ioctx_destroy(s->io_ctx);
failed_shutdown:
@@ -532,6 +569,10 @@ static void qemu_rbd_close(BlockDriverState *bs)
{
BDRVRBDState *s = bs->opaque;
close(s->fds[0]);
close(s->fds[1]);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL);
rbd_close(s->image);
rados_ioctx_destroy(s->io_ctx);
g_free(s->snap);
@@ -559,11 +600,34 @@ static const AIOCBInfo rbd_aiocb_info = {
.cancel = qemu_rbd_aio_cancel,
};
static void rbd_finish_bh(void *opaque)
static int qemu_rbd_send_pipe(BDRVRBDState *s, RADOSCB *rcb)
{
RADOSCB *rcb = opaque;
qemu_bh_delete(rcb->acb->bh);
qemu_rbd_complete_aio(rcb);
int ret = 0;
while (1) {
fd_set wfd;
int fd = s->fds[RBD_FD_WRITE];
/* send the op pointer to the qemu thread that is responsible
for the aio/op completion. Must do it in a qemu thread context */
ret = write(fd, (void *)&rcb, sizeof(rcb));
if (ret >= 0) {
break;
}
if (errno == EINTR) {
continue;
}
if (errno != EAGAIN) {
break;
}
FD_ZERO(&wfd);
FD_SET(fd, &wfd);
do {
ret = select(fd + 1, NULL, &wfd, NULL, NULL);
} while (ret < 0 && errno == EINTR);
}
return ret;
}
/*
@@ -571,18 +635,40 @@ static void rbd_finish_bh(void *opaque)
*
* Note: this function is being called from a non qemu thread so
* we need to be careful about what we do here. Generally we only
* schedule a BH, and do the rest of the io completion handling
* from rbd_finish_bh() which runs in a qemu context.
* write to the block notification pipe, and do the rest of the
* io completion handling from qemu_rbd_aio_event_reader() which
* runs in a qemu context.
*/
static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
{
RBDAIOCB *acb = rcb->acb;
int ret;
rcb->ret = rbd_aio_get_return_value(c);
rbd_aio_release(c);
ret = qemu_rbd_send_pipe(rcb->s, rcb);
if (ret < 0) {
error_report("failed writing to acb->s->fds");
g_free(rcb);
}
}
acb->bh = qemu_bh_new(rbd_finish_bh, rcb);
qemu_bh_schedule(acb->bh);
/* Callback when all queued rbd_aio requests are complete */
static void rbd_aio_bh_cb(void *opaque)
{
RBDAIOCB *acb = opaque;
if (acb->cmd == RBD_AIO_READ) {
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
qemu_bh_delete(acb->bh);
acb->bh = NULL;
acb->status = 0;
if (!acb->cancelled) {
qemu_aio_release(acb);
}
}
static int rbd_aio_discard_wrapper(rbd_image_t image,

View File

@@ -91,14 +91,6 @@
#define SD_NR_VDIS (1U << 24)
#define SD_DATA_OBJ_SIZE (UINT64_C(1) << 22)
#define SD_MAX_VDI_SIZE (SD_DATA_OBJ_SIZE * MAX_DATA_OBJS)
/*
* For erasure coding, we use at most SD_EC_MAX_STRIP for data strips and
* (SD_EC_MAX_STRIP - 1) for parity strips
*
* SD_MAX_COPIES is sum of number of data strips and parity strips.
*/
#define SD_EC_MAX_STRIP 16
#define SD_MAX_COPIES (SD_EC_MAX_STRIP * 2 - 1)
#define SD_INODE_SIZE (sizeof(SheepdogInode))
#define CURRENT_VDI_ID 0
@@ -161,7 +153,7 @@ typedef struct SheepdogVdiReq {
uint32_t id;
uint32_t data_length;
uint64_t vdi_size;
uint32_t base_vdi_id;
uint32_t vdi_id;
uint8_t copies;
uint8_t copy_policy;
uint8_t reserved[2];
@@ -1383,9 +1375,9 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags,
s->bs = bs;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
@@ -1472,7 +1464,9 @@ out:
return ret;
}
static int do_sd_create(BDRVSheepdogState *s, uint32_t *vdi_id, int snapshot)
static int do_sd_create(BDRVSheepdogState *s, char *filename, int64_t vdi_size,
uint32_t base_vid, uint32_t *vdi_id, int snapshot,
uint8_t copy_policy)
{
SheepdogVdiReq hdr;
SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
@@ -1489,11 +1483,11 @@ static int do_sd_create(BDRVSheepdogState *s, uint32_t *vdi_id, int snapshot)
* does not fit in buf? For now, just truncate and avoid buffer overrun.
*/
memset(buf, 0, sizeof(buf));
pstrcpy(buf, sizeof(buf), s->name);
pstrcpy(buf, sizeof(buf), filename);
memset(&hdr, 0, sizeof(hdr));
hdr.opcode = SD_OP_NEW_VDI;
hdr.base_vdi_id = s->inode.vdi_id;
hdr.vdi_id = base_vid;
wlen = SD_MAX_VDI_LEN;
@@ -1501,9 +1495,8 @@ static int do_sd_create(BDRVSheepdogState *s, uint32_t *vdi_id, int snapshot)
hdr.snapid = snapshot;
hdr.data_length = wlen;
hdr.vdi_size = s->inode.vdi_size;
hdr.copy_policy = s->inode.copy_policy;
hdr.copies = s->inode.nr_copies;
hdr.vdi_size = vdi_size;
hdr.copy_policy = copy_policy;
ret = do_req(fd, (SheepdogReq *)&hdr, buf, &wlen, &rlen);
@@ -1514,7 +1507,7 @@ static int do_sd_create(BDRVSheepdogState *s, uint32_t *vdi_id, int snapshot)
}
if (rsp->result != SD_RES_SUCCESS) {
error_report("%s, %s", sd_strerror(rsp->result), s->inode.name);
error_report("%s, %s", sd_strerror(rsp->result), filename);
return -EIO;
}
@@ -1534,8 +1527,7 @@ static int sd_prealloc(const char *filename)
Error *local_err = NULL;
int ret;
ret = bdrv_open(&bs, filename, NULL, NULL, BDRV_O_RDWR | BDRV_O_PROTOCOL,
NULL, &local_err);
ret = bdrv_file_open(&bs, filename, NULL, BDRV_O_RDWR, &local_err);
if (ret < 0) {
qerror_report_err(local_err);
error_free(local_err);
@@ -1572,79 +1564,27 @@ out:
return ret;
}
/*
* Sheepdog support two kinds of redundancy, full replication and erasure
* coding.
*
* # create a fully replicated vdi with x copies
* -o redundancy=x (1 <= x <= SD_MAX_COPIES)
*
* # create an erasure coded vdi with x data strips and y parity strips
* -o redundancy=x:y (x must be one of {2,4,8,16} and 1 <= y < SD_EC_MAX_STRIP)
*/
static int parse_redundancy(BDRVSheepdogState *s, const char *opt)
{
struct SheepdogInode *inode = &s->inode;
const char *n1, *n2;
long copy, parity;
char p[10];
pstrcpy(p, sizeof(p), opt);
n1 = strtok(p, ":");
n2 = strtok(NULL, ":");
if (!n1) {
return -EINVAL;
}
copy = strtol(n1, NULL, 10);
if (copy > SD_MAX_COPIES || copy < 1) {
return -EINVAL;
}
if (!n2) {
inode->copy_policy = 0;
inode->nr_copies = copy;
return 0;
}
if (copy != 2 && copy != 4 && copy != 8 && copy != 16) {
return -EINVAL;
}
parity = strtol(n2, NULL, 10);
if (parity >= SD_EC_MAX_STRIP || parity < 1) {
return -EINVAL;
}
/*
* 4 bits for parity and 4 bits for data.
* We have to compress upper data bits because it can't represent 16
*/
inode->copy_policy = ((copy / 2) << 4) + parity;
inode->nr_copies = copy + parity;
return 0;
}
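A worked instance of the packing above (sketch, not part of the patch): -o redundancy=4:2 asks for four data strips and two parity strips, so the upper nibble of copy_policy stores 4 / 2 = 2 and the lower nibble stores 2.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        long copy = 4, parity = 2;                          /* redundancy=4:2 */
        uint8_t copy_policy = ((copy / 2) << 4) + parity;   /* 0x22 */
        uint8_t nr_copies   = copy + parity;                /* 6 */

        /* decoding reverses the shift: the upper nibble holds copy / 2 */
        long data_strips   = (copy_policy >> 4) * 2;        /* 4 */
        long parity_strips = copy_policy & 0x0f;            /* 2 */

        printf("policy=0x%02x copies=%u data=%ld parity=%ld\n",
               (unsigned)copy_policy, (unsigned)nr_copies,
               data_strips, parity_strips);
        return 0;
    }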
static int sd_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
int ret = 0;
uint32_t vid = 0;
uint32_t vid = 0, base_vid = 0;
int64_t vdi_size = 0;
char *backing_file = NULL;
BDRVSheepdogState *s;
char tag[SD_MAX_VDI_TAG_LEN];
char vdi[SD_MAX_VDI_LEN], tag[SD_MAX_VDI_TAG_LEN];
uint32_t snapid;
bool prealloc = false;
Error *local_err = NULL;
s = g_malloc0(sizeof(BDRVSheepdogState));
memset(vdi, 0, sizeof(vdi));
memset(tag, 0, sizeof(tag));
if (strstr(filename, "://")) {
ret = sd_parse_uri(s, filename, s->name, &snapid, tag);
ret = sd_parse_uri(s, filename, vdi, &snapid, tag);
} else {
ret = parse_vdiname(s, filename, s->name, &snapid, tag);
ret = parse_vdiname(s, filename, vdi, &snapid, tag);
}
if (ret < 0) {
goto out;
@@ -1652,7 +1592,7 @@ static int sd_create(const char *filename, QEMUOptionParameter *options,
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
s->inode.vdi_size = options->value.n;
vdi_size = options->value.n;
} else if (!strcmp(options->name, BLOCK_OPT_BACKING_FILE)) {
backing_file = options->value.s;
} else if (!strcmp(options->name, BLOCK_OPT_PREALLOC)) {
@@ -1666,18 +1606,11 @@ static int sd_create(const char *filename, QEMUOptionParameter *options,
ret = -EINVAL;
goto out;
}
} else if (!strcmp(options->name, BLOCK_OPT_REDUNDANCY)) {
if (options->value.s) {
ret = parse_redundancy(s, options->value.s);
if (ret < 0) {
goto out;
}
}
}
options++;
}
if (s->inode.vdi_size > SD_MAX_VDI_SIZE) {
if (vdi_size > SD_MAX_VDI_SIZE) {
error_report("too big image size");
ret = -EINVAL;
goto out;
@@ -1685,7 +1618,7 @@ static int sd_create(const char *filename, QEMUOptionParameter *options,
if (backing_file) {
BlockDriverState *bs;
BDRVSheepdogState *base;
BDRVSheepdogState *s;
BlockDriver *drv;
/* Currently, only Sheepdog backing image is supported. */
@@ -1696,28 +1629,28 @@ static int sd_create(const char *filename, QEMUOptionParameter *options,
goto out;
}
bs = NULL;
ret = bdrv_open(&bs, backing_file, NULL, NULL, BDRV_O_PROTOCOL, NULL,
&local_err);
ret = bdrv_file_open(&bs, backing_file, NULL, 0, &local_err);
if (ret < 0) {
qerror_report_err(local_err);
error_free(local_err);
goto out;
}
base = bs->opaque;
s = bs->opaque;
if (!is_snapshot(&base->inode)) {
if (!is_snapshot(&s->inode)) {
error_report("cannot clone from a non snapshot vdi");
bdrv_unref(bs);
ret = -EINVAL;
goto out;
}
s->inode.vdi_id = base->inode.vdi_id;
base_vid = s->inode.vdi_id;
bdrv_unref(bs);
}
ret = do_sd_create(s, &vid, 0);
/* TODO: allow users to specify copy number */
ret = do_sd_create(s, vdi, vdi_size, base_vid, &vid, 0, 0);
if (!prealloc || ret) {
goto out;
}
@@ -1746,7 +1679,7 @@ static void sd_close(BlockDriverState *bs)
memset(&hdr, 0, sizeof(hdr));
hdr.opcode = SD_OP_RELEASE_VDI;
hdr.base_vdi_id = s->inode.vdi_id;
hdr.vdi_id = s->inode.vdi_id;
wlen = strlen(s->name) + 1;
hdr.data_length = wlen;
hdr.flags = SD_FLAG_CMD_WRITE;
@@ -1849,7 +1782,7 @@ static bool sd_delete(BDRVSheepdogState *s)
unsigned int wlen = SD_MAX_VDI_LEN, rlen = 0;
SheepdogVdiReq hdr = {
.opcode = SD_OP_DEL_VDI,
.base_vdi_id = s->inode.vdi_id,
.vdi_id = s->inode.vdi_id,
.data_length = wlen,
.flags = SD_FLAG_CMD_WRITE,
};
@@ -1900,7 +1833,8 @@ static int sd_create_branch(BDRVSheepdogState *s)
* false bail out.
*/
deleted = sd_delete(s);
ret = do_sd_create(s, &vid, !deleted);
ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid,
!deleted, s->inode.copy_policy);
if (ret) {
goto out;
}
@@ -2051,14 +1985,13 @@ static coroutine_fn int sd_co_writev(BlockDriverState *bs, int64_t sector_num,
{
SheepdogAIOCB *acb;
int ret;
int64_t offset = (sector_num + nb_sectors) * BDRV_SECTOR_SIZE;
BDRVSheepdogState *s = bs->opaque;
if (bs->growable && offset > s->inode.vdi_size) {
ret = sd_truncate(bs, offset);
if (bs->growable && sector_num + nb_sectors > bs->total_sectors) {
ret = sd_truncate(bs, (sector_num + nb_sectors) * BDRV_SECTOR_SIZE);
if (ret < 0) {
return ret;
}
bs->total_sectors = sector_num + nb_sectors;
}
acb = sd_aio_setup(bs, qiov, sector_num, nb_sectors);
@@ -2164,7 +2097,8 @@ static int sd_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
goto cleanup;
}
ret = do_sd_create(s, &new_vid, 1);
ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &new_vid,
1, s->inode.copy_policy);
if (ret < 0) {
error_report("failed to create inode for snapshot. %s",
strerror(errno));
@@ -2445,12 +2379,11 @@ sd_co_get_block_status(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
{
BDRVSheepdogState *s = bs->opaque;
SheepdogInode *inode = &s->inode;
uint64_t offset = sector_num * BDRV_SECTOR_SIZE;
unsigned long start = offset / SD_DATA_OBJ_SIZE,
unsigned long start = sector_num * BDRV_SECTOR_SIZE / SD_DATA_OBJ_SIZE,
end = DIV_ROUND_UP((sector_num + nb_sectors) *
BDRV_SECTOR_SIZE, SD_DATA_OBJ_SIZE);
unsigned long idx;
int64_t ret = BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | offset;
int64_t ret = BDRV_BLOCK_DATA;
for (idx = start; idx < end; idx++) {
if (inode->data_vdi_id[idx] == 0) {
@@ -2474,22 +2407,6 @@ sd_co_get_block_status(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
return ret;
}
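For orientation, a worked instance of the index arithmetic above (the numbers are invented; SD_DATA_OBJ_SIZE is the 4 MiB constant defined earlier in the file):

    /* sector_num = 16384, nb_sectors = 8192  ->  bytes 8 MiB .. 12 MiB
     * start = 16384 * 512 / SD_DATA_OBJ_SIZE              = 2
     * end   = DIV_ROUND_UP((16384 + 8192) * 512, 4 MiB)   = 3
     * so the loop only inspects inode->data_vdi_id[2]. */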
static int64_t sd_get_allocated_file_size(BlockDriverState *bs)
{
BDRVSheepdogState *s = bs->opaque;
SheepdogInode *inode = &s->inode;
unsigned long i, last = DIV_ROUND_UP(inode->vdi_size, SD_DATA_OBJ_SIZE);
uint64_t size = 0;
for (i = 0; i < last; i++) {
if (inode->data_vdi_id[i] == 0) {
continue;
}
size += SD_DATA_OBJ_SIZE;
}
return size;
}
static QEMUOptionParameter sd_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -2506,11 +2423,6 @@ static QEMUOptionParameter sd_create_options[] = {
.type = OPT_STRING,
.help = "Preallocation mode (allowed values: off, full)"
},
{
.name = BLOCK_OPT_REDUNDANCY,
.type = OPT_STRING,
.help = "Redundancy of the image"
},
{ NULL }
};
@@ -2524,7 +2436,6 @@ static BlockDriver bdrv_sheepdog = {
.bdrv_create = sd_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_getlength = sd_getlength,
.bdrv_get_allocated_file_size = sd_get_allocated_file_size,
.bdrv_truncate = sd_truncate,
.bdrv_co_readv = sd_co_readv,
@@ -2554,7 +2465,6 @@ static BlockDriver bdrv_sheepdog_tcp = {
.bdrv_create = sd_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_getlength = sd_getlength,
.bdrv_get_allocated_file_size = sd_get_allocated_file_size,
.bdrv_truncate = sd_truncate,
.bdrv_co_readv = sd_co_readv,
@@ -2584,7 +2494,6 @@ static BlockDriver bdrv_sheepdog_unix = {
.bdrv_create = sd_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_getlength = sd_getlength,
.bdrv_get_allocated_file_size = sd_get_allocated_file_size,
.bdrv_truncate = sd_truncate,
.bdrv_co_readv = sd_co_readv,

View File

@@ -25,24 +25,6 @@
#include "block/snapshot.h"
#include "block/block_int.h"
QemuOptsList internal_snapshot_opts = {
.name = "snapshot",
.head = QTAILQ_HEAD_INITIALIZER(internal_snapshot_opts.head),
.desc = {
{
.name = SNAPSHOT_OPT_ID,
.type = QEMU_OPT_STRING,
.help = "snapshot id"
},{
.name = SNAPSHOT_OPT_NAME,
.type = QEMU_OPT_STRING,
.help = "snapshot name"
},{
/* end of list */
}
},
};
int bdrv_snapshot_find(BlockDriverState *bs, QEMUSnapshotInfo *sn_info,
const char *name)
{
@@ -212,7 +194,7 @@ int bdrv_snapshot_goto(BlockDriverState *bs,
* If only @snapshot_id is specified, delete the first one with id
* @snapshot_id.
* If only @name is specified, delete the first one with name @name.
* if none is specified, return -EINVAL.
* if none is specified, return -ENINVAL.
*
* Returns: 0 on success, -errno on failure. If @bs is not inserted, return
* -ENOMEDIUM. If @snapshot_id and @name are both NULL, return -EINVAL. If @bs
@@ -283,71 +265,18 @@ int bdrv_snapshot_list(BlockDriverState *bs,
return -ENOTSUP;
}
/**
* Temporarily load an internal snapshot by @snapshot_id and @name.
* @bs: block device used in the operation
* @snapshot_id: unique snapshot ID, or NULL
* @name: snapshot name, or NULL
* @errp: location to store error
*
* If both @snapshot_id and @name are specified, load the first one with
* id @snapshot_id and name @name.
* If only @snapshot_id is specified, load the first one with id
* @snapshot_id.
* If only @name is specified, load the first one with name @name.
* if none is specified, return -EINVAL.
*
* Returns: 0 on success, -errno on fail. If @bs is not inserted, return
* -ENOMEDIUM. If @bs is not readonly, return -EINVAL. If @bs did not support
* internal snapshot, return -ENOTSUP. If qemu can't find a matching @id and
* @name, return -ENOENT. If @errp != NULL, it will always be filled on
* failure.
*/
int bdrv_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
const char *snapshot_name)
{
BlockDriver *drv = bs->drv;
if (!drv) {
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, bdrv_get_device_name(bs));
return -ENOMEDIUM;
}
if (!snapshot_id && !name) {
error_setg(errp, "snapshot_id and name are both NULL");
return -EINVAL;
}
if (!bs->read_only) {
error_setg(errp, "Device is not readonly");
return -EINVAL;
}
if (drv->bdrv_snapshot_load_tmp) {
return drv->bdrv_snapshot_load_tmp(bs, snapshot_id, name, errp);
return drv->bdrv_snapshot_load_tmp(bs, snapshot_name);
}
error_set(errp, QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
drv->format_name, bdrv_get_device_name(bs),
"temporarily load internal snapshot");
return -ENOTSUP;
}
int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
const char *id_or_name,
Error **errp)
{
int ret;
Error *local_err = NULL;
ret = bdrv_snapshot_load_tmp(bs, id_or_name, NULL, &local_err);
if (ret == -ENOENT || ret == -EINVAL) {
error_free(local_err);
local_err = NULL;
ret = bdrv_snapshot_load_tmp(bs, NULL, id_or_name, &local_err);
}
if (local_err) {
error_propagate(errp, local_err);
}
return ret;
}

View File

@@ -75,8 +75,6 @@ static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
unused->backing_hd = NULL;
bdrv_unref(unused);
}
bdrv_refresh_limits(top);
}
static void coroutine_fn stream_run(void *opaque)
@@ -90,11 +88,6 @@ static void coroutine_fn stream_run(void *opaque)
int n = 0;
void *buf;
if (!bs->backing_hd) {
block_job_completed(&s->common, 0);
return;
}
s->common.len = bdrv_getlength(bs);
if (s->common.len < 0) {
block_job_completed(&s->common, s->common.len);

View File

@@ -331,7 +331,6 @@ static int vdi_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
logout("\n");
bdi->cluster_size = s->block_size;
bdi->vm_state_offset = 0;
bdi->unallocated_blocks_are_zero = true;
return 0;
}
@@ -395,50 +394,43 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
}
if (header.signature != VDI_SIGNATURE) {
error_setg(errp, "Image not in VDI format (bad signature %08x)", header.signature);
ret = -EINVAL;
logout("bad vdi signature %08x\n", header.signature);
ret = -EMEDIUMTYPE;
goto fail;
} else if (header.version != VDI_VERSION_1_1) {
error_setg(errp, "unsupported VDI image (version %u.%u)",
header.version >> 16, header.version & 0xffff);
logout("unsupported version %u.%u\n",
header.version >> 16, header.version & 0xffff);
ret = -ENOTSUP;
goto fail;
} else if (header.offset_bmap % SECTOR_SIZE != 0) {
/* We only support block maps which start on a sector boundary. */
error_setg(errp, "unsupported VDI image (unaligned block map offset "
"0x%x)", header.offset_bmap);
logout("unsupported block map offset 0x%x B\n", header.offset_bmap);
ret = -ENOTSUP;
goto fail;
} else if (header.offset_data % SECTOR_SIZE != 0) {
/* We only support data blocks which start on a sector boundary. */
error_setg(errp, "unsupported VDI image (unaligned data offset 0x%x)",
header.offset_data);
logout("unsupported data offset 0x%x B\n", header.offset_data);
ret = -ENOTSUP;
goto fail;
} else if (header.sector_size != SECTOR_SIZE) {
error_setg(errp, "unsupported VDI image (sector size %u is not %u)",
header.sector_size, SECTOR_SIZE);
logout("unsupported sector size %u B\n", header.sector_size);
ret = -ENOTSUP;
goto fail;
} else if (header.block_size != 1 * MiB) {
error_setg(errp, "unsupported VDI image (sector size %u is not %u)",
header.block_size, 1 * MiB);
logout("unsupported block size %u B\n", header.block_size);
ret = -ENOTSUP;
goto fail;
} else if (header.disk_size >
(uint64_t)header.blocks_in_image * header.block_size) {
error_setg(errp, "unsupported VDI image (disk size %" PRIu64 ", "
"image bitmap has room for %" PRIu64 ")",
header.disk_size,
(uint64_t)header.blocks_in_image * header.block_size);
logout("unsupported disk size %" PRIu64 " B\n", header.disk_size);
ret = -ENOTSUP;
goto fail;
} else if (!uuid_is_null(header.uuid_link)) {
error_setg(errp, "unsupported VDI image (non-NULL link UUID)");
logout("link uuid != 0, unsupported\n");
ret = -ENOTSUP;
goto fail;
} else if (!uuid_is_null(header.uuid_parent)) {
error_setg(errp, "unsupported VDI image (non-NULL parent UUID)");
logout("parent uuid != 0, unsupported\n");
ret = -ENOTSUP;
goto fail;
}

View File

@@ -706,8 +706,7 @@ exit:
*
* If read-only, we must replay the log in RAM (or refuse to open
* a dirty VHDX file read-only) */
int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
Error **errp)
int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed)
{
int ret = 0;
VHDXHeader *hdr;
@@ -762,16 +761,6 @@ int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
}
if (logs.valid) {
if (bs->read_only) {
ret = -EPERM;
error_setg_errno(errp, EPERM,
"VHDX image file '%s' opened read-only, but "
"contains a log that needs to be replayed. To "
"replay the log, execute:\n qemu-img check -r "
"all '%s'",
bs->filename, bs->filename);
goto exit;
}
/* now flush the log */
ret = vhdx_log_flush(bs, s, &logs);
if (ret < 0) {
@@ -965,8 +954,8 @@ static int vhdx_log_write(BlockDriverState *bs, BDRVVHDXState *s,
cpu_to_le32s((uint32_t *)(buffer + 4));
/* now write to the log */
ret = vhdx_log_write_sectors(bs, &s->log, &sectors_written, buffer,
desc_sectors + sectors);
vhdx_log_write_sectors(bs, &s->log, &sectors_written, buffer,
desc_sectors + sectors);
if (ret < 0) {
goto exit;
}

View File

@@ -374,7 +374,7 @@ static int vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
inactive_header->log_guid = *log_guid;
}
ret = vhdx_write_header(bs->file, inactive_header, header_offset, true);
vhdx_write_header(bs->file, inactive_header, header_offset, true);
if (ret < 0) {
goto exit;
}
@@ -402,10 +402,9 @@ int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s,
}
/* opens the specified header block from the VHDX file header section */
static void vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s,
Error **errp)
static int vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s)
{
int ret;
int ret = 0;
VHDXHeader *header1;
VHDXHeader *header2;
bool h1_valid = false;
@@ -463,6 +462,7 @@ static void vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s,
} else if (!h1_valid && h2_valid) {
s->curr_header = 1;
} else if (!h1_valid && !h2_valid) {
ret = -EINVAL;
goto fail;
} else {
/* If both headers are valid, then we choose the active one by the
@@ -473,22 +473,27 @@ static void vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s,
} else if (h2_seq > h1_seq) {
s->curr_header = 1;
} else {
ret = -EINVAL;
goto fail;
}
}
vhdx_region_register(s, s->headers[s->curr_header]->log_offset,
s->headers[s->curr_header]->log_length);
ret = 0;
goto exit;
fail:
error_setg_errno(errp, -ret, "No valid VHDX header found");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "No valid VHDX header found");
qemu_vfree(header1);
qemu_vfree(header2);
s->headers[0] = NULL;
s->headers[1] = NULL;
exit:
qemu_vfree(buffer);
return ret;
}
@@ -873,7 +878,8 @@ static int vhdx_open(BlockDriverState *bs, QDict *options, int flags,
int ret = 0;
uint32_t i;
uint64_t signature;
Error *local_err = NULL;
bool log_flushed = false;
s->bat = NULL;
s->first_visible_write = true;
@@ -896,14 +902,12 @@ static int vhdx_open(BlockDriverState *bs, QDict *options, int flags,
* header update */
vhdx_guid_generate(&s->session_guid);
vhdx_parse_header(bs, s, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
ret = -EINVAL;
ret = vhdx_parse_header(bs, s);
if (ret < 0) {
goto fail;
}
ret = vhdx_parse_log(bs, s, &s->log_replayed_on_open, errp);
ret = vhdx_parse_log(bs, s, &log_flushed);
if (ret < 0) {
goto fail;
}
@@ -1039,18 +1043,6 @@ static void vhdx_block_translate(BDRVVHDXState *s, int64_t sector_num,
}
static int vhdx_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
BDRVVHDXState *s = bs->opaque;
bdi->cluster_size = s->block_size;
bdi->unallocated_blocks_are_zero =
(s->params.data_bits & VHDX_PARAMS_HAS_PARENT) == 0;
return 0;
}
static coroutine_fn int vhdx_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
@@ -1794,9 +1786,7 @@ static int vhdx_create(const char *filename, QEMUOptionParameter *options,
goto exit;
}
bs = NULL;
ret = bdrv_open(&bs, filename, NULL, NULL, BDRV_O_RDWR | BDRV_O_PROTOCOL,
NULL, &local_err);
ret = bdrv_file_open(&bs, filename, NULL, BDRV_O_RDWR, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto exit;
@@ -1809,13 +1799,13 @@ static int vhdx_create(const char *filename, QEMUOptionParameter *options,
creator = g_utf8_to_utf16("QEMU v" QEMU_VERSION, -1, NULL,
&creator_items, NULL);
signature = cpu_to_le64(VHDX_FILE_SIGNATURE);
ret = bdrv_pwrite(bs, VHDX_FILE_ID_OFFSET, &signature, sizeof(signature));
bdrv_pwrite(bs, VHDX_FILE_ID_OFFSET, &signature, sizeof(signature));
if (ret < 0) {
goto delete_and_exit;
}
if (creator) {
ret = bdrv_pwrite(bs, VHDX_FILE_ID_OFFSET + sizeof(signature),
creator, creator_items * sizeof(gunichar2));
bdrv_pwrite(bs, VHDX_FILE_ID_OFFSET + sizeof(signature), creator,
creator_items * sizeof(gunichar2));
if (ret < 0) {
goto delete_and_exit;
}
@@ -1852,24 +1842,6 @@ exit:
return ret;
}
/* If opened r/w, the VHDX driver will automatically replay the log,
* if one is present, inside the vhdx_open() call.
*
* If qemu-img check -r all is called, the image is automatically opened
* r/w and any log has already been replayed, so there is nothing (currently)
* for us to do here
*/
static int vhdx_check(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix)
{
BDRVVHDXState *s = bs->opaque;
if (s->log_replayed_on_open) {
result->corruptions_fixed++;
}
return 0;
}
static QEMUOptionParameter vhdx_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -1913,8 +1885,6 @@ static BlockDriver bdrv_vhdx = {
.bdrv_co_readv = vhdx_co_readv,
.bdrv_co_writev = vhdx_co_writev,
.bdrv_create = vhdx_create,
.bdrv_get_info = vhdx_get_info,
.bdrv_check = vhdx_check,
.create_options = vhdx_create_options,
};

View File

@@ -61,7 +61,7 @@
/* These structures are ones that are defined in the VHDX specification
* document */
#define VHDX_FILE_SIGNATURE 0x656C696678646876ULL /* "vhdxfile" in ASCII */
#define VHDX_FILE_SIGNATURE 0x656C696678646876 /* "vhdxfile" in ASCII */
typedef struct VHDXFileIdentifier {
uint64_t signature; /* "vhdxfile" in ASCII */
uint16_t creator[256]; /* optional; utf-16 string to identify
@@ -238,7 +238,7 @@ typedef struct QEMU_PACKED VHDXLogDataSector {
/* upper 44 bits are the file offset in 1MB units lower 3 bits are the state
other bits are reserved */
#define VHDX_BAT_STATE_BIT_MASK 0x07
#define VHDX_BAT_FILE_OFF_MASK 0xFFFFFFFFFFF00000ULL /* upper 44 bits */
#define VHDX_BAT_FILE_OFF_MASK 0xFFFFFFFFFFF00000 /* upper 44 bits */
typedef uint64_t VHDXBatEntry;
/* ---- METADATA REGION STRUCTURES ---- */
@@ -247,7 +247,7 @@ typedef uint64_t VHDXBatEntry;
#define VHDX_METADATA_MAX_ENTRIES 2047 /* not including the header */
#define VHDX_METADATA_TABLE_MAX_SIZE \
(VHDX_METADATA_ENTRY_SIZE * (VHDX_METADATA_MAX_ENTRIES+1))
#define VHDX_METADATA_SIGNATURE 0x617461646174656DULL /* "metadata" in ASCII */
#define VHDX_METADATA_SIGNATURE 0x617461646174656D /* "metadata" in ASCII */
typedef struct QEMU_PACKED VHDXMetadataTableHeader {
uint64_t signature; /* "metadata" in ASCII */
uint16_t reserved;
@@ -394,8 +394,6 @@ typedef struct BDRVVHDXState {
Error *migration_blocker;
bool log_replayed_on_open;
QLIST_HEAD(VHDXRegionHead, VHDXRegionEntry) regions;
} BDRVVHDXState;
@@ -410,8 +408,7 @@ uint32_t vhdx_checksum_calc(uint32_t crc, uint8_t *buf, size_t size,
bool vhdx_checksum_is_valid(uint8_t *buf, size_t size, int crc_offset);
int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
Error **errp);
int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed);
int vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset);

View File

@@ -526,34 +526,8 @@ static int vmdk_open_vmfs_sparse(BlockDriverState *bs,
return ret;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
Error **errp);
static char *vmdk_read_desc(BlockDriverState *file, uint64_t desc_offset,
Error **errp)
{
int64_t size;
char *buf;
int ret;
size = bdrv_getlength(file);
if (size < 0) {
error_setg_errno(errp, -size, "Could not access file");
return NULL;
}
size = MIN(size, 1 << 20); /* avoid unbounded allocation */
buf = g_malloc0(size + 1);
ret = bdrv_pread(file, desc_offset, buf, size);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not read from file");
g_free(buf);
return NULL;
}
return buf;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
uint64_t desc_offset, Error **errp);
static int vmdk_open_vmdk4(BlockDriverState *bs,
BlockDriverState *file,
@@ -572,18 +546,11 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
error_setg_errno(errp, -ret,
"Could not read header from file '%s'",
file->filename);
return -EINVAL;
}
if (header.capacity == 0) {
uint64_t desc_offset = le64_to_cpu(header.desc_offset);
if (desc_offset) {
char *buf = vmdk_read_desc(file, desc_offset << 9, errp);
if (!buf) {
return -EINVAL;
}
ret = vmdk_open_desc_file(bs, flags, buf, errp);
g_free(buf);
return ret;
return vmdk_open_desc_file(bs, flags, desc_offset << 9, errp);
}
}
@@ -638,24 +605,17 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
header = footer.header;
}
if (le32_to_cpu(header.version) > 3) {
if (le32_to_cpu(header.version) >= 3) {
char buf[64];
snprintf(buf, sizeof(buf), "VMDK version %d",
le32_to_cpu(header.version));
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "vmdk", buf);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "vmdk", buf);
return -ENOTSUP;
} else if (le32_to_cpu(header.version) == 3 && (flags & BDRV_O_RDWR)) {
/* VMware KB 2064959 explains that version 3 added support for
* persistent changed block tracking (CBT), and backup software can
* read it as version=1 if it doesn't care about the changed area
* information. So we are safe to enable read only. */
error_setg(errp, "VMDK version 3 must be read only");
return -EINVAL;
}
if (le32_to_cpu(header.num_gtes_per_gt) > 512) {
error_setg(errp, "L2 table size too big");
error_report("L2 table size too big");
return -EINVAL;
}
@@ -669,13 +629,6 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
if (le32_to_cpu(header.flags) & VMDK4_FLAG_RGD) {
l1_backup_offset = le64_to_cpu(header.rgd_offset) << 9;
}
if (bdrv_getlength(file) <
le64_to_cpu(header.grain_offset) * BDRV_SECTOR_SIZE) {
error_setg(errp, "File truncated, expecting at least %lld bytes",
le64_to_cpu(header.grain_offset) * BDRV_SECTOR_SIZE);
return -EINVAL;
}
ret = vmdk_add_extent(bs, file, false,
le64_to_cpu(header.capacity),
le64_to_cpu(header.gd_offset) << 9,
@@ -690,10 +643,6 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
}
extent->compressed =
le16_to_cpu(header.compressAlgorithm) == VMDK4_COMPRESSION_DEFLATE;
if (extent->compressed) {
g_free(s->create_type);
s->create_type = g_strdup("streamOptimized");
}
extent->has_marker = le32_to_cpu(header.flags) & VMDK4_FLAG_MARKER;
extent->version = le32_to_cpu(header.version);
extent->has_zero_grain = le32_to_cpu(header.flags) & VMDK4_FLAG_ZERO_GRAIN;
@@ -734,12 +683,16 @@ static int vmdk_parse_description(const char *desc, const char *opt_name,
/* Open an extent file and append to bs array */
static int vmdk_open_sparse(BlockDriverState *bs,
BlockDriverState *file, int flags,
char *buf, Error **errp)
BlockDriverState *file,
int flags, Error **errp)
{
uint32_t magic;
magic = ldl_be_p(buf);
if (bdrv_pread(file, 0, &magic, sizeof(magic)) != sizeof(magic)) {
return -EIO;
}
magic = be32_to_cpu(magic);
switch (magic) {
case VMDK3_MAGIC:
return vmdk_open_vmfs_sparse(bs, file, flags, errp);
@@ -748,8 +701,7 @@ static int vmdk_open_sparse(BlockDriverState *bs,
return vmdk_open_vmdk4(bs, file, flags, errp);
break;
default:
error_setg(errp, "Image not in VMDK format");
return -EINVAL;
return -EMEDIUMTYPE;
break;
}
}
@@ -786,14 +738,9 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
return -EINVAL;
}
} else if (!strcmp(type, "VMFS")) {
if (ret == 4) {
flat_offset = 0;
} else {
error_setg(errp, "Invalid extent lines:\n%s", p);
return -EINVAL;
}
flat_offset = 0;
} else if (ret != 4) {
error_setg(errp, "Invalid extent lines:\n%s", p);
error_setg(errp, "Invalid extent lines: \n%s", p);
return -EINVAL;
}
@@ -806,9 +753,8 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
path_combine(extent_path, sizeof(extent_path),
desc_file_path, fname);
extent_file = NULL;
ret = bdrv_open(&extent_file, extent_path, NULL, NULL,
bs->open_flags | BDRV_O_PROTOCOL, NULL, errp);
ret = bdrv_file_open(&extent_file, extent_path, NULL, bs->open_flags,
errp);
if (ret) {
return ret;
}
@@ -825,14 +771,8 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
extent->flat_start_offset = flat_offset << 9;
} else if (!strcmp(type, "SPARSE") || !strcmp(type, "VMFSSPARSE")) {
/* SPARSE extent and VMFSSPARSE extent are both "COWD" sparse file*/
char *buf = vmdk_read_desc(extent_file, 0, errp);
if (!buf) {
ret = -EINVAL;
} else {
ret = vmdk_open_sparse(bs, extent_file, bs->open_flags, buf, errp);
}
ret = vmdk_open_sparse(bs, extent_file, bs->open_flags, errp);
if (ret) {
g_free(buf);
bdrv_unref(extent_file);
return ret;
}
@@ -855,16 +795,29 @@ next_line:
return 0;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
Error **errp)
static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
uint64_t desc_offset, Error **errp)
{
int ret;
char *buf = NULL;
char ct[128];
BDRVVmdkState *s = bs->opaque;
int64_t size;
size = bdrv_getlength(bs->file);
if (size < 0) {
return -EINVAL;
}
size = MIN(size, 1 << 20); /* avoid unbounded allocation */
buf = g_malloc0(size + 1);
ret = bdrv_pread(bs->file, desc_offset, buf, size);
if (ret < 0) {
goto exit;
}
if (vmdk_parse_description(buf, "createType", ct, sizeof(ct))) {
error_setg(errp, "invalid VMDK image descriptor");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto exit;
}
if (strcmp(ct, "monolithicFlat") &&
@@ -880,37 +833,24 @@ static int vmdk_open_desc_file(BlockDriverState *bs, int flags, char *buf,
s->desc_offset = 0;
ret = vmdk_parse_extents(buf, bs, bs->file->filename, errp);
exit:
g_free(buf);
return ret;
}
static int vmdk_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
char *buf = NULL;
int ret;
BDRVVmdkState *s = bs->opaque;
uint32_t magic;
buf = vmdk_read_desc(bs->file, 0, errp);
if (!buf) {
return -EINVAL;
if (vmdk_open_sparse(bs, bs->file, flags, errp) == 0) {
s->desc_offset = 0x200;
} else {
ret = vmdk_open_desc_file(bs, flags, 0, errp);
if (ret) {
goto fail;
}
}
magic = ldl_be_p(buf);
switch (magic) {
case VMDK3_MAGIC:
case VMDK4_MAGIC:
ret = vmdk_open_sparse(bs, bs->file, flags, buf, errp);
s->desc_offset = 0x200;
break;
default:
ret = vmdk_open_desc_file(bs, flags, buf, errp);
break;
}
if (ret) {
goto fail;
}
/* try to open parent images, if exist */
ret = vmdk_parent_open(bs);
if (ret) {
@@ -925,34 +865,16 @@ static int vmdk_open(BlockDriverState *bs, QDict *options, int flags,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vmdk", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
g_free(buf);
return 0;
fail:
g_free(buf);
g_free(s->create_type);
s->create_type = NULL;
vmdk_free_extents(bs);
return ret;
}
static int vmdk_refresh_limits(BlockDriverState *bs)
{
BDRVVmdkState *s = bs->opaque;
int i;
for (i = 0; i < s->num_extents; i++) {
if (!s->extents[i].flat) {
bs->bl.write_zeroes_alignment =
MAX(bs->bl.write_zeroes_alignment,
s->extents[i].cluster_sectors);
}
}
return 0;
}
static int get_whole_cluster(BlockDriverState *bs,
VmdkExtent *extent,
uint64_t cluster_offset,
@@ -1184,7 +1106,7 @@ static int64_t coroutine_fn vmdk_co_get_block_status(BlockDriverState *bs,
break;
case VMDK_OK:
ret = BDRV_BLOCK_DATA;
if (extent->file == bs->file && !extent->compressed) {
if (extent->file == bs->file) {
ret |= BDRV_BLOCK_OFFSET_VALID | offset;
}
@@ -1387,8 +1309,8 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
{
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent = NULL;
int ret;
int64_t index_in_cluster, n;
int n, ret;
int64_t index_in_cluster;
uint64_t extent_begin_sector, extent_relative_sector_num;
uint64_t cluster_offset;
VmdkMetaData m_data;
@@ -1497,8 +1419,7 @@ static coroutine_fn int vmdk_co_write(BlockDriverState *bs, int64_t sector_num,
static int coroutine_fn vmdk_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors,
BdrvRequestFlags flags)
int nb_sectors)
{
int ret;
BDRVVmdkState *s = bs->opaque;
@@ -1514,35 +1435,23 @@ static int coroutine_fn vmdk_co_write_zeroes(BlockDriverState *bs,
}
static int vmdk_create_extent(const char *filename, int64_t filesize,
bool flat, bool compress, bool zeroed_grain,
Error **errp)
bool flat, bool compress, bool zeroed_grain)
{
int ret, i;
BlockDriverState *bs = NULL;
int fd = 0;
VMDK4Header header;
Error *local_err;
uint32_t tmp, magic, grains, gd_sectors, gt_size, gt_count;
uint32_t *gd_buf = NULL;
int gd_buf_size;
uint32_t tmp, magic, grains, gd_size, gt_size, gt_count;
ret = bdrv_create_file(filename, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto exit;
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
if (fd < 0) {
return -errno;
}
assert(bs == NULL);
ret = bdrv_open(&bs, filename, NULL, NULL, BDRV_O_RDWR | BDRV_O_PROTOCOL,
NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto exit;
}
if (flat) {
ret = bdrv_truncate(bs, filesize);
ret = ftruncate(fd, filesize);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not truncate file");
ret = -errno;
}
goto exit;
}
@@ -1553,23 +1462,24 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
| (compress ? VMDK4_FLAG_COMPRESS | VMDK4_FLAG_MARKER : 0)
| (zeroed_grain ? VMDK4_FLAG_ZERO_GRAIN : 0);
header.compressAlgorithm = compress ? VMDK4_COMPRESSION_DEFLATE : 0;
header.capacity = filesize / BDRV_SECTOR_SIZE;
header.capacity = filesize / 512;
header.granularity = 128;
header.num_gtes_per_gt = BDRV_SECTOR_SIZE;
header.num_gtes_per_gt = 512;
grains = DIV_ROUND_UP(filesize / BDRV_SECTOR_SIZE, header.granularity);
gt_size = DIV_ROUND_UP(header.num_gtes_per_gt * sizeof(uint32_t),
BDRV_SECTOR_SIZE);
gt_count = DIV_ROUND_UP(grains, header.num_gtes_per_gt);
gd_sectors = DIV_ROUND_UP(gt_count * sizeof(uint32_t), BDRV_SECTOR_SIZE);
grains = (filesize / 512 + header.granularity - 1) / header.granularity;
gt_size = ((header.num_gtes_per_gt * sizeof(uint32_t)) + 511) >> 9;
gt_count =
(grains + header.num_gtes_per_gt - 1) / header.num_gtes_per_gt;
gd_size = (gt_count * sizeof(uint32_t) + 511) >> 9;
header.desc_offset = 1;
header.desc_size = 20;
header.rgd_offset = header.desc_offset + header.desc_size;
header.gd_offset = header.rgd_offset + gd_sectors + (gt_size * gt_count);
header.gd_offset = header.rgd_offset + gd_size + (gt_size * gt_count);
header.grain_offset =
ROUND_UP(header.gd_offset + gd_sectors + (gt_size * gt_count),
header.granularity);
((header.gd_offset + gd_size + (gt_size * gt_count) +
header.granularity - 1) / header.granularity) *
header.granularity;
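Both variants of the arithmetic produce the same on-disk layout; a worked example may help (sketch, assuming a 1 GiB sparse extent with the 128-sector granularity and 512-entry grain tables set just above):

    /* capacity     = 1 GiB / 512                  = 2097152 sectors
     * grains       = 2097152 / 128  (rounded up)  = 16384
     * gt_size      = 512 * 4 bytes / 512          = 4 sectors per grain table
     * gt_count     = 16384 / 512    (rounded up)  = 32 grain tables
     * gd sectors   = 32 * 4 bytes / 512 (up)      = 1
     * rgd_offset   = 1 + 20                       = 21
     * gd_offset    = 21 + 1 + 4 * 32              = 150
     * grain_offset = round_up(150 + 1 + 128, 128) = 384 sectors (192 KiB) */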
/* swap endianness for all header fields */
header.version = cpu_to_le32(header.version);
header.flags = cpu_to_le32(header.flags);
@@ -1589,55 +1499,48 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
header.check_bytes[3] = 0xa;
/* write all the data */
ret = bdrv_pwrite(bs, 0, &magic, sizeof(magic));
if (ret < 0) {
error_set(errp, QERR_IO_ERROR);
ret = qemu_write_full(fd, &magic, sizeof(magic));
if (ret != sizeof(magic)) {
ret = -errno;
goto exit;
}
ret = bdrv_pwrite(bs, sizeof(magic), &header, sizeof(header));
if (ret < 0) {
error_set(errp, QERR_IO_ERROR);
ret = qemu_write_full(fd, &header, sizeof(header));
if (ret != sizeof(header)) {
ret = -errno;
goto exit;
}
ret = bdrv_truncate(bs, le64_to_cpu(header.grain_offset) << 9);
ret = ftruncate(fd, le64_to_cpu(header.grain_offset) << 9);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not truncate file");
ret = -errno;
goto exit;
}
/* write grain directory */
gd_buf_size = gd_sectors * BDRV_SECTOR_SIZE;
gd_buf = g_malloc0(gd_buf_size);
for (i = 0, tmp = le64_to_cpu(header.rgd_offset) + gd_sectors;
lseek(fd, le64_to_cpu(header.rgd_offset) << 9, SEEK_SET);
for (i = 0, tmp = le64_to_cpu(header.rgd_offset) + gd_size;
i < gt_count; i++, tmp += gt_size) {
gd_buf[i] = cpu_to_le32(tmp);
}
ret = bdrv_pwrite(bs, le64_to_cpu(header.rgd_offset) * BDRV_SECTOR_SIZE,
gd_buf, gd_buf_size);
if (ret < 0) {
error_set(errp, QERR_IO_ERROR);
goto exit;
ret = qemu_write_full(fd, &tmp, sizeof(tmp));
if (ret != sizeof(tmp)) {
ret = -errno;
goto exit;
}
}
/* write backup grain directory */
for (i = 0, tmp = le64_to_cpu(header.gd_offset) + gd_sectors;
lseek(fd, le64_to_cpu(header.gd_offset) << 9, SEEK_SET);
for (i = 0, tmp = le64_to_cpu(header.gd_offset) + gd_size;
i < gt_count; i++, tmp += gt_size) {
gd_buf[i] = cpu_to_le32(tmp);
}
ret = bdrv_pwrite(bs, le64_to_cpu(header.gd_offset) * BDRV_SECTOR_SIZE,
gd_buf, gd_buf_size);
if (ret < 0) {
error_set(errp, QERR_IO_ERROR);
goto exit;
ret = qemu_write_full(fd, &tmp, sizeof(tmp));
if (ret != sizeof(tmp)) {
ret = -errno;
goto exit;
}
}
ret = 0;
exit:
if (bs) {
bdrv_unref(bs);
}
g_free(gd_buf);
exit:
qemu_close(fd);
return ret;
}
@@ -1684,10 +1587,8 @@ static int filename_decompose(const char *filename, char *path, char *prefix,
static int vmdk_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
{
int idx = 0;
BlockDriverState *new_bs = NULL;
Error *local_err;
char *desc = NULL;
int fd, idx = 0;
char desc[BUF_SIZE];
int64_t total_size = 0, filesize;
const char *adapter_type = NULL;
const char *backing_file = NULL;
@@ -1695,7 +1596,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
int flags = 0;
int ret = 0;
bool flat, split, compress;
GString *ext_desc_lines;
char ext_desc_lines[BUF_SIZE] = "";
char path[PATH_MAX], prefix[PATH_MAX], postfix[PATH_MAX];
const int64_t split_size = 0x80000000; /* VMDK has constant split size */
const char *desc_extent_line;
@@ -1703,7 +1604,6 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
uint32_t parent_cid = 0xffffffff;
uint32_t number_heads = 16;
bool zeroed_grain = false;
uint32_t desc_offset = 0, desc_len;
const char desc_template[] =
"# Disk DescriptorFile\n"
"version=1\n"
@@ -1724,11 +1624,8 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
"ddb.geometry.sectors = \"63\"\n"
"ddb.adapterType = \"%s\"\n";
ext_desc_lines = g_string_new(NULL);
if (filename_decompose(filename, path, prefix, postfix, PATH_MAX, errp)) {
ret = -EINVAL;
goto exit;
return -EINVAL;
}
/* Read out options */
while (options && options->name) {
@@ -1754,8 +1651,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
strcmp(adapter_type, "lsilogic") &&
strcmp(adapter_type, "legacyESX")) {
error_setg(errp, "Unknown adapter type: '%s'", adapter_type);
ret = -EINVAL;
goto exit;
return -EINVAL;
}
if (strcmp(adapter_type, "ide") != 0) {
/* that's the number of heads with which vmware operates when
@@ -1771,8 +1667,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
strcmp(fmt, "twoGbMaxExtentFlat") &&
strcmp(fmt, "streamOptimized")) {
error_setg(errp, "Unknown subformat: '%s'", fmt);
ret = -EINVAL;
goto exit;
return -EINVAL;
}
split = !(strcmp(fmt, "twoGbMaxExtentFlat") &&
strcmp(fmt, "twoGbMaxExtentSparse"));
@@ -1786,25 +1681,22 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
}
if (flat && backing_file) {
error_setg(errp, "Flat image can't have backing file");
ret = -ENOTSUP;
goto exit;
return -ENOTSUP;
}
if (flat && zeroed_grain) {
error_setg(errp, "Flat image can't enable zeroed grain");
ret = -ENOTSUP;
goto exit;
return -ENOTSUP;
}
if (backing_file) {
BlockDriverState *bs = NULL;
ret = bdrv_open(&bs, backing_file, NULL, NULL, BDRV_O_NO_BACKING, NULL,
errp);
BlockDriverState *bs = bdrv_new("");
ret = bdrv_open(bs, backing_file, NULL, 0, NULL, errp);
if (ret != 0) {
goto exit;
bdrv_unref(bs);
return ret;
}
if (strcmp(bs->drv->format_name, "vmdk")) {
bdrv_unref(bs);
ret = -EINVAL;
goto exit;
return -EINVAL;
}
parent_cid = vmdk_read_cid(bs, 0);
bdrv_unref(bs);
@@ -1837,66 +1729,51 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options,
path, desc_filename);
if (vmdk_create_extent(ext_filename, size,
flat, compress, zeroed_grain, errp)) {
ret = -EINVAL;
goto exit;
flat, compress, zeroed_grain)) {
return -EINVAL;
}
filesize -= size;
/* Format description line */
snprintf(desc_line, sizeof(desc_line),
desc_extent_line, size / BDRV_SECTOR_SIZE, desc_filename);
g_string_append(ext_desc_lines, desc_line);
desc_extent_line, size / 512, desc_filename);
pstrcat(ext_desc_lines, sizeof(ext_desc_lines), desc_line);
}
/* generate descriptor file */
desc = g_strdup_printf(desc_template,
(unsigned int)time(NULL),
parent_cid,
fmt,
parent_desc_line,
ext_desc_lines->str,
(flags & BLOCK_FLAG_COMPAT6 ? 6 : 4),
total_size /
(int64_t)(63 * number_heads * BDRV_SECTOR_SIZE),
number_heads,
adapter_type);
desc_len = strlen(desc);
/* the descriptor offset = 0x200 */
if (!split && !flat) {
desc_offset = 0x200;
snprintf(desc, sizeof(desc), desc_template,
(unsigned int)time(NULL),
parent_cid,
fmt,
parent_desc_line,
ext_desc_lines,
(flags & BLOCK_FLAG_COMPAT6 ? 6 : 4),
total_size / (int64_t)(63 * number_heads * 512), number_heads,
adapter_type);
if (split || flat) {
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
} else {
ret = bdrv_create_file(filename, options, &local_err);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not create image file");
goto exit;
}
fd = qemu_open(filename,
O_WRONLY | O_BINARY | O_LARGEFILE,
0644);
}
assert(new_bs == NULL);
ret = bdrv_open(&new_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not write description");
if (fd < 0) {
return -errno;
}
/* the descriptor offset = 0x200 */
if (!split && !flat && 0x200 != lseek(fd, 0x200, SEEK_SET)) {
ret = -errno;
goto exit;
}
ret = bdrv_pwrite(new_bs, desc_offset, desc, desc_len);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not write description");
ret = qemu_write_full(fd, desc, strlen(desc));
if (ret != strlen(desc)) {
ret = -errno;
goto exit;
}
/* bdrv_pwrite writes padding zeros to align to sector, we don't need that
* for description file */
if (desc_offset == 0) {
ret = bdrv_truncate(new_bs, desc_len);
if (ret < 0) {
error_setg_errno(errp, -ret, "Could not truncate file");
}
}
ret = 0;
exit:
if (new_bs) {
bdrv_unref(new_bs);
}
g_free(desc);
g_string_free(ext_desc_lines, true);
qemu_close(fd);
return ret;
}
@@ -1984,53 +1861,6 @@ static ImageInfo *vmdk_get_extent_info(VmdkExtent *extent)
return info;
}
static int vmdk_check(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix)
{
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent = NULL;
int64_t sector_num = 0;
int64_t total_sectors = bdrv_getlength(bs) / BDRV_SECTOR_SIZE;
int ret;
uint64_t cluster_offset;
if (fix) {
return -ENOTSUP;
}
for (;;) {
if (sector_num >= total_sectors) {
return 0;
}
extent = find_extent(s, sector_num, extent);
if (!extent) {
fprintf(stderr,
"ERROR: could not find extent for sector %" PRId64 "\n",
sector_num);
break;
}
ret = get_cluster_offset(bs, extent, NULL,
sector_num << BDRV_SECTOR_BITS,
0, &cluster_offset);
if (ret == VMDK_ERROR) {
fprintf(stderr,
"ERROR: could not get cluster_offset for sector %"
PRId64 "\n", sector_num);
break;
}
if (ret == VMDK_OK && cluster_offset >= bdrv_getlength(extent->file)) {
fprintf(stderr,
"ERROR: cluster offset for sector %"
PRId64 " points after EOF\n", sector_num);
break;
}
sector_num += extent->cluster_sectors;
}
result->corruptions++;
return 0;
}
static ImageInfoSpecific *vmdk_get_specific_info(BlockDriverState *bs)
{
int i;
@@ -2104,7 +1934,6 @@ static BlockDriver bdrv_vmdk = {
.instance_size = sizeof(BDRVVmdkState),
.bdrv_probe = vmdk_probe,
.bdrv_open = vmdk_open,
.bdrv_check = vmdk_check,
.bdrv_reopen_prepare = vmdk_reopen_prepare,
.bdrv_read = vmdk_co_read,
.bdrv_write = vmdk_co_write,
@@ -2116,7 +1945,6 @@ static BlockDriver bdrv_vmdk = {
.bdrv_get_allocated_file_size = vmdk_get_allocated_file_size,
.bdrv_has_zero_init = vmdk_has_zero_init,
.bdrv_get_specific_info = vmdk_get_specific_info,
.bdrv_refresh_limits = vmdk_refresh_limits,
.create_options = vmdk_create_options,
};
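
For context, a BlockDriver table such as bdrv_vmdk above only takes effect once it is registered with the block layer at startup. A minimal sketch of that hook, assuming the usual QEMU module-init machinery (the init function name here is illustrative, not part of this diff):

static void bdrv_vmdk_init(void)
{
    /* make the "vmdk" format known to the block layer */
    bdrv_register(&bdrv_vmdk);
}

block_init(bdrv_vmdk_init);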


@@ -190,8 +190,7 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
if (strncmp(footer->creator, "conectix", 8)) {
error_setg(errp, "invalid VPC image");
ret = -EINVAL;
ret = -EMEDIUMTYPE;
goto fail;
}
disk_type = VHD_FIXED;
@@ -456,19 +455,6 @@ fail:
return -1;
}
static int vpc_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
{
BDRVVPCState *s = (BDRVVPCState *)bs->opaque;
VHDFooter *footer = (VHDFooter *) s->footer_buf;
if (cpu_to_be32(footer->type) != VHD_FIXED) {
bdi->cluster_size = s->block_size;
}
bdi->unallocated_blocks_are_zero = true;
return 0;
}
static int vpc_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
@@ -871,8 +857,6 @@ static BlockDriver bdrv_vpc = {
.bdrv_read = vpc_co_read,
.bdrv_write = vpc_co_write,
.bdrv_get_info = vpc_get_info,
.create_options = vpc_create_options,
.bdrv_has_zero_init = vpc_has_zero_init,
};


@@ -266,7 +266,8 @@ typedef struct mbr_t {
} QEMU_PACKED mbr_t;
typedef struct direntry_t {
uint8_t name[8 + 3];
uint8_t name[8];
uint8_t extension[3];
uint8_t attributes;
uint8_t reserved[2];
uint16_t ctime;
@@ -517,9 +518,11 @@ static inline uint8_t fat_chksum(const direntry_t* entry)
uint8_t chksum=0;
int i;
for (i = 0; i < ARRAY_SIZE(entry->name); i++) {
chksum = (((chksum & 0xfe) >> 1) |
((chksum & 0x01) ? 0x80 : 0)) + entry->name[i];
for(i=0;i<11;i++) {
unsigned char c;
c = (i < 8) ? entry->name[i] : entry->extension[i-8];
chksum=(((chksum&0xfe)>>1)|((chksum&0x01)?0x80:0)) + c;
}
return chksum;
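
Side note on the fat_chksum() hunk above: both variants compute the same DOS 8.3 short-name checksum, a rotate-right-by-one followed by an add, over the 11 bytes of the padded name. A standalone sketch, independent of the direntry_t layout change (helper name invented for illustration):

#include <stdint.h>

/* checksum over an 11-byte "NAME    EXT" short name; long-name (VFAT)
 * entries carry this value to tie them to their short entry */
static uint8_t fat_shortname_chksum(const uint8_t name11[11])
{
    uint8_t sum = 0;
    int i;

    for (i = 0; i < 11; i++) {
        /* rotate right by one bit, then add the next name byte */
        sum = (uint8_t)(((sum & 0x01) ? 0x80 : 0) | (sum >> 1));
        sum = (uint8_t)(sum + name11[i]);
    }
    return sum;
}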
@@ -614,7 +617,7 @@ static inline direntry_t* create_short_and_long_name(BDRVVVFATState* s,
if(is_dot) {
entry=array_get_next(&(s->directory));
memset(entry->name, 0x20, sizeof(entry->name));
memset(entry->name,0x20,11);
memcpy(entry->name,filename,strlen(filename));
return entry;
}
@@ -629,14 +632,12 @@ static inline direntry_t* create_short_and_long_name(BDRVVVFATState* s,
i = 8;
entry=array_get_next(&(s->directory));
memset(entry->name, 0x20, sizeof(entry->name));
memset(entry->name,0x20,11);
memcpy(entry->name, filename, i);
if (j > 0) {
for (i = 0; i < 3 && filename[j + 1 + i]; i++) {
entry->name[8 + i] = filename[j + 1 + i];
}
}
if(j > 0)
for (i = 0; i < 3 && filename[j+1+i]; i++)
entry->extension[i] = filename[j+1+i];
/* upcase & remove unwanted characters */
for(i=10;i>=0;i--) {
@@ -860,7 +861,8 @@ static int init_directories(BDRVVVFATState* s,
{
direntry_t* entry=array_get_next(&(s->directory));
entry->attributes=0x28; /* archive | volume label */
memcpy(entry->name, "QEMU VVFAT ", sizeof(entry->name));
memcpy(entry->name,"QEMU VVF",8);
memcpy(entry->extension,"AT ",3);
}
/* Now build FAT, and write back information into directory */
@@ -1083,17 +1085,19 @@ DLOG(if (stderr == NULL) {
setbuf(stderr, NULL);
})
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
opts = qemu_opts_create_nofail(&runtime_opts);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
ret = -EINVAL;
goto fail;
}
dirname = qemu_opt_get(opts, "dir");
if (!dirname) {
error_setg(errp, "vvfat block driver requires a 'dir' option");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "vvfat block driver requires "
"a 'dir' option");
ret = -EINVAL;
goto fail;
}
@@ -1133,7 +1137,8 @@ DLOG(if (stderr == NULL) {
case 12:
break;
default:
error_setg(errp, "Valid FAT types are only 12, 16 and 32");
qerror_report(ERROR_CLASS_GENERIC_ERROR, "Valid FAT types are only "
"12, 16 and 32");
ret = -EINVAL;
goto fail;
}
@@ -1586,20 +1591,17 @@ static int parse_short_name(BDRVVVFATState* s,
lfn->name[i] = direntry->name[i];
}
for (j = 2; j >= 0 && direntry->name[8 + j] == ' '; j--) {
}
for (j = 2; j >= 0 && direntry->extension[j] == ' '; j--);
if (j >= 0) {
lfn->name[i++] = '.';
lfn->name[i + j + 1] = '\0';
for (;j >= 0; j--) {
uint8_t c = direntry->name[8 + j];
if (c <= ' ' || c > 0x7f) {
return -2;
} else if (s->downcase_short_names) {
lfn->name[i + j] = qemu_tolower(c);
} else {
lfn->name[i + j] = c;
}
if (direntry->extension[j] <= ' ' || direntry->extension[j] > 0x7f)
return -2;
else if (s->downcase_short_names)
lfn->name[i + j] = qemu_tolower(direntry->extension[j]);
else
lfn->name[i + j] = direntry->extension[j];
}
} else
lfn->name[i + j + 1] = '\0';
@@ -2933,13 +2935,15 @@ static int enable_write_target(BDRVVVFATState *s)
goto err;
}
s->qcow = NULL;
ret = bdrv_open(&s->qcow, s->qcow_filename, NULL, NULL,
s->qcow = bdrv_new("");
ret = bdrv_open(s->qcow, s->qcow_filename, NULL,
BDRV_O_RDWR | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH, bdrv_qcow,
&local_err);
if (ret < 0) {
qerror_report_err(local_err);
error_free(local_err);
bdrv_unref(s->qcow);
goto err;
}


@@ -307,10 +307,12 @@ static bool check_throttle_config(ThrottleConfig *cfg, Error **errp)
typedef enum { MEDIA_DISK, MEDIA_CDROM } DriveMediaType;
/* Takes the ownership of bs_opts */
static DriveInfo *blockdev_init(const char *file, QDict *bs_opts,
static DriveInfo *blockdev_init(QDict *bs_opts,
BlockInterfaceType type,
Error **errp)
{
const char *buf;
const char *file = NULL;
const char *serial;
int ro = 0;
int bdrv_flags = 0;
@@ -330,13 +332,13 @@ static DriveInfo *blockdev_init(const char *file, QDict *bs_opts,
* stay in bs_opts for processing by bdrv_open(). */
id = qdict_get_try_str(bs_opts, "id");
opts = qemu_opts_create(&qemu_common_drive_opts, id, 1, &error);
if (error) {
if (error_is_set(&error)) {
error_propagate(errp, error);
return NULL;
}
qemu_opts_absorb_qdict(opts, bs_opts, &error);
if (error) {
if (error_is_set(&error)) {
error_propagate(errp, error);
goto early_err;
}
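
Many hunks in this file only toggle between two spellings of the same test: if (error) versus if (error_is_set(&error)). Both ask whether the callee reported an error before handing it up the stack. A minimal sketch of the propagation idiom, with an invented helper name:

static void frobnicate(Error **errp)
{
    Error *local_err = NULL;

    frobnicate_inner(&local_err);           /* hypothetical callee that may fail */
    if (local_err) {                        /* equivalent to error_is_set(&local_err) */
        error_propagate(errp, local_err);   /* ownership passes to the caller */
        return;
    }

    /* ... continue on success ... */
}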
@@ -352,6 +354,7 @@ static DriveInfo *blockdev_init(const char *file, QDict *bs_opts,
ro = qemu_opt_get_bool(opts, "read-only", 0);
copy_on_read = qemu_opt_get_bool(opts, "copy-on-read", false);
file = qemu_opt_get(opts, "file");
serial = qemu_opt_get(opts, "serial");
if ((buf = qemu_opt_get(opts, "discard")) != NULL) {
@@ -436,8 +439,13 @@ static DriveInfo *blockdev_init(const char *file, QDict *bs_opts,
on_write_error = BLOCKDEV_ON_ERROR_ENOSPC;
if ((buf = qemu_opt_get(opts, "werror")) != NULL) {
if (type != IF_IDE && type != IF_SCSI && type != IF_VIRTIO && type != IF_NONE) {
error_setg(errp, "werror is not supported by this bus type");
goto early_err;
}
on_write_error = parse_block_error_action(buf, 0, &error);
if (error) {
if (error_is_set(&error)) {
error_propagate(errp, error);
goto early_err;
}
@@ -445,25 +453,25 @@ static DriveInfo *blockdev_init(const char *file, QDict *bs_opts,
on_read_error = BLOCKDEV_ON_ERROR_REPORT;
if ((buf = qemu_opt_get(opts, "rerror")) != NULL) {
if (type != IF_IDE && type != IF_VIRTIO && type != IF_SCSI && type != IF_NONE) {
error_report("rerror is not supported by this bus type");
goto early_err;
}
on_read_error = parse_block_error_action(buf, 1, &error);
if (error) {
if (error_is_set(&error)) {
error_propagate(errp, error);
goto early_err;
}
}
if (bdrv_find_node(qemu_opts_id(opts))) {
error_setg(errp, "device id=%s is conflicting with a node-name",
qemu_opts_id(opts));
goto early_err;
}
/* init */
dinfo = g_malloc0(sizeof(*dinfo));
dinfo->id = g_strdup(qemu_opts_id(opts));
dinfo->bdrv = bdrv_new(dinfo->id);
dinfo->bdrv->open_flags = snapshot ? BDRV_O_SNAPSHOT : 0;
dinfo->bdrv->read_only = ro;
dinfo->type = type;
dinfo->refcount = 1;
if (serial != NULL) {
dinfo->serial = g_strdup(serial);
@@ -504,7 +512,7 @@ static DriveInfo *blockdev_init(const char *file, QDict *bs_opts,
bdrv_flags |= ro ? 0 : BDRV_O_RDWR;
QINCREF(bs_opts);
ret = bdrv_open(&dinfo->bdrv, file, NULL, bs_opts, bdrv_flags, drv, &error);
ret = bdrv_open(dinfo->bdrv, file, bs_opts, bdrv_flags, drv, &error);
if (ret < 0) {
error_setg(errp, "could not open disk image %s: %s",
@@ -591,10 +599,6 @@ QemuOptsList qemu_legacy_drive_opts = {
.name = "addr",
.type = QEMU_OPT_STRING,
.help = "pci address (virtio only)",
},{
.name = "file",
.type = QEMU_OPT_STRING,
.help = "file name",
},
/* Options that are passed on, but have special semantics with -drive */
@@ -602,14 +606,6 @@ QemuOptsList qemu_legacy_drive_opts = {
.name = "read-only",
.type = QEMU_OPT_BOOL,
.help = "open drive file as read-only",
},{
.name = "rerror",
.type = QEMU_OPT_STRING,
.help = "read error action",
},{
.name = "werror",
.type = QEMU_OPT_STRING,
.help = "write error action",
},{
.name = "copy-on-read",
.type = QEMU_OPT_BOOL,
@@ -631,10 +627,8 @@ DriveInfo *drive_init(QemuOpts *all_opts, BlockInterfaceType block_default_type)
int cyls, heads, secs, translation;
int max_devs, bus_id, unit_id, index;
const char *devaddr;
const char *werror, *rerror;
bool read_only = false;
bool copy_on_read;
const char *filename;
Error *local_err = NULL;
/* Change legacy command line options into QMP ones */
@@ -688,10 +682,9 @@ DriveInfo *drive_init(QemuOpts *all_opts, BlockInterfaceType block_default_type)
bs_opts = qdict_new();
qemu_opts_to_qdict(all_opts, bs_opts);
legacy_opts = qemu_opts_create(&qemu_legacy_drive_opts, NULL, 0,
&error_abort);
legacy_opts = qemu_opts_create_nofail(&qemu_legacy_drive_opts);
qemu_opts_absorb_qdict(legacy_opts, bs_opts, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
goto fail;
@@ -779,10 +772,6 @@ DriveInfo *drive_init(QemuOpts *all_opts, BlockInterfaceType block_default_type)
translation = BIOS_ATA_TRANSLATION_NONE;
} else if (!strcmp(value, "lba")) {
translation = BIOS_ATA_TRANSLATION_LBA;
} else if (!strcmp(value, "large")) {
translation = BIOS_ATA_TRANSLATION_LARGE;
} else if (!strcmp(value, "rechs")) {
translation = BIOS_ATA_TRANSLATION_RECHS;
} else if (!strcmp(value, "auto")) {
translation = BIOS_ATA_TRANSLATION_AUTO;
} else {
@@ -864,8 +853,7 @@ DriveInfo *drive_init(QemuOpts *all_opts, BlockInterfaceType block_default_type)
if (type == IF_VIRTIO) {
QemuOpts *devopts;
devopts = qemu_opts_create(qemu_find_opts("device"), NULL, 0,
&error_abort);
devopts = qemu_opts_create_nofail(qemu_find_opts("device"));
if (arch_type == QEMU_ARCH_S390X) {
qemu_opt_set(devopts, "driver", "virtio-blk-s390");
} else {
@@ -877,39 +865,16 @@ DriveInfo *drive_init(QemuOpts *all_opts, BlockInterfaceType block_default_type)
}
}
filename = qemu_opt_get(legacy_opts, "file");
/* Check werror/rerror compatibility with if=... */
werror = qemu_opt_get(legacy_opts, "werror");
if (werror != NULL) {
if (type != IF_IDE && type != IF_SCSI && type != IF_VIRTIO &&
type != IF_NONE) {
error_report("werror is not supported by this bus type");
goto fail;
}
qdict_put(bs_opts, "werror", qstring_from_str(werror));
}
rerror = qemu_opt_get(legacy_opts, "rerror");
if (rerror != NULL) {
if (type != IF_IDE && type != IF_VIRTIO && type != IF_SCSI &&
type != IF_NONE) {
error_report("rerror is not supported by this bus type");
goto fail;
}
qdict_put(bs_opts, "rerror", qstring_from_str(rerror));
}
/* Actual block device init: Functionality shared with blockdev-add */
dinfo = blockdev_init(filename, bs_opts, &local_err);
dinfo = blockdev_init(bs_opts, type, &local_err);
if (dinfo == NULL) {
if (local_err) {
if (error_is_set(&local_err)) {
qerror_report_err(local_err);
error_free(local_err);
}
goto fail;
} else {
assert(!local_err);
assert(!error_is_set(&local_err));
}
/* Set legacy DriveInfo fields */
@@ -921,7 +886,6 @@ DriveInfo *drive_init(QemuOpts *all_opts, BlockInterfaceType block_default_type)
dinfo->secs = secs;
dinfo->trans = translation;
dinfo->type = type;
dinfo->bus = bus_id;
dinfo->unit = unit_id;
dinfo->devaddr = devaddr;
@@ -976,22 +940,14 @@ static void blockdev_do_action(int kind, void *data, Error **errp)
qmp_transaction(&list, errp);
}
void qmp_blockdev_snapshot_sync(bool has_device, const char *device,
bool has_node_name, const char *node_name,
const char *snapshot_file,
bool has_snapshot_node_name,
const char *snapshot_node_name,
void qmp_blockdev_snapshot_sync(const char *device, const char *snapshot_file,
bool has_format, const char *format,
bool has_mode, NewImageMode mode, Error **errp)
bool has_mode, enum NewImageMode mode,
Error **errp)
{
BlockdevSnapshot snapshot = {
.has_device = has_device,
.device = (char *) device,
.has_node_name = has_node_name,
.node_name = (char *) node_name,
.snapshot_file = (char *) snapshot_file,
.has_snapshot_node_name = has_snapshot_node_name,
.snapshot_node_name = (char *) snapshot_node_name,
.has_format = has_format,
.format = (char *) format,
.has_mode = has_mode,
@@ -1046,7 +1002,7 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal_sync(const char *device,
}
ret = bdrv_snapshot_find_by_id_and_name(bs, id, name, &sn, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return NULL;
}
@@ -1059,7 +1015,7 @@ SnapshotInfo *qmp_blockdev_snapshot_delete_internal_sync(const char *device,
}
bdrv_snapshot_delete(bs, id, name, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return NULL;
}
@@ -1229,14 +1185,8 @@ static void external_snapshot_prepare(BlkTransactionState *common,
{
BlockDriver *drv;
int flags, ret;
QDict *options = NULL;
Error *local_err = NULL;
bool has_device = false;
const char *device;
bool has_node_name = false;
const char *node_name;
bool has_snapshot_node_name = false;
const char *snapshot_node_name;
const char *new_image_file;
const char *format = "qcow2";
enum NewImageMode mode = NEW_IMAGE_MODE_ABSOLUTE_PATHS;
@@ -1247,14 +1197,7 @@ static void external_snapshot_prepare(BlkTransactionState *common,
/* get parameters */
g_assert(action->kind == TRANSACTION_ACTION_KIND_BLOCKDEV_SNAPSHOT_SYNC);
has_device = action->blockdev_snapshot_sync->has_device;
device = action->blockdev_snapshot_sync->device;
has_node_name = action->blockdev_snapshot_sync->has_node_name;
node_name = action->blockdev_snapshot_sync->node_name;
has_snapshot_node_name =
action->blockdev_snapshot_sync->has_snapshot_node_name;
snapshot_node_name = action->blockdev_snapshot_sync->snapshot_node_name;
new_image_file = action->blockdev_snapshot_sync->snapshot_file;
if (action->blockdev_snapshot_sync->has_format) {
format = action->blockdev_snapshot_sync->format;
@@ -1270,21 +1213,9 @@ static void external_snapshot_prepare(BlkTransactionState *common,
return;
}
state->old_bs = bdrv_lookup_bs(has_device ? device : NULL,
has_node_name ? node_name : NULL,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
if (has_node_name && !has_snapshot_node_name) {
error_setg(errp, "New snapshot node name missing");
return;
}
if (has_snapshot_node_name && bdrv_find_node(snapshot_node_name)) {
error_setg(errp, "New snapshot node name already existing");
state->old_bs = bdrv_find(device);
if (!state->old_bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
@@ -1305,7 +1236,7 @@ static void external_snapshot_prepare(BlkTransactionState *common,
}
}
if (!bdrv_is_first_non_filter(state->old_bs)) {
if (bdrv_check_ext_snapshot(state->old_bs) != EXT_SNAPSHOT_ALLOWED) {
error_set(errp, QERR_FEATURE_DISABLED, "snapshot");
return;
}
@@ -1318,24 +1249,18 @@ static void external_snapshot_prepare(BlkTransactionState *common,
state->old_bs->filename,
state->old_bs->drv->format_name,
NULL, -1, flags, &local_err, false);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return;
}
}
if (has_snapshot_node_name) {
options = qdict_new();
qdict_put(options, "node-name",
qstring_from_str(snapshot_node_name));
}
/* We will manually add the backing_hd field to the bs later */
state->new_bs = bdrv_new("");
/* TODO Inherit bs->options or only take explicit options with an
* extended QMP command? */
assert(state->new_bs == NULL);
ret = bdrv_open(&state->new_bs, new_image_file, NULL, options,
ret = bdrv_open(state->new_bs, new_image_file, NULL,
flags | BDRV_O_NO_BACKING, drv, &local_err);
/* We will manually add the backing_hd field to the bs later */
if (ret != 0) {
error_propagate(errp, local_err);
}
@@ -1387,7 +1312,7 @@ static void drive_backup_prepare(BlkTransactionState *common, Error **errp)
backup->has_on_source_error, backup->on_source_error,
backup->has_on_target_error, backup->on_target_error,
&local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
state->bs = NULL;
state->job = NULL;
@@ -1479,7 +1404,7 @@ void qmp_transaction(TransactionActionList *dev_list, Error **errp)
QSIMPLEQ_INSERT_TAIL(&snap_bdrv_states, state, entry);
state->ops->prepare(state, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto delete_and_fail;
}
@@ -1549,19 +1474,14 @@ void qmp_eject(const char *device, bool has_force, bool force, Error **errp)
eject_device(bs, force, errp);
}
void qmp_block_passwd(bool has_device, const char *device,
bool has_node_name, const char *node_name,
const char *password, Error **errp)
void qmp_block_passwd(const char *device, const char *password, Error **errp)
{
Error *local_err = NULL;
BlockDriverState *bs;
int err;
bs = bdrv_lookup_bs(has_device ? device : NULL,
has_node_name ? node_name : NULL,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
@@ -1582,7 +1502,7 @@ static void qmp_bdrv_open_encrypted(BlockDriverState *bs, const char *filename,
Error *local_err = NULL;
int ret;
ret = bdrv_open(&bs, filename, NULL, NULL, bdrv_flags, drv, &local_err);
ret = bdrv_open(bs, filename, NULL, bdrv_flags, drv, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return;
@@ -1603,7 +1523,7 @@ static void qmp_bdrv_open_encrypted(BlockDriverState *bs, const char *filename,
}
void qmp_change_blockdev(const char *device, const char *filename,
const char *format, Error **errp)
bool has_format, const char *format, Error **errp)
{
BlockDriverState *bs;
BlockDriver *drv = NULL;
@@ -1625,7 +1545,7 @@ void qmp_change_blockdev(const char *device, const char *filename,
}
eject_device(bs, 0, &err);
if (err) {
if (error_is_set(&err)) {
error_propagate(errp, err);
return;
}
@@ -1751,24 +1671,14 @@ int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data)
return 0;
}
void qmp_block_resize(bool has_device, const char *device,
bool has_node_name, const char *node_name,
int64_t size, Error **errp)
void qmp_block_resize(const char *device, int64_t size, Error **errp)
{
Error *local_err = NULL;
BlockDriverState *bs;
int ret;
bs = bdrv_lookup_bs(has_device ? device : NULL,
has_node_name ? node_name : NULL,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
if (!bdrv_is_first_non_filter(bs)) {
error_set(errp, QERR_FEATURE_DISABLED, "resize");
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
@@ -1855,7 +1765,7 @@ void qmp_block_stream(const char *device, bool has_base,
stream_start(bs, base_bs, base, has_speed ? speed : 0,
on_error, block_job_cb, bs, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return;
}
@@ -1910,13 +1820,8 @@ void qmp_block_commit(const char *device,
return;
}
if (top_bs == bs) {
commit_active_start(bs, base_bs, speed, on_error, block_job_cb,
bs, &local_err);
} else {
commit_start(bs, base_bs, top_bs, speed, on_error, block_job_cb, bs,
&local_err);
}
commit_start(bs, base_bs, top_bs, speed, on_error, block_job_cb, bs,
&local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
return;
@@ -2013,14 +1918,15 @@ void qmp_drive_backup(const char *device, const char *target,
}
}
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return;
}
target_bs = NULL;
ret = bdrv_open(&target_bs, target, NULL, NULL, flags, drv, &local_err);
target_bs = bdrv_new("");
ret = bdrv_open(target_bs, target, NULL, flags, drv, &local_err);
if (ret < 0) {
bdrv_unref(target_bs);
error_propagate(errp, local_err);
return;
}
@@ -2034,11 +1940,6 @@ void qmp_drive_backup(const char *device, const char *target,
}
}
BlockDeviceInfoList *qmp_query_named_block_nodes(Error **errp)
{
return bdrv_named_nodes_list();
}
#define DEFAULT_MIRROR_BUF_SIZE (10 << 20)
void qmp_drive_mirror(const char *device, const char *target,
@@ -2153,7 +2054,7 @@ void qmp_drive_mirror(const char *device, const char *target,
}
}
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return;
}
@@ -2161,10 +2062,11 @@ void qmp_drive_mirror(const char *device, const char *target,
/* Mirroring takes care of copy-on-write using the source's backing
* file.
*/
target_bs = NULL;
ret = bdrv_open(&target_bs, target, NULL, NULL, flags | BDRV_O_NO_BACKING,
drv, &local_err);
target_bs = bdrv_new("");
ret = bdrv_open(target_bs, target, NULL, flags | BDRV_O_NO_BACKING, drv,
&local_err);
if (ret < 0) {
bdrv_unref(target_bs);
error_propagate(errp, local_err);
return;
}
@@ -2266,7 +2168,6 @@ void qmp_block_job_complete(const char *device, Error **errp)
void qmp_blockdev_add(BlockdevOptions *options, Error **errp)
{
QmpOutputVisitor *ov = qmp_output_visitor_new();
DriveInfo *dinfo;
QObject *obj;
QDict *qdict;
Error *local_err = NULL;
@@ -2283,10 +2184,8 @@ void qmp_blockdev_add(BlockdevOptions *options, Error **errp)
*
* For now, simply forbidding the combination for all drivers will do. */
if (options->has_aio && options->aio == BLOCKDEV_AIO_OPTIONS_NATIVE) {
bool direct = options->has_cache &&
options->cache->has_direct &&
options->cache->direct;
if (!direct) {
bool direct = options->cache->has_direct && options->cache->direct;
if (!options->has_cache && !direct) {
error_setg(errp, "aio=native requires cache.direct=true");
goto fail;
}
@@ -2294,7 +2193,7 @@ void qmp_blockdev_add(BlockdevOptions *options, Error **errp)
visit_type_BlockdevOptions(qmp_output_get_visitor(ov),
&options, NULL, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto fail;
}
@@ -2304,18 +2203,12 @@ void qmp_blockdev_add(BlockdevOptions *options, Error **errp)
qdict_flatten(qdict);
dinfo = blockdev_init(NULL, qdict, &local_err);
if (local_err) {
blockdev_init(qdict, IF_NONE, &local_err);
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
goto fail;
}
if (bdrv_key_required(dinfo->bdrv)) {
drive_uninit(dinfo);
error_setg(errp, "blockdev-add doesn't support encrypted devices");
goto fail;
}
fail:
qmp_output_visitor_cleanup(ov);
}
@@ -2350,6 +2243,10 @@ QemuOptsList qemu_common_drive_opts = {
.name = "snapshot",
.type = QEMU_OPT_BOOL,
.help = "enable/disable snapshot mode",
},{
.name = "file",
.type = QEMU_OPT_STRING,
.help = "disk image",
},{
.name = "discard",
.type = QEMU_OPT_STRING,


@@ -61,7 +61,7 @@ void *block_job_create(const BlockJobDriver *driver, BlockDriverState *bs,
Error *local_err = NULL;
block_job_set_speed(job, speed, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
bs->job = NULL;
g_free(job);
bdrv_set_in_use(bs, 0);
@@ -92,7 +92,7 @@ void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
return;
}
job->driver->set_speed(job, speed, &local_err);
if (local_err) {
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return;
}


@@ -1000,7 +1000,7 @@ int main(int argc, char **argv)
memset(ts, 0, sizeof(TaskState));
init_task_state(ts);
ts->info = info;
cpu->opaque = ts;
env->opaque = ts;
#if defined(TARGET_I386)
cpu_x86_set_cpl(env, 3);

configure (vendored, 987): file diff suppressed because it is too large


@@ -23,22 +23,29 @@
#include "qemu/atomic.h"
#include "sysemu/qtest.h"
void cpu_loop_exit(CPUState *cpu)
bool qemu_cpu_has_work(CPUState *cpu)
{
return cpu_has_work(cpu);
}
void cpu_loop_exit(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
cpu->current_tb = NULL;
siglongjmp(cpu->jmp_env, 1);
siglongjmp(env->jmp_env, 1);
}
/* exit the current TB from a signal handler. The host registers are
restored in a state compatible with the CPU emulator
*/
#if defined(CONFIG_SOFTMMU)
void cpu_resume_from_signal(CPUState *cpu, void *puc)
void cpu_resume_from_signal(CPUArchState *env, void *puc)
{
/* XXX: restore cpu registers saved in host registers */
cpu->exception_index = -1;
siglongjmp(cpu->jmp_env, 1);
env->exception_index = -1;
siglongjmp(env->jmp_env, 1);
}
#endif
@@ -46,25 +53,7 @@ void cpu_resume_from_signal(CPUState *cpu, void *puc)
static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, uint8_t *tb_ptr)
{
CPUArchState *env = cpu->env_ptr;
uintptr_t next_tb;
#if defined(DEBUG_DISAS)
if (qemu_loglevel_mask(CPU_LOG_TB_CPU)) {
#if defined(TARGET_I386)
log_cpu_state(cpu, CPU_DUMP_CCOP);
#elif defined(TARGET_M68K)
/* ??? Should not modify env state for dumping. */
cpu_m68k_flush_flags(env, env->cc_op);
env->cc_op = CC_OP_FLAGS;
env->sr = (env->sr & 0xffe0) | env->cc_dest | (env->cc_x << 4);
log_cpu_state(cpu, 0);
#else
log_cpu_state(cpu, 0);
#endif
}
#endif /* DEBUG_DISAS */
next_tb = tcg_qemu_tb_exec(env, tb_ptr);
uintptr_t next_tb = tcg_qemu_tb_exec(env, tb_ptr);
if ((next_tb & TB_EXIT_MASK) > TB_EXIT_IDX1) {
/* We didn't start executing this TB (eg because the instruction
* counter hit zero); we must restore the guest PC to the address
@@ -101,7 +90,7 @@ static void cpu_exec_nocache(CPUArchState *env, int max_cycles,
if (max_cycles > CF_COUNT_MASK)
max_cycles = CF_COUNT_MASK;
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
tb = tb_gen_code(env, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
max_cycles);
cpu->current_tb = tb;
/* execute the generated code */
@@ -116,7 +105,6 @@ static TranslationBlock *tb_find_slow(CPUArchState *env,
target_ulong cs_base,
uint64_t flags)
{
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb, **ptb1;
unsigned int h;
tb_page_addr_t phys_pc, phys_page1;
@@ -154,7 +142,7 @@ static TranslationBlock *tb_find_slow(CPUArchState *env,
}
not_found:
/* if no translated code available, then translate it now */
tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
tb = tb_gen_code(env, pc, cs_base, flags, 0);
found:
/* Move the last found TB to the head of the list */
@@ -164,13 +152,12 @@ static TranslationBlock *tb_find_slow(CPUArchState *env,
tcg_ctx.tb_ctx.tb_phys_hash[h] = tb;
}
/* we add the TB in the virtual pc hash table */
cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;
env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;
return tb;
}
static inline TranslationBlock *tb_find_fast(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb;
target_ulong cs_base, pc;
int flags;
@@ -179,7 +166,7 @@ static inline TranslationBlock *tb_find_fast(CPUArchState *env)
always be the same before a given translated block
is executed. */
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
tb = env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
tb->flags != flags)) {
tb = tb_find_slow(env, pc, cs_base, flags);
@@ -196,11 +183,10 @@ void cpu_set_debug_excp_handler(CPUDebugExcpHandler *handler)
static void cpu_handle_debug_exception(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
if (!env->watchpoint_hit) {
QTAILQ_FOREACH(wp, &env->watchpoints, entry) {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
@@ -219,9 +205,6 @@ int cpu_exec(CPUArchState *env)
#if !(defined(CONFIG_USER_ONLY) && \
(defined(TARGET_M68K) || defined(TARGET_PPC) || defined(TARGET_S390X)))
CPUClass *cc = CPU_GET_CLASS(cpu);
#endif
#ifdef TARGET_I386
X86CPU *x86_cpu = X86_CPU(cpu);
#endif
int ret, interrupt_request;
TranslationBlock *tb;
@@ -279,16 +262,16 @@ int cpu_exec(CPUArchState *env)
#else
#error unsupported target CPU
#endif
cpu->exception_index = -1;
env->exception_index = -1;
/* prepare setjmp context for exception handling */
for(;;) {
if (sigsetjmp(cpu->jmp_env, 0) == 0) {
if (sigsetjmp(env->jmp_env, 0) == 0) {
/* if an exception is pending, we execute it here */
if (cpu->exception_index >= 0) {
if (cpu->exception_index >= EXCP_INTERRUPT) {
if (env->exception_index >= 0) {
if (env->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
ret = cpu->exception_index;
ret = env->exception_index;
if (ret == EXCP_DEBUG) {
cpu_handle_debug_exception(env);
}
@@ -301,11 +284,11 @@ int cpu_exec(CPUArchState *env)
#if defined(TARGET_I386)
cc->do_interrupt(cpu);
#endif
ret = cpu->exception_index;
ret = env->exception_index;
break;
#else
cc->do_interrupt(cpu);
cpu->exception_index = -1;
env->exception_index = -1;
#endif
}
}
@@ -320,8 +303,8 @@ int cpu_exec(CPUArchState *env)
}
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
cpu_loop_exit(cpu);
env->exception_index = EXCP_DEBUG;
cpu_loop_exit(env);
}
#if defined(TARGET_ARM) || defined(TARGET_SPARC) || defined(TARGET_MIPS) || \
defined(TARGET_PPC) || defined(TARGET_ALPHA) || defined(TARGET_CRIS) || \
@@ -329,32 +312,32 @@ int cpu_exec(CPUArchState *env)
if (interrupt_request & CPU_INTERRUPT_HALT) {
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
cpu_loop_exit(cpu);
env->exception_index = EXCP_HLT;
cpu_loop_exit(env);
}
#endif
#if defined(TARGET_I386)
#if !defined(CONFIG_USER_ONLY)
if (interrupt_request & CPU_INTERRUPT_POLL) {
cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
apic_poll_irq(x86_cpu->apic_state);
apic_poll_irq(env->apic_state);
}
#endif
if (interrupt_request & CPU_INTERRUPT_INIT) {
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT,
0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
cpu_loop_exit(cpu);
do_cpu_init(x86_env_get_cpu(env));
env->exception_index = EXCP_HALTED;
cpu_loop_exit(env);
} else if (interrupt_request & CPU_INTERRUPT_SIPI) {
do_cpu_sipi(x86_cpu);
do_cpu_sipi(x86_env_get_cpu(env));
} else if (env->hflags2 & HF2_GIF_MASK) {
if ((interrupt_request & CPU_INTERRUPT_SMI) &&
!(env->hflags & HF_SMM_MASK)) {
cpu_svm_check_intercept_param(env, SVM_EXIT_SMI,
0);
cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
do_smm_enter(x86_cpu);
do_smm_enter(x86_env_get_cpu(env));
next_tb = 0;
} else if ((interrupt_request & CPU_INTERRUPT_NMI) &&
!(env->hflags2 & HF2_NMI_MASK)) {
@@ -391,10 +374,7 @@ int cpu_exec(CPUArchState *env)
/* FIXME: this should respect TPR */
cpu_svm_check_intercept_param(env, SVM_EXIT_VINTR,
0);
intno = ldl_phys(cpu->as,
env->vm_vmcb
+ offsetof(struct vmcb,
control.int_vector));
intno = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_vector));
qemu_log_mask(CPU_LOG_TB_IN_ASM, "Servicing virtual hardware INT=0x%02x\n", intno);
do_interrupt_x86_hardirq(env, intno, 1);
cpu->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
@@ -416,7 +396,7 @@ int cpu_exec(CPUArchState *env)
#elif defined(TARGET_LM32)
if ((interrupt_request & CPU_INTERRUPT_HARD)
&& (env->ie & IE_IE)) {
cpu->exception_index = EXCP_IRQ;
env->exception_index = EXCP_IRQ;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -425,7 +405,7 @@ int cpu_exec(CPUArchState *env)
&& (env->sregs[SR_MSR] & MSR_IE)
&& !(env->sregs[SR_MSR] & (MSR_EIP | MSR_BIP))
&& !(env->iflags & (D_FLAG | IMM_FLAG))) {
cpu->exception_index = EXCP_IRQ;
env->exception_index = EXCP_IRQ;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -433,7 +413,7 @@ int cpu_exec(CPUArchState *env)
if ((interrupt_request & CPU_INTERRUPT_HARD) &&
cpu_mips_hw_interrupts_pending(env)) {
/* Raise it */
cpu->exception_index = EXCP_EXT_INTERRUPT;
env->exception_index = EXCP_EXT_INTERRUPT;
env->error_code = 0;
cc->do_interrupt(cpu);
next_tb = 0;
@@ -450,7 +430,7 @@ int cpu_exec(CPUArchState *env)
idx = EXCP_TICK;
}
if (idx >= 0) {
cpu->exception_index = idx;
env->exception_index = idx;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -465,7 +445,7 @@ int cpu_exec(CPUArchState *env)
if (((type == TT_EXTINT) &&
cpu_pil_allowed(env, pil)) ||
type != TT_EXTINT) {
cpu->exception_index = env->interrupt_index;
env->exception_index = env->interrupt_index;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -473,8 +453,8 @@ int cpu_exec(CPUArchState *env)
}
#elif defined(TARGET_ARM)
if (interrupt_request & CPU_INTERRUPT_FIQ
&& !(env->daif & PSTATE_F)) {
cpu->exception_index = EXCP_FIQ;
&& !(env->uncached_cpsr & CPSR_F)) {
env->exception_index = EXCP_FIQ;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -489,15 +469,15 @@ int cpu_exec(CPUArchState *env)
pc contains a magic address. */
if (interrupt_request & CPU_INTERRUPT_HARD
&& ((IS_M(env) && env->regs[15] < 0xfffffff0)
|| !(env->daif & PSTATE_I))) {
cpu->exception_index = EXCP_IRQ;
|| !(env->uncached_cpsr & CPSR_I))) {
env->exception_index = EXCP_IRQ;
cc->do_interrupt(cpu);
next_tb = 0;
}
#elif defined(TARGET_UNICORE32)
if (interrupt_request & CPU_INTERRUPT_HARD
&& !(env->uncached_asr & ASR_I)) {
cpu->exception_index = UC32_EXCP_INTR;
env->exception_index = UC32_EXCP_INTR;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -532,7 +512,7 @@ int cpu_exec(CPUArchState *env)
}
}
if (idx >= 0) {
cpu->exception_index = idx;
env->exception_index = idx;
env->error_code = 0;
cc->do_interrupt(cpu);
next_tb = 0;
@@ -542,7 +522,7 @@ int cpu_exec(CPUArchState *env)
if (interrupt_request & CPU_INTERRUPT_HARD
&& (env->pregs[PR_CCS] & I_FLAG)
&& !env->locked_irq) {
cpu->exception_index = EXCP_IRQ;
env->exception_index = EXCP_IRQ;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -554,7 +534,7 @@ int cpu_exec(CPUArchState *env)
m_flag_archval = M_FLAG_V32;
}
if ((env->pregs[PR_CCS] & m_flag_archval)) {
cpu->exception_index = EXCP_NMI;
env->exception_index = EXCP_NMI;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -568,7 +548,7 @@ int cpu_exec(CPUArchState *env)
hardware doesn't rely on this, so we
provide/save the vector when the interrupt is
first signalled. */
cpu->exception_index = env->pending_vector;
env->exception_index = env->pending_vector;
do_interrupt_m68k_hardirq(env);
next_tb = 0;
}
@@ -580,7 +560,7 @@ int cpu_exec(CPUArchState *env)
}
#elif defined(TARGET_XTENSA)
if (interrupt_request & CPU_INTERRUPT_HARD) {
cpu->exception_index = EXC_IRQ;
env->exception_index = EXC_IRQ;
cc->do_interrupt(cpu);
next_tb = 0;
}
@@ -596,9 +576,25 @@ int cpu_exec(CPUArchState *env)
}
if (unlikely(cpu->exit_request)) {
cpu->exit_request = 0;
cpu->exception_index = EXCP_INTERRUPT;
cpu_loop_exit(cpu);
env->exception_index = EXCP_INTERRUPT;
cpu_loop_exit(env);
}
#if defined(DEBUG_DISAS)
if (qemu_loglevel_mask(CPU_LOG_TB_CPU)) {
/* restore flags in standard format */
#if defined(TARGET_I386)
log_cpu_state(cpu, CPU_DUMP_CCOP);
#elif defined(TARGET_M68K)
cpu_m68k_flush_flags(env, env->cc_op);
env->cc_op = CC_OP_FLAGS;
env->sr = (env->sr & 0xffe0)
| env->cc_dest | (env->cc_x << 4);
log_cpu_state(cpu, 0);
#else
log_cpu_state(cpu, 0);
#endif
}
#endif /* DEBUG_DISAS */
spin_lock(&tcg_ctx.tb_ctx.tb_lock);
tb = tb_find_fast(env);
/* Note: we do it here to avoid a gcc bug on Mac OS X when
@@ -650,25 +646,25 @@ int cpu_exec(CPUArchState *env)
/* Instruction counter expired. */
int insns_left;
tb = (TranslationBlock *)(next_tb & ~TB_EXIT_MASK);
insns_left = cpu->icount_decr.u32;
if (cpu->icount_extra && insns_left >= 0) {
insns_left = env->icount_decr.u32;
if (env->icount_extra && insns_left >= 0) {
/* Refill decrementer and continue execution. */
cpu->icount_extra += insns_left;
if (cpu->icount_extra > 0xffff) {
env->icount_extra += insns_left;
if (env->icount_extra > 0xffff) {
insns_left = 0xffff;
} else {
insns_left = cpu->icount_extra;
insns_left = env->icount_extra;
}
cpu->icount_extra -= insns_left;
cpu->icount_decr.u16.low = insns_left;
env->icount_extra -= insns_left;
env->icount_decr.u16.low = insns_left;
} else {
if (insns_left > 0) {
/* Execute remaining instructions. */
cpu_exec_nocache(env, insns_left, tb);
}
cpu->exception_index = EXCP_INTERRUPT;
env->exception_index = EXCP_INTERRUPT;
next_tb = 0;
cpu_loop_exit(cpu);
cpu_loop_exit(env);
}
break;
}
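A quick worked example of the refill above, with invented numbers: if 3 instructions are left over (insns_left = 3) and icount_extra = 0x12000 is still outstanding, the budget becomes 0x12003; the 16-bit decrementer can hold at most 0xffff, so icount_decr.u16.low is reloaded with 0xffff and icount_extra drops to 0x2004, to be topped up again on the next TB exit.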
@@ -688,9 +684,6 @@ int cpu_exec(CPUArchState *env)
#if !(defined(CONFIG_USER_ONLY) && \
(defined(TARGET_M68K) || defined(TARGET_PPC) || defined(TARGET_S390X)))
cc = CPU_GET_CLASS(cpu);
#endif
#ifdef TARGET_I386
x86_cpu = X86_CPU(cpu);
#endif
}
} /* for(;;) */

cpus.c (57)

@@ -76,7 +76,7 @@ static bool cpu_thread_is_idle(CPUState *cpu)
if (cpu_is_stopped(cpu)) {
return true;
}
if (!cpu->halted || cpu_has_work(cpu) ||
if (!cpu->halted || qemu_cpu_has_work(cpu) ||
kvm_halt_in_kernel()) {
return false;
}
@@ -139,10 +139,11 @@ static int64_t cpu_get_icount_locked(void)
icount = qemu_icount;
if (cpu) {
if (!cpu_can_do_io(cpu)) {
CPUArchState *env = cpu->env_ptr;
if (!can_do_io(env)) {
fprintf(stderr, "Bad clock read\n");
}
icount -= (cpu->icount_decr.u16.low + cpu->icount_extra);
icount -= (env->icount_decr.u16.low + env->icount_extra);
}
return qemu_icount_bias + (icount << icount_time_shift);
}
@@ -1116,25 +1117,16 @@ void resume_all_vcpus(void)
}
}
/* For temporary buffers for forming a name */
#define VCPU_THREAD_NAME_SIZE 16
static void qemu_tcg_init_vcpu(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
tcg_cpu_address_space_init(cpu, cpu->as);
/* share a single thread for all cpus with TCG */
if (!tcg_cpu_thread) {
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
tcg_halt_cond = cpu->halt_cond;
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, qemu_tcg_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
qemu_thread_create(cpu->thread, qemu_tcg_cpu_thread_fn, cpu,
QEMU_THREAD_JOINABLE);
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
@@ -1150,15 +1142,11 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
static void qemu_kvm_start_vcpu(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, qemu_kvm_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
qemu_thread_create(cpu->thread, qemu_kvm_cpu_thread_fn, cpu,
QEMU_THREAD_JOINABLE);
while (!cpu->created) {
qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
}
@@ -1166,14 +1154,10 @@ static void qemu_kvm_start_vcpu(CPUState *cpu)
static void qemu_dummy_start_vcpu(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/DUMMY",
cpu->cpu_index);
qemu_thread_create(cpu->thread, thread_name, qemu_dummy_cpu_thread_fn, cpu,
qemu_thread_create(cpu->thread, qemu_dummy_cpu_thread_fn, cpu,
QEMU_THREAD_JOINABLE);
while (!cpu->created) {
qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
@@ -1235,7 +1219,6 @@ int vm_stop_force_state(RunState state)
static int tcg_cpu_exec(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
int ret;
#ifdef CONFIG_PROFILER
int64_t ti;
@@ -1248,9 +1231,9 @@ static int tcg_cpu_exec(CPUArchState *env)
int64_t count;
int64_t deadline;
int decr;
qemu_icount -= (cpu->icount_decr.u16.low + cpu->icount_extra);
cpu->icount_decr.u16.low = 0;
cpu->icount_extra = 0;
qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
env->icount_decr.u16.low = 0;
env->icount_extra = 0;
deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
/* Maintain prior (possibly buggy) behaviour where if no deadline
@@ -1266,8 +1249,8 @@ static int tcg_cpu_exec(CPUArchState *env)
qemu_icount += count;
decr = (count > 0xffff) ? 0xffff : count;
count -= decr;
cpu->icount_decr.u16.low = decr;
cpu->icount_extra = count;
env->icount_decr.u16.low = decr;
env->icount_extra = count;
}
ret = cpu_exec(env);
#ifdef CONFIG_PROFILER
@@ -1276,9 +1259,10 @@ static int tcg_cpu_exec(CPUArchState *env)
if (use_icount) {
/* Fold pending instructions back into the
instruction counter, and clear the interrupt flag. */
qemu_icount -= (cpu->icount_decr.u16.low + cpu->icount_extra);
cpu->icount_decr.u32 = 0;
cpu->icount_extra = 0;
qemu_icount -= (env->icount_decr.u16.low
+ env->icount_extra);
env->icount_decr.u32 = 0;
env->icount_extra = 0;
}
return ret;
}
@@ -1474,11 +1458,12 @@ void qmp_inject_nmi(Error **errp)
CPU_FOREACH(cs) {
X86CPU *cpu = X86_CPU(cs);
CPUX86State *env = &cpu->env;
if (!cpu->apic_state) {
if (!env->apic_state) {
cpu_interrupt(cs, CPU_INTERRUPT_NMI);
} else {
apic_deliver_nmi(cpu->apic_state);
apic_deliver_nmi(env->apic_state);
}
}
#elif defined(TARGET_S390X)


@@ -26,7 +26,6 @@
#include "exec/cputlb.h"
#include "exec/memory-internal.h"
#include "exec/ram_addr.h"
//#define DEBUG_TLB
//#define DEBUG_TLB_CHECK
@@ -34,6 +33,13 @@
/* statistics */
int tlb_flush_count;
static const CPUTLBEntry s_cputlb_empty_entry = {
.addr_read = -1,
.addr_write = -1,
.addr_code = -1,
.addend = -1,
};
/* NOTE:
* If flush_global is true (the usual case), flush all tlb entries.
* If flush_global is false, flush (at least) all tlb entries not
@@ -46,9 +52,10 @@ int tlb_flush_count;
* entries from the TLB at any time, so flushing more entries than
* required is only an efficiency issue, not a correctness issue.
*/
void tlb_flush(CPUState *cpu, int flush_global)
void tlb_flush(CPUArchState *env, int flush_global)
{
CPUArchState *env = cpu->env_ptr;
CPUState *cpu = ENV_GET_CPU(env);
int i;
#if defined(DEBUG_TLB)
printf("tlb_flush:\n");
@@ -57,8 +64,15 @@ void tlb_flush(CPUState *cpu, int flush_global)
links while we are modifying them */
cpu->current_tb = NULL;
memset(env->tlb_table, -1, sizeof(env->tlb_table));
memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
for (i = 0; i < CPU_TLB_SIZE; i++) {
int mmu_idx;
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
env->tlb_table[mmu_idx][i] = s_cputlb_empty_entry;
}
}
memset(env->tb_jmp_cache, 0, TB_JMP_CACHE_SIZE * sizeof (void *));
env->tlb_flush_addr = -1;
env->tlb_flush_mask = 0;
@@ -73,13 +87,13 @@ static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
(TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
addr == (tlb_entry->addr_code &
(TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
memset(tlb_entry, -1, sizeof(*tlb_entry));
*tlb_entry = s_cputlb_empty_entry;
}
}
void tlb_flush_page(CPUState *cpu, target_ulong addr)
void tlb_flush_page(CPUArchState *env, target_ulong addr)
{
CPUArchState *env = cpu->env_ptr;
CPUState *cpu = ENV_GET_CPU(env);
int i;
int mmu_idx;
@@ -93,7 +107,7 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
env->tlb_flush_addr, env->tlb_flush_mask);
#endif
tlb_flush(cpu, 1);
tlb_flush(env, 1);
return;
}
/* must reset current TB so that interrupts cannot modify the
@@ -106,23 +120,24 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
}
tb_flush_jmp_cache(cpu, addr);
tb_flush_jmp_cache(env, addr);
}
/* update the TLBs so that writes to code in the virtual page 'addr'
can be detected */
void tlb_protect_code(ram_addr_t ram_addr)
{
cpu_physical_memory_reset_dirty(ram_addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_CODE);
cpu_physical_memory_reset_dirty(ram_addr,
ram_addr + TARGET_PAGE_SIZE,
CODE_DIRTY_FLAG);
}
/* update the TLB so that writes in physical page 'phys_addr' are no longer
tested for self modifying code */
void tlb_unprotect_code_phys(CPUState *cpu, ram_addr_t ram_addr,
void tlb_unprotect_code_phys(CPUArchState *env, ram_addr_t ram_addr,
target_ulong vaddr)
{
cpu_physical_memory_set_dirty_flag(ram_addr, DIRTY_MEMORY_CODE);
cpu_physical_memory_set_dirty_flags(ram_addr, CODE_DIRTY_FLAG);
}
static bool tlb_is_dirty_ram(CPUTLBEntry *tlbe)
@@ -221,11 +236,10 @@ static void tlb_add_large_page(CPUArchState *env, target_ulong vaddr,
/* Add a new TLB entry. At most one entry for a given virtual address
is permitted. Only a single TARGET_PAGE_SIZE region is mapped, the
supplied size is only used by tlb_flush_page. */
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
void tlb_set_page(CPUArchState *env, target_ulong vaddr,
hwaddr paddr, int prot,
int mmu_idx, target_ulong size)
{
CPUArchState *env = cpu->env_ptr;
MemoryRegionSection *section;
unsigned int index;
target_ulong address;
@@ -240,7 +254,7 @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
}
sz = size;
section = address_space_translate_for_iotlb(cpu->as, paddr,
section = address_space_translate_for_iotlb(&address_space_memory, paddr,
&xlat, &sz);
assert(sz >= TARGET_PAGE_SIZE);
@@ -261,7 +275,7 @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
}
code_address = address;
iotlb = memory_region_section_get_iotlb(cpu, section, vaddr, paddr, xlat,
iotlb = memory_region_section_get_iotlb(env, section, vaddr, paddr, xlat,
prot, &address);
index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
@@ -285,8 +299,7 @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
/* Write access calls the I/O callback. */
te->addr_write = address | TLB_MMIO;
} else if (memory_region_is_ram(section->mr)
&& cpu_physical_memory_is_clean(section->mr->ram_addr
+ xlat)) {
&& !cpu_physical_memory_is_dirty(section->mr->ram_addr + xlat)) {
te->addr_write = address | TLB_NOTDIRTY;
} else {
te->addr_write = address;
@@ -306,7 +319,6 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
int mmu_idx, page_index, pd;
void *p;
MemoryRegion *mr;
CPUState *cpu = ENV_GET_CPU(env1);
page_index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
mmu_idx = cpu_mmu_index(env1);
@@ -315,14 +327,15 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
cpu_ldub_code(env1, addr);
}
pd = env1->iotlb[mmu_idx][page_index] & ~TARGET_PAGE_MASK;
mr = iotlb_to_region(cpu->as, pd);
mr = iotlb_to_region(pd);
if (memory_region_is_unassigned(mr)) {
CPUState *cpu = ENV_GET_CPU(env1);
CPUClass *cc = CPU_GET_CLASS(cpu);
if (cc->do_unassigned_access) {
cc->do_unassigned_access(cpu, addr, false, true, 0, 4);
} else {
cpu_abort(cpu, "Trying to execute code outside RAM or ROM at 0x"
cpu_abort(env1, "Trying to execute code outside RAM or ROM at 0x"
TARGET_FMT_lx "\n", addr);
}
}


@@ -1,3 +0,0 @@
# Default configuration for aarch64-linux-user
CONFIG_GDBSTUB_XML=y


@@ -1,6 +0,0 @@
# Default configuration for aarch64-softmmu
# We support all the 32 bit boards so need all their config
include arm-softmmu.mak
# Currently no 64-bit specific config requirements


@@ -27,7 +27,6 @@ CONFIG_SSI_SD=y
CONFIG_SSI_M25P80=y
CONFIG_LAN9118=y
CONFIG_SMC91C111=y
CONFIG_ALLWINNER_EMAC=y
CONFIG_DS1338=y
CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
@@ -42,7 +41,6 @@ CONFIG_ARM_GIC=y
CONFIG_ARM_GIC_KVM=$(CONFIG_KVM)
CONFIG_ARM_TIMER=y
CONFIG_ARM_MPTIMER=y
CONFIG_A9_GTIMER=y
CONFIG_PL011=y
CONFIG_PL022=y
CONFIG_PL031=y
@@ -65,7 +63,6 @@ CONFIG_XILINX_SPIPS=y
CONFIG_ARM11SCU=y
CONFIG_A9SCU=y
CONFIG_DIGIC=y
CONFIG_MARVELL_88W8618=y
CONFIG_OMAP=y
CONFIG_TSC210X=y
@@ -84,7 +81,3 @@ CONFIG_VERSATILE_I2C=y
CONFIG_SDHCI=y
CONFIG_INTEGRATOR_DEBUG=y
CONFIG_ALLWINNER_A10_PIT=y
CONFIG_ALLWINNER_A10_PIC=y
CONFIG_ALLWINNER_A10=y


@@ -41,11 +41,8 @@ CONFIG_I8259=y
CONFIG_XILINX=y
CONFIG_XILINX_ETHLITE=y
CONFIG_OPENPIC=y
CONFIG_PREP=y
CONFIG_MAC=y
CONFIG_E500=y
CONFIG_OPENPIC_KVM=$(and $(CONFIG_E500),$(CONFIG_KVM))
# For PReP
CONFIG_MC146818RTC=y
CONFIG_ETSEC=y
CONFIG_ISA_TESTDEV=y


@@ -42,8 +42,6 @@ CONFIG_XILINX=y
CONFIG_XILINX_ETHLITE=y
CONFIG_OPENPIC=y
CONFIG_PSERIES=y
CONFIG_PREP=y
CONFIG_MAC=y
CONFIG_E500=y
CONFIG_OPENPIC_KVM=$(and $(CONFIG_E500),$(CONFIG_KVM))
# For pSeries


@@ -3,12 +3,32 @@
include pci.mak
include sound.mak
include usb.mak
CONFIG_ISA_MMIO=y
CONFIG_ESCC=y
CONFIG_M48T59=y
CONFIG_VGA=y
CONFIG_VGA_PCI=y
CONFIG_SERIAL=y
CONFIG_I8254=y
CONFIG_PCKBD=y
CONFIG_FDC=y
CONFIG_I8257=y
CONFIG_OPENPIC=y
CONFIG_PREP_PCI=y
CONFIG_MACIO=y
CONFIG_CUDA=y
CONFIG_ADB=y
CONFIG_MAC_NVRAM=y
CONFIG_MAC_DBDMA=y
CONFIG_HEATHROW_PIC=y
CONFIG_GRACKLE_PCI=y
CONFIG_UNIN_PCI=y
CONFIG_DEC_PCI=y
CONFIG_PPCE500_PCI=y
CONFIG_IDE_ISA=y
CONFIG_IDE_CMD646=y
CONFIG_IDE_MACIO=y
CONFIG_NE2000_ISA=y
CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_PTIMER=y
@@ -16,3 +36,8 @@ CONFIG_I8259=y
CONFIG_XILINX=y
CONFIG_XILINX_ETHLITE=y
CONFIG_OPENPIC=y
CONFIG_E500=y
CONFIG_OPENPIC_KVM=$(and $(CONFIG_E500),$(CONFIG_KVM))
# For PReP
CONFIG_MC146818RTC=y
CONFIG_ISA_TESTDEV=y


@@ -1,3 +1,2 @@
CONFIG_VIRTIO=y
CONFIG_SCLPCONSOLE=y
CONFIG_S390_FLIC=$(CONFIG_KVM)


@@ -10,7 +10,6 @@ CONFIG_EMPTY_SLOT=y
CONFIG_PCNET_COMMON=y
CONFIG_LANCE=y
CONFIG_TCX=y
CONFIG_CG3=y
CONFIG_SLAVIO=y
CONFIG_CS4231=y
CONFIG_GRLIB=y


@@ -33,14 +33,12 @@ DriveInfo *add_init_drive(const char *optstr)
{
DriveInfo *dinfo;
QemuOpts *opts;
MachineClass *mc;
opts = drive_def(optstr);
if (!opts)
return NULL;
mc = MACHINE_GET_CLASS(current_machine);
dinfo = drive_init(opts, mc->qemu_machine->block_default_type);
dinfo = drive_init(opts, current_machine->block_default_type);
if (!dinfo) {
qemu_opts_del(opts);
return NULL;


@@ -41,10 +41,6 @@ void *create_device_tree(int *sizep)
if (ret < 0) {
goto fail;
}
ret = fdt_finish_reservemap(fdt);
if (ret < 0) {
goto fail;
}
ret = fdt_begin_node(fdt, "");
if (ret < 0) {
goto fail;
@@ -131,12 +127,12 @@ static int findnode_nofail(void *fdt, const char *node_path)
return offset;
}
int qemu_fdt_setprop(void *fdt, const char *node_path,
const char *property, const void *val, int size)
int qemu_devtree_setprop(void *fdt, const char *node_path,
const char *property, const void *val_array, int size)
{
int r;
r = fdt_setprop(fdt, findnode_nofail(fdt, node_path), property, val, size);
r = fdt_setprop(fdt, findnode_nofail(fdt, node_path), property, val_array, size);
if (r < 0) {
fprintf(stderr, "%s: Couldn't set %s/%s: %s\n", __func__, node_path,
property, fdt_strerror(r));
@@ -146,8 +142,8 @@ int qemu_fdt_setprop(void *fdt, const char *node_path,
return r;
}
int qemu_fdt_setprop_cell(void *fdt, const char *node_path,
const char *property, uint32_t val)
int qemu_devtree_setprop_cell(void *fdt, const char *node_path,
const char *property, uint32_t val)
{
int r;
@@ -161,15 +157,15 @@ int qemu_fdt_setprop_cell(void *fdt, const char *node_path,
return r;
}
int qemu_fdt_setprop_u64(void *fdt, const char *node_path,
const char *property, uint64_t val)
int qemu_devtree_setprop_u64(void *fdt, const char *node_path,
const char *property, uint64_t val)
{
val = cpu_to_be64(val);
return qemu_fdt_setprop(fdt, node_path, property, &val, sizeof(val));
return qemu_devtree_setprop(fdt, node_path, property, &val, sizeof(val));
}
int qemu_fdt_setprop_string(void *fdt, const char *node_path,
const char *property, const char *string)
int qemu_devtree_setprop_string(void *fdt, const char *node_path,
const char *property, const char *string)
{
int r;
@@ -183,8 +179,8 @@ int qemu_fdt_setprop_string(void *fdt, const char *node_path,
return r;
}
const void *qemu_fdt_getprop(void *fdt, const char *node_path,
const char *property, int *lenp)
const void *qemu_devtree_getprop(void *fdt, const char *node_path,
const char *property, int *lenp)
{
int len;
const void *r;
@@ -200,11 +196,11 @@ const void *qemu_fdt_getprop(void *fdt, const char *node_path,
return r;
}
uint32_t qemu_fdt_getprop_cell(void *fdt, const char *node_path,
const char *property)
uint32_t qemu_devtree_getprop_cell(void *fdt, const char *node_path,
const char *property)
{
int len;
const uint32_t *p = qemu_fdt_getprop(fdt, node_path, property, &len);
const uint32_t *p = qemu_devtree_getprop(fdt, node_path, property, &len);
if (len != 4) {
fprintf(stderr, "%s: %s/%s not 4 bytes long (not a cell?)\n",
__func__, node_path, property);
@@ -213,7 +209,7 @@ uint32_t qemu_fdt_getprop_cell(void *fdt, const char *node_path,
return be32_to_cpu(*p);
}
uint32_t qemu_fdt_get_phandle(void *fdt, const char *path)
uint32_t qemu_devtree_get_phandle(void *fdt, const char *path)
{
uint32_t r;
@@ -227,15 +223,15 @@ uint32_t qemu_fdt_get_phandle(void *fdt, const char *path)
return r;
}
int qemu_fdt_setprop_phandle(void *fdt, const char *node_path,
const char *property,
const char *target_node_path)
int qemu_devtree_setprop_phandle(void *fdt, const char *node_path,
const char *property,
const char *target_node_path)
{
uint32_t phandle = qemu_fdt_get_phandle(fdt, target_node_path);
return qemu_fdt_setprop_cell(fdt, node_path, property, phandle);
uint32_t phandle = qemu_devtree_get_phandle(fdt, target_node_path);
return qemu_devtree_setprop_cell(fdt, node_path, property, phandle);
}
uint32_t qemu_fdt_alloc_phandle(void *fdt)
uint32_t qemu_devtree_alloc_phandle(void *fdt)
{
static int phandle = 0x0;
@@ -259,7 +255,7 @@ uint32_t qemu_fdt_alloc_phandle(void *fdt)
return phandle++;
}
int qemu_fdt_nop_node(void *fdt, const char *node_path)
int qemu_devtree_nop_node(void *fdt, const char *node_path)
{
int r;
@@ -273,7 +269,7 @@ int qemu_fdt_nop_node(void *fdt, const char *node_path)
return r;
}
int qemu_fdt_add_subnode(void *fdt, const char *name)
int qemu_devtree_add_subnode(void *fdt, const char *name)
{
char *dupname = g_strdup(name);
char *basename = strrchr(dupname, '/');
@@ -303,7 +299,7 @@ int qemu_fdt_add_subnode(void *fdt, const char *name)
return retval;
}
void qemu_fdt_dumpdtb(void *fdt, int size)
void qemu_devtree_dumpdtb(void *fdt, int size)
{
const char *dumpdtb = qemu_opt_get(qemu_get_machine_opts(), "dumpdtb");
@@ -313,11 +309,11 @@ void qemu_fdt_dumpdtb(void *fdt, int size)
}
}
int qemu_fdt_setprop_sized_cells_from_array(void *fdt,
const char *node_path,
const char *property,
int numvalues,
uint64_t *values)
int qemu_devtree_setprop_sized_cells_from_array(void *fdt,
const char *node_path,
const char *property,
int numvalues,
uint64_t *values)
{
uint32_t *propcells;
uint64_t value;
@@ -342,6 +338,6 @@ int qemu_fdt_setprop_sized_cells_from_array(void *fdt,
propcells[cellnum++] = cpu_to_be32(value);
}
return qemu_fdt_setprop(fdt, node_path, property, propcells,
cellnum * sizeof(uint32_t));
return qemu_devtree_setprop(fdt, node_path, property, propcells,
cellnum * sizeof(uint32_t));
}
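
For reference on the sized-cells helper above: flattened device-tree properties are arrays of 32-bit big-endian cells, so a 64-bit value is emitted as two cells with the high word first, which is what the cpu_to_be32() packing loop does. A small standalone sketch with the byte order done by hand (no QEMU headers assumed):

#include <stdint.h>

/* pack one value into 'cells' big-endian 32-bit cells (cells is 1 or 2) */
static void pack_fdt_cells(uint8_t *out, uint64_t value, int cells)
{
    int i;

    if (cells == 2) {
        for (i = 0; i < 4; i++) {           /* high 32 bits, MSB first */
            out[i] = (uint8_t)(value >> (56 - 8 * i));
        }
        out += 4;
    }
    for (i = 0; i < 4; i++) {               /* low 32 bits, MSB first */
        out[i] = (uint8_t)((uint32_t)value >> (24 - 8 * i));
    }
}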

disas.c (14)

@@ -190,7 +190,7 @@ static int print_insn_od_target(bfd_vma pc, disassemble_info *info)
/* Disassemble this for me please... (debugging). 'flags' has the following
values:
i386 - 1 means 16 bit code, 2 means 64 bit code
arm - bit 0 = thumb, bit 1 = reverse endian, bit 2 = A64
arm - bit 0 = thumb, bit 1 = reverse endian
ppc - nonzero means little endian
other targets - unused
*/
@@ -225,15 +225,7 @@ void target_disas(FILE *out, CPUArchState *env, target_ulong code,
}
print_insn = print_insn_i386;
#elif defined(TARGET_ARM)
if (flags & 4) {
/* We might not be compiled with the A64 disassembler
* because it needs a C++ compiler; in that case we will
* fall through to the default print_insn_od case.
*/
#if defined(CONFIG_ARM_A64_DIS)
print_insn = print_insn_arm_a64;
#endif
} else if (flags & 1) {
if (flags & 1) {
print_insn = print_insn_thumb1;
} else {
print_insn = print_insn_arm;
@@ -364,8 +356,6 @@ void disas(FILE *out, void *code, unsigned long size)
#elif defined(_ARCH_PPC)
s.info.disassembler_options = (char *)"any";
print_insn = print_insn_ppc;
#elif defined(__aarch64__) && defined(CONFIG_ARM_A64_DIS)
print_insn = print_insn_arm_a64;
#elif defined(__alpha__)
print_insn = print_insn_alpha;
#elif defined(__sparc__)


@@ -1,10 +1,5 @@
common-obj-$(CONFIG_ALPHA_DIS) += alpha.o
common-obj-$(CONFIG_ARM_DIS) += arm.o
common-obj-$(CONFIG_ARM_A64_DIS) += arm-a64.o
common-obj-$(CONFIG_ARM_A64_DIS) += libvixl/
libvixldir = $(SRC_PATH)/disas/libvixl
$(obj)/arm-a64.o: QEMU_CFLAGS += -I$(libvixldir)
common-obj-$(CONFIG_CRIS_DIS) += cris.o
common-obj-$(CONFIG_HPPA_DIS) += hppa.o
common-obj-$(CONFIG_I386_DIS) += i386.o


@@ -1,87 +0,0 @@
/*
* ARM A64 disassembly output wrapper to libvixl
* Copyright (c) 2013 Linaro Limited
* Written by Claudio Fontana
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "a64/disasm-a64.h"
extern "C" {
#include "disas/bfd.h"
}
using namespace vixl;
static Decoder *vixl_decoder = NULL;
static Disassembler *vixl_disasm = NULL;
/* We don't use libvixl's PrintDisassembler because its output
* is a little unhelpful (trailing newlines, for example).
* Instead we use our own very similar variant so we have
* control over the format.
*/
class QEMUDisassembler : public Disassembler {
public:
explicit QEMUDisassembler(FILE *stream) : stream_(stream) { }
~QEMUDisassembler() { }
protected:
void ProcessOutput(Instruction *instr) {
fprintf(stream_, "%08" PRIx32 " %s",
instr->InstructionBits(), GetOutput());
}
private:
FILE *stream_;
};
static int vixl_is_initialized(void)
{
return vixl_decoder != NULL;
}
static void vixl_init(FILE *f) {
vixl_decoder = new Decoder();
vixl_disasm = new QEMUDisassembler(f);
vixl_decoder->AppendVisitor(vixl_disasm);
}
#define INSN_SIZE 4
/* Disassemble ARM A64 instruction. This is our only entry
* point from QEMU's C code.
*/
int print_insn_arm_a64(uint64_t addr, disassemble_info *info)
{
uint8_t bytes[INSN_SIZE];
uint32_t instr;
int status;
status = info->read_memory_func(addr, bytes, INSN_SIZE, info);
if (status != 0) {
info->memory_error_func(status, addr, info);
return -1;
}
if (!vixl_is_initialized()) {
vixl_init(info->stream);
}
instr = bytes[0] | bytes[1] << 8 | bytes[2] << 16 | bytes[3] << 24;
vixl_decoder->Decode(reinterpret_cast<Instruction*>(&instr));
return INSN_SIZE;
}
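/* Illustrative only (not part of the file): disas.c reaches this entry point
 * through the usual bfd callback structure, roughly
 *
 *     s.info.read_memory_func = target_read_memory;
 *     print_insn = print_insn_arm_a64;
 *     count = print_insn(pc, &s.info);
 *
 * so a failed read is reported via info->memory_error_func as above.
 */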


@@ -171,7 +171,6 @@ static void print_operand_value (char *buf, size_t bufsize, int hex, bfd_vma dis
static void print_displacement (char *, bfd_vma);
static void OP_E (int, int);
static void OP_G (int, int);
-static void OP_vvvv (int, int);
static bfd_vma get64 (void);
static bfd_signed_vma get32 (void);
static bfd_signed_vma get32s (void);
@@ -265,9 +264,6 @@ static int rex_used;
current instruction. */
static int used_prefixes;
-/* The VEX.vvvv register, unencoded. */
-static int vex_reg;
/* Flags stored in PREFIXES. */
#define PREFIX_REPZ 1
#define PREFIX_REPNZ 2
@@ -282,10 +278,6 @@ static int vex_reg;
#define PREFIX_ADDR 0x400
#define PREFIX_FWAIT 0x800
-#define PREFIX_VEX_0F 0x1000
-#define PREFIX_VEX_0F38 0x2000
-#define PREFIX_VEX_0F3A 0x4000
/* Make sure that bytes from INFO->PRIVATE_DATA->BUFFER (inclusive)
to ADDR (exclusive) are valid. Returns 1 for success, longjmps
on error. */
@@ -331,7 +323,6 @@ fetch_data(struct disassemble_info *info, bfd_byte *addr)
#define XX { NULL, 0 }
-#define Bv { OP_vvvv, v_mode }
#define Eb { OP_E, b_mode }
#define Ev { OP_E, v_mode }
#define Ed { OP_E, d_mode }
@@ -680,8 +671,7 @@ fetch_data(struct disassemble_info *info, bfd_byte *addr)
#define PREGRP102 NULL, { { NULL, USE_PREFIX_USER_TABLE }, { NULL, 102 } }
#define PREGRP103 NULL, { { NULL, USE_PREFIX_USER_TABLE }, { NULL, 103 } }
#define PREGRP104 NULL, { { NULL, USE_PREFIX_USER_TABLE }, { NULL, 104 } }
#define PREGRP105 NULL, { { NULL, USE_PREFIX_USER_TABLE }, { NULL, 105 } }
#define PREGRP106 NULL, { { NULL, USE_PREFIX_USER_TABLE }, { NULL, 106 } }
#define X86_64_0 NULL, { { NULL, X86_64_SPECIAL }, { NULL, 0 } }
#define X86_64_1 NULL, { { NULL, X86_64_SPECIAL }, { NULL, 1 } }
@@ -1459,7 +1449,7 @@ static const unsigned char threebyte_0x38_uses_DATA_prefix[256] = {
/* c0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* cf */
/* d0 */ 0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1, /* df */
/* e0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* ef */
-/* f0 */ 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, /* ff */
+/* f0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* ff */
/* ------------------------------- */
/* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
};
@@ -1483,7 +1473,7 @@ static const unsigned char threebyte_0x38_uses_REPNZ_prefix[256] = {
/* c0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* cf */
/* d0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* df */
/* e0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* ef */
-/* f0 */ 1,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0, /* ff */
+/* f0 */ 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* ff */
/* ------------------------------- */
/* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
};
@@ -1507,7 +1497,7 @@ static const unsigned char threebyte_0x38_uses_REPZ_prefix[256] = {
/* c0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* cf */
/* d0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* df */
/* e0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* ef */
-/* f0 */ 0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0, /* ff */
+/* f0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* ff */
/* ------------------------------- */
/* 0 1 2 3 4 5 6 7 8 9 a b c d e f */
};
@@ -2642,17 +2632,17 @@ static const struct dis386 prefix_user_table[][4] = {
/* PREGRP87 */
{
{ "movbe", { Gv, Ev } },
{ "(bad)", { XX } },
{ "movbe", { Gv, Ev } },
{ "(bad)", { XX } },
{ "(bad)", { XX } },
{ "crc32", { Gdq, { CRC32_Fixup, b_mode } } },
},
/* PREGRP88 */
{
{ "movbe", { Ev, Gv } },
{ "(bad)", { XX } },
{ "movbe", { Ev, Gv } },
{ "(bad)", { XX } },
{ "(bad)", { XX } },
{ "crc32", { Gdq, { CRC32_Fixup, v_mode } } },
},
@@ -2784,22 +2774,6 @@ static const struct dis386 prefix_user_table[][4] = {
{ "(bad)", { XX } },
},
-/* PREGRP105 */
-{
-{ "andnS", { Gv, Bv, Ev } },
-{ "(bad)", { XX } },
-{ "(bad)", { XX } },
-{ "(bad)", { XX } },
-},
-/* PREGRP106 */
-{
-{ "bextrS", { Gv, Ev, Bv } },
-{ "sarxS", { Gv, Ev, Bv } },
-{ "shlxS", { Gv, Ev, Bv } },
-{ "shrxS", { Gv, Ev, Bv } },
-},
};
static const struct dis386 x86_64_table[][2] = {
@@ -3097,12 +3071,12 @@ static const struct dis386 three_byte_table[][256] = {
/* f0 */
{ PREGRP87 },
{ PREGRP88 },
-{ PREGRP105 },
{ "(bad)", { XX } },
{ "(bad)", { XX } },
{ "(bad)", { XX } },
{ "(bad)", { XX } },
-{ PREGRP106 },
+{ "(bad)", { XX } },
+{ "(bad)", { XX } },
/* f8 */
{ "(bad)", { XX } },
{ "(bad)", { XX } },
@@ -3503,74 +3477,6 @@ ckprefix (void)
}
}
static void
ckvexprefix (void)
{
int op, vex2, vex3, newrex = 0, newpfx = prefixes;
if (address_mode == mode_16bit) {
return;
}
fetch_data(the_info, codep + 1);
op = *codep;
if (op != 0xc4 && op != 0xc5) {
return;
}
fetch_data(the_info, codep + 2);
vex2 = codep[1];
if (address_mode == mode_32bit && (vex2 & 0xc0) != 0xc0) {
return;
}
if (op == 0xc4) {
/* Three byte VEX prefix. */
fetch_data(the_info, codep + 3);
vex3 = codep[2];
newrex |= (vex2 & 0x80 ? 0 : REX_R);
newrex |= (vex2 & 0x40 ? 0 : REX_X);
newrex |= (vex2 & 0x20 ? 0 : REX_B);
newrex |= (vex3 & 0x80 ? REX_W : 0);
switch (vex2 & 0x1f) { /* VEX.m-mmmm */
case 1:
newpfx |= PREFIX_VEX_0F;
break;
case 2:
newpfx |= PREFIX_VEX_0F | PREFIX_VEX_0F38;
break;
case 3:
newpfx |= PREFIX_VEX_0F | PREFIX_VEX_0F3A;
break;
}
vex2 = vex3;
codep += 3;
} else {
/* Two byte VEX prefix. */
newrex |= (vex2 & 0x80 ? 0 : REX_R);
codep += 2;
}
vex_reg = (~vex2 >> 3) & 15; /* VEX.vvvv */
switch (vex2 & 3) { /* VEX.pp */
case 1:
newpfx |= PREFIX_DATA; /* 0x66 */
break;
case 2:
newpfx |= PREFIX_REPZ; /* 0xf3 */
break;
case 3:
newpfx |= PREFIX_REPNZ; /* 0xf2 */
break;
}
rex = newrex;
prefixes = newpfx;
}
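/* Worked example (illustrative, not from the source): for the byte sequence
 * c4 e2 7b f7 /r, ckvexprefix() sees the three-byte prefix c4 e2 7b.  The
 * R/X/B bits of e2 are all 1, so no REX bits are set; m-mmmm = 2 selects
 * PREFIX_VEX_0F | PREFIX_VEX_0F38; W = 0; the inverted vvvv field gives
 * vex_reg = 0 (xAX); and pp = 3 adds PREFIX_REPNZ.  The following f7 opcode
 * byte is then looked up in the 0f38 table and decoded as shrx via PREGRP106.
 */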
/* Return the name of the prefix byte PREF, or NULL if PREF is not a
prefix byte. */
@@ -3692,7 +3598,6 @@ print_insn (bfd_vma pc, disassemble_info *info)
const char *p;
struct dis_private priv;
unsigned char op;
-unsigned char threebyte;
if (info->mach == bfd_mach_x86_64_intel_syntax
|| info->mach == bfd_mach_x86_64)
@@ -3847,7 +3752,6 @@ print_insn (bfd_vma pc, disassemble_info *info)
obufp = obuf;
ckprefix ();
-ckvexprefix ();
insn_codep = codep;
sizeflag = priv.orig_sizeflag;
@@ -3871,29 +3775,18 @@ print_insn (bfd_vma pc, disassemble_info *info)
}
op = 0;
-if (prefixes & PREFIX_VEX_0F)
-{
-used_prefixes |= PREFIX_VEX_0F | PREFIX_VEX_0F38 | PREFIX_VEX_0F3A;
-if (prefixes & PREFIX_VEX_0F38)
-threebyte = 0x38;
-else if (prefixes & PREFIX_VEX_0F3A)
-threebyte = 0x3a;
-else
-threebyte = *codep++;
-goto vex_opcode;
-}
if (*codep == 0x0f)
{
+unsigned char threebyte;
fetch_data(info, codep + 2);
-threebyte = codep[1];
-codep += 2;
-vex_opcode:
+threebyte = *++codep;
dp = &dis386_twobyte[threebyte];
-need_modrm = twobyte_has_modrm[threebyte];
-uses_DATA_prefix = twobyte_uses_DATA_prefix[threebyte];
-uses_REPNZ_prefix = twobyte_uses_REPNZ_prefix[threebyte];
-uses_REPZ_prefix = twobyte_uses_REPZ_prefix[threebyte];
-uses_LOCK_prefix = (threebyte & ~0x02) == 0x20;
+need_modrm = twobyte_has_modrm[*codep];
+uses_DATA_prefix = twobyte_uses_DATA_prefix[*codep];
+uses_REPNZ_prefix = twobyte_uses_REPNZ_prefix[*codep];
+uses_REPZ_prefix = twobyte_uses_REPZ_prefix[*codep];
+uses_LOCK_prefix = (*codep & ~0x02) == 0x20;
+codep++;
if (dp->name == NULL && dp->op[0].bytemode == IS_3BYTE_OPCODE)
{
fetch_data(info, codep + 2);
@@ -5398,17 +5291,6 @@ OP_G (int bytemode, int sizeflag)
}
}
static void
OP_vvvv (int bytemode, int sizeflags)
{
USED_REX (REX_W);
if (rex & REX_W) {
oappend(names64[vex_reg]);
} else {
oappend(names32[vex_reg]);
}
}
static bfd_vma
get64 (void)
{


@@ -1,30 +0,0 @@
LICENCE
=======
The software in this repository is covered by the following licence.
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,8 +0,0 @@
libvixl_OBJS = utils.o \
a64/instructions-a64.o \
a64/decoder-a64.o \
a64/disasm-a64.o
$(addprefix $(obj)/,$(libvixl_OBJS)): QEMU_CFLAGS += -I$(SRC_PATH)/disas/libvixl
common-obj-$(CONFIG_ARM_A64_DIS) += $(libvixl_OBJS)


@@ -1,12 +0,0 @@
The code in this directory is a subset of libvixl:
https://github.com/armvixl/vixl
(specifically, it is the set of files needed for disassembly only,
taken from libvixl 1.1).
Bugfixes should preferably be sent upstream initially.
The disassembler does not currently support the entire A64 instruction
set. Notably:
* No Advanced SIMD support.
* Limited support for system instructions.
* A few miscellaneous integer and floating point instructions are missing.

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,56 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef VIXL_CPU_A64_H
#define VIXL_CPU_A64_H
#include "globals.h"
namespace vixl {
class CPU {
public:
// Initialise CPU support.
static void SetUp();
// Ensures the data at a given address and with a given size is the same for
// the I and D caches. I and D caches are not automatically coherent on ARM
// so this operation is required before any dynamically generated code can
// safely run.
static void EnsureIAndDCacheCoherency(void *address, size_t length);
private:
// Return the content of the cache type register.
static uint32_t GetCacheType();
// I and D cache line size in bytes.
static unsigned icache_line_size_;
static unsigned dcache_line_size_;
};
} // namespace vixl
#endif // VIXL_CPU_A64_H


@@ -1,712 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "globals.h"
#include "utils.h"
#include "a64/decoder-a64.h"
namespace vixl {
// Top-level instruction decode function.
void Decoder::Decode(Instruction *instr) {
if (instr->Bits(28, 27) == 0) {
VisitUnallocated(instr);
} else {
switch (instr->Bits(27, 24)) {
// 0: PC relative addressing.
case 0x0: DecodePCRelAddressing(instr); break;
// 1: Add/sub immediate.
case 0x1: DecodeAddSubImmediate(instr); break;
// A: Logical shifted register.
// Add/sub with carry.
// Conditional compare register.
// Conditional compare immediate.
// Conditional select.
// Data processing 1 source.
// Data processing 2 source.
// B: Add/sub shifted register.
// Add/sub extended register.
// Data processing 3 source.
case 0xA:
case 0xB: DecodeDataProcessing(instr); break;
// 2: Logical immediate.
// Move wide immediate.
case 0x2: DecodeLogical(instr); break;
// 3: Bitfield.
// Extract.
case 0x3: DecodeBitfieldExtract(instr); break;
// 4: Unconditional branch immediate.
// Exception generation.
// Compare and branch immediate.
// 5: Compare and branch immediate.
// Conditional branch.
// System.
// 6,7: Unconditional branch.
// Test and branch immediate.
case 0x4:
case 0x5:
case 0x6:
case 0x7: DecodeBranchSystemException(instr); break;
// 8,9: Load/store register pair post-index.
// Load register literal.
// Load/store register unscaled immediate.
// Load/store register immediate post-index.
// Load/store register immediate pre-index.
// Load/store register offset.
// Load/store exclusive.
// C,D: Load/store register pair offset.
// Load/store register pair pre-index.
// Load/store register unsigned immediate.
// Advanced SIMD.
case 0x8:
case 0x9:
case 0xC:
case 0xD: DecodeLoadStore(instr); break;
// E: FP fixed point conversion.
// FP integer conversion.
// FP data processing 1 source.
// FP compare.
// FP immediate.
// FP data processing 2 source.
// FP conditional compare.
// FP conditional select.
// Advanced SIMD.
// F: FP data processing 3 source.
// Advanced SIMD.
case 0xE:
case 0xF: DecodeFP(instr); break;
}
}
}
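// Worked example (illustrative, not part of the source): for the instruction
// 0x91000421 (add x1, x1, #0x1), Bits(28, 27) is non-zero and Bits(27, 24)
// is 0x1, so Decode() dispatches to DecodeAddSubImmediate(), which in turn
// calls VisitAddSubImmediate() on every registered visitor.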
void Decoder::AppendVisitor(DecoderVisitor* new_visitor) {
visitors_.remove(new_visitor);
visitors_.push_front(new_visitor);
}
void Decoder::PrependVisitor(DecoderVisitor* new_visitor) {
visitors_.remove(new_visitor);
visitors_.push_back(new_visitor);
}
void Decoder::InsertVisitorBefore(DecoderVisitor* new_visitor,
DecoderVisitor* registered_visitor) {
visitors_.remove(new_visitor);
std::list<DecoderVisitor*>::iterator it;
for (it = visitors_.begin(); it != visitors_.end(); it++) {
if (*it == registered_visitor) {
visitors_.insert(it, new_visitor);
return;
}
}
// We reached the end of the list. The last element must be
// registered_visitor.
ASSERT(*it == registered_visitor);
visitors_.insert(it, new_visitor);
}
void Decoder::InsertVisitorAfter(DecoderVisitor* new_visitor,
DecoderVisitor* registered_visitor) {
visitors_.remove(new_visitor);
std::list<DecoderVisitor*>::iterator it;
for (it = visitors_.begin(); it != visitors_.end(); it++) {
if (*it == registered_visitor) {
it++;
visitors_.insert(it, new_visitor);
return;
}
}
// We reached the end of the list. The last element must be
// registered_visitor.
ASSERT(*it == registered_visitor);
visitors_.push_back(new_visitor);
}
void Decoder::RemoveVisitor(DecoderVisitor* visitor) {
visitors_.remove(visitor);
}
void Decoder::DecodePCRelAddressing(Instruction* instr) {
ASSERT(instr->Bits(27, 24) == 0x0);
// We know bit 28 is set, as <b28:b27> = 0 is filtered out at the top level
// decode.
ASSERT(instr->Bit(28) == 0x1);
VisitPCRelAddressing(instr);
}
void Decoder::DecodeBranchSystemException(Instruction* instr) {
ASSERT((instr->Bits(27, 24) == 0x4) ||
(instr->Bits(27, 24) == 0x5) ||
(instr->Bits(27, 24) == 0x6) ||
(instr->Bits(27, 24) == 0x7) );
switch (instr->Bits(31, 29)) {
case 0:
case 4: {
VisitUnconditionalBranch(instr);
break;
}
case 1:
case 5: {
if (instr->Bit(25) == 0) {
VisitCompareBranch(instr);
} else {
VisitTestBranch(instr);
}
break;
}
case 2: {
if (instr->Bit(25) == 0) {
if ((instr->Bit(24) == 0x1) ||
(instr->Mask(0x01000010) == 0x00000010)) {
VisitUnallocated(instr);
} else {
VisitConditionalBranch(instr);
}
} else {
VisitUnallocated(instr);
}
break;
}
case 6: {
if (instr->Bit(25) == 0) {
if (instr->Bit(24) == 0) {
if ((instr->Bits(4, 2) != 0) ||
(instr->Mask(0x00E0001D) == 0x00200001) ||
(instr->Mask(0x00E0001D) == 0x00400001) ||
(instr->Mask(0x00E0001E) == 0x00200002) ||
(instr->Mask(0x00E0001E) == 0x00400002) ||
(instr->Mask(0x00E0001C) == 0x00600000) ||
(instr->Mask(0x00E0001C) == 0x00800000) ||
(instr->Mask(0x00E0001F) == 0x00A00000) ||
(instr->Mask(0x00C0001C) == 0x00C00000)) {
VisitUnallocated(instr);
} else {
VisitException(instr);
}
} else {
if (instr->Bits(23, 22) == 0) {
const Instr masked_003FF0E0 = instr->Mask(0x003FF0E0);
if ((instr->Bits(21, 19) == 0x4) ||
(masked_003FF0E0 == 0x00033000) ||
(masked_003FF0E0 == 0x003FF020) ||
(masked_003FF0E0 == 0x003FF060) ||
(masked_003FF0E0 == 0x003FF0E0) ||
(instr->Mask(0x00388000) == 0x00008000) ||
(instr->Mask(0x0038E000) == 0x00000000) ||
(instr->Mask(0x0039E000) == 0x00002000) ||
(instr->Mask(0x003AE000) == 0x00002000) ||
(instr->Mask(0x003CE000) == 0x00042000) ||
(instr->Mask(0x003FFFC0) == 0x000320C0) ||
(instr->Mask(0x003FF100) == 0x00032100) ||
(instr->Mask(0x003FF200) == 0x00032200) ||
(instr->Mask(0x003FF400) == 0x00032400) ||
(instr->Mask(0x003FF800) == 0x00032800) ||
(instr->Mask(0x0038F000) == 0x00005000) ||
(instr->Mask(0x0038E000) == 0x00006000)) {
VisitUnallocated(instr);
} else {
VisitSystem(instr);
}
} else {
VisitUnallocated(instr);
}
}
} else {
if ((instr->Bit(24) == 0x1) ||
(instr->Bits(20, 16) != 0x1F) ||
(instr->Bits(15, 10) != 0) ||
(instr->Bits(4, 0) != 0) ||
(instr->Bits(24, 21) == 0x3) ||
(instr->Bits(24, 22) == 0x3)) {
VisitUnallocated(instr);
} else {
VisitUnconditionalBranchToRegister(instr);
}
}
break;
}
case 3:
case 7: {
VisitUnallocated(instr);
break;
}
}
}
void Decoder::DecodeLoadStore(Instruction* instr) {
ASSERT((instr->Bits(27, 24) == 0x8) ||
(instr->Bits(27, 24) == 0x9) ||
(instr->Bits(27, 24) == 0xC) ||
(instr->Bits(27, 24) == 0xD) );
if (instr->Bit(24) == 0) {
if (instr->Bit(28) == 0) {
if (instr->Bit(29) == 0) {
if (instr->Bit(26) == 0) {
// TODO: VisitLoadStoreExclusive.
VisitUnimplemented(instr);
} else {
DecodeAdvSIMDLoadStore(instr);
}
} else {
if ((instr->Bits(31, 30) == 0x3) ||
(instr->Mask(0xC4400000) == 0x40000000)) {
VisitUnallocated(instr);
} else {
if (instr->Bit(23) == 0) {
if (instr->Mask(0xC4400000) == 0xC0400000) {
VisitUnallocated(instr);
} else {
VisitLoadStorePairNonTemporal(instr);
}
} else {
VisitLoadStorePairPostIndex(instr);
}
}
}
} else {
if (instr->Bit(29) == 0) {
if (instr->Mask(0xC4000000) == 0xC4000000) {
VisitUnallocated(instr);
} else {
VisitLoadLiteral(instr);
}
} else {
if ((instr->Mask(0x84C00000) == 0x80C00000) ||
(instr->Mask(0x44800000) == 0x44800000) ||
(instr->Mask(0x84800000) == 0x84800000)) {
VisitUnallocated(instr);
} else {
if (instr->Bit(21) == 0) {
switch (instr->Bits(11, 10)) {
case 0: {
VisitLoadStoreUnscaledOffset(instr);
break;
}
case 1: {
if (instr->Mask(0xC4C00000) == 0xC0800000) {
VisitUnallocated(instr);
} else {
VisitLoadStorePostIndex(instr);
}
break;
}
case 2: {
// TODO: VisitLoadStoreRegisterOffsetUnpriv.
VisitUnimplemented(instr);
break;
}
case 3: {
if (instr->Mask(0xC4C00000) == 0xC0800000) {
VisitUnallocated(instr);
} else {
VisitLoadStorePreIndex(instr);
}
break;
}
}
} else {
if (instr->Bits(11, 10) == 0x2) {
if (instr->Bit(14) == 0) {
VisitUnallocated(instr);
} else {
VisitLoadStoreRegisterOffset(instr);
}
} else {
VisitUnallocated(instr);
}
}
}
}
}
} else {
if (instr->Bit(28) == 0) {
if (instr->Bit(29) == 0) {
VisitUnallocated(instr);
} else {
if ((instr->Bits(31, 30) == 0x3) ||
(instr->Mask(0xC4400000) == 0x40000000)) {
VisitUnallocated(instr);
} else {
if (instr->Bit(23) == 0) {
VisitLoadStorePairOffset(instr);
} else {
VisitLoadStorePairPreIndex(instr);
}
}
}
} else {
if (instr->Bit(29) == 0) {
VisitUnallocated(instr);
} else {
if ((instr->Mask(0x84C00000) == 0x80C00000) ||
(instr->Mask(0x44800000) == 0x44800000) ||
(instr->Mask(0x84800000) == 0x84800000)) {
VisitUnallocated(instr);
} else {
VisitLoadStoreUnsignedOffset(instr);
}
}
}
}
}
void Decoder::DecodeLogical(Instruction* instr) {
ASSERT(instr->Bits(27, 24) == 0x2);
if (instr->Mask(0x80400000) == 0x00400000) {
VisitUnallocated(instr);
} else {
if (instr->Bit(23) == 0) {
VisitLogicalImmediate(instr);
} else {
if (instr->Bits(30, 29) == 0x1) {
VisitUnallocated(instr);
} else {
VisitMoveWideImmediate(instr);
}
}
}
}
void Decoder::DecodeBitfieldExtract(Instruction* instr) {
ASSERT(instr->Bits(27, 24) == 0x3);
if ((instr->Mask(0x80400000) == 0x80000000) ||
(instr->Mask(0x80400000) == 0x00400000) ||
(instr->Mask(0x80008000) == 0x00008000)) {
VisitUnallocated(instr);
} else if (instr->Bit(23) == 0) {
if ((instr->Mask(0x80200000) == 0x00200000) ||
(instr->Mask(0x60000000) == 0x60000000)) {
VisitUnallocated(instr);
} else {
VisitBitfield(instr);
}
} else {
if ((instr->Mask(0x60200000) == 0x00200000) ||
(instr->Mask(0x60000000) != 0x00000000)) {
VisitUnallocated(instr);
} else {
VisitExtract(instr);
}
}
}
void Decoder::DecodeAddSubImmediate(Instruction* instr) {
ASSERT(instr->Bits(27, 24) == 0x1);
if (instr->Bit(23) == 1) {
VisitUnallocated(instr);
} else {
VisitAddSubImmediate(instr);
}
}
void Decoder::DecodeDataProcessing(Instruction* instr) {
ASSERT((instr->Bits(27, 24) == 0xA) ||
(instr->Bits(27, 24) == 0xB) );
if (instr->Bit(24) == 0) {
if (instr->Bit(28) == 0) {
if (instr->Mask(0x80008000) == 0x00008000) {
VisitUnallocated(instr);
} else {
VisitLogicalShifted(instr);
}
} else {
switch (instr->Bits(23, 21)) {
case 0: {
if (instr->Mask(0x0000FC00) != 0) {
VisitUnallocated(instr);
} else {
VisitAddSubWithCarry(instr);
}
break;
}
case 2: {
if ((instr->Bit(29) == 0) ||
(instr->Mask(0x00000410) != 0)) {
VisitUnallocated(instr);
} else {
if (instr->Bit(11) == 0) {
VisitConditionalCompareRegister(instr);
} else {
VisitConditionalCompareImmediate(instr);
}
}
break;
}
case 4: {
if (instr->Mask(0x20000800) != 0x00000000) {
VisitUnallocated(instr);
} else {
VisitConditionalSelect(instr);
}
break;
}
case 6: {
if (instr->Bit(29) == 0x1) {
VisitUnallocated(instr);
} else {
if (instr->Bit(30) == 0) {
if ((instr->Bit(15) == 0x1) ||
(instr->Bits(15, 11) == 0) ||
(instr->Bits(15, 12) == 0x1) ||
(instr->Bits(15, 12) == 0x3) ||
(instr->Bits(15, 13) == 0x3) ||
(instr->Mask(0x8000EC00) == 0x00004C00) ||
(instr->Mask(0x8000E800) == 0x80004000) ||
(instr->Mask(0x8000E400) == 0x80004000)) {
VisitUnallocated(instr);
} else {
VisitDataProcessing2Source(instr);
}
} else {
if ((instr->Bit(13) == 1) ||
(instr->Bits(20, 16) != 0) ||
(instr->Bits(15, 14) != 0) ||
(instr->Mask(0xA01FFC00) == 0x00000C00) ||
(instr->Mask(0x201FF800) == 0x00001800)) {
VisitUnallocated(instr);
} else {
VisitDataProcessing1Source(instr);
}
}
break;
}
}
case 1:
case 3:
case 5:
case 7: VisitUnallocated(instr); break;
}
}
} else {
if (instr->Bit(28) == 0) {
if (instr->Bit(21) == 0) {
if ((instr->Bits(23, 22) == 0x3) ||
(instr->Mask(0x80008000) == 0x00008000)) {
VisitUnallocated(instr);
} else {
VisitAddSubShifted(instr);
}
} else {
if ((instr->Mask(0x00C00000) != 0x00000000) ||
(instr->Mask(0x00001400) == 0x00001400) ||
(instr->Mask(0x00001800) == 0x00001800)) {
VisitUnallocated(instr);
} else {
VisitAddSubExtended(instr);
}
}
} else {
if ((instr->Bit(30) == 0x1) ||
(instr->Bits(30, 29) == 0x1) ||
(instr->Mask(0xE0600000) == 0x00200000) ||
(instr->Mask(0xE0608000) == 0x00400000) ||
(instr->Mask(0x60608000) == 0x00408000) ||
(instr->Mask(0x60E00000) == 0x00E00000) ||
(instr->Mask(0x60E00000) == 0x00800000) ||
(instr->Mask(0x60E00000) == 0x00600000)) {
VisitUnallocated(instr);
} else {
VisitDataProcessing3Source(instr);
}
}
}
}
void Decoder::DecodeFP(Instruction* instr) {
ASSERT((instr->Bits(27, 24) == 0xE) ||
(instr->Bits(27, 24) == 0xF) );
if (instr->Bit(28) == 0) {
DecodeAdvSIMDDataProcessing(instr);
} else {
if (instr->Bit(29) == 1) {
VisitUnallocated(instr);
} else {
if (instr->Bits(31, 30) == 0x3) {
VisitUnallocated(instr);
} else if (instr->Bits(31, 30) == 0x1) {
DecodeAdvSIMDDataProcessing(instr);
} else {
if (instr->Bit(24) == 0) {
if (instr->Bit(21) == 0) {
if ((instr->Bit(23) == 1) ||
(instr->Bit(18) == 1) ||
(instr->Mask(0x80008000) == 0x00000000) ||
(instr->Mask(0x000E0000) == 0x00000000) ||
(instr->Mask(0x000E0000) == 0x000A0000) ||
(instr->Mask(0x00160000) == 0x00000000) ||
(instr->Mask(0x00160000) == 0x00120000)) {
VisitUnallocated(instr);
} else {
VisitFPFixedPointConvert(instr);
}
} else {
if (instr->Bits(15, 10) == 32) {
VisitUnallocated(instr);
} else if (instr->Bits(15, 10) == 0) {
if ((instr->Bits(23, 22) == 0x3) ||
(instr->Mask(0x000E0000) == 0x000A0000) ||
(instr->Mask(0x000E0000) == 0x000C0000) ||
(instr->Mask(0x00160000) == 0x00120000) ||
(instr->Mask(0x00160000) == 0x00140000) ||
(instr->Mask(0x20C40000) == 0x00800000) ||
(instr->Mask(0x20C60000) == 0x00840000) ||
(instr->Mask(0xA0C60000) == 0x80060000) ||
(instr->Mask(0xA0C60000) == 0x00860000) ||
(instr->Mask(0xA0C60000) == 0x00460000) ||
(instr->Mask(0xA0CE0000) == 0x80860000) ||
(instr->Mask(0xA0CE0000) == 0x804E0000) ||
(instr->Mask(0xA0CE0000) == 0x000E0000) ||
(instr->Mask(0xA0D60000) == 0x00160000) ||
(instr->Mask(0xA0D60000) == 0x80560000) ||
(instr->Mask(0xA0D60000) == 0x80960000)) {
VisitUnallocated(instr);
} else {
VisitFPIntegerConvert(instr);
}
} else if (instr->Bits(14, 10) == 16) {
const Instr masked_A0DF8000 = instr->Mask(0xA0DF8000);
if ((instr->Mask(0x80180000) != 0) ||
(masked_A0DF8000 == 0x00020000) ||
(masked_A0DF8000 == 0x00030000) ||
(masked_A0DF8000 == 0x00068000) ||
(masked_A0DF8000 == 0x00428000) ||
(masked_A0DF8000 == 0x00430000) ||
(masked_A0DF8000 == 0x00468000) ||
(instr->Mask(0xA0D80000) == 0x00800000) ||
(instr->Mask(0xA0DE0000) == 0x00C00000) ||
(instr->Mask(0xA0DF0000) == 0x00C30000) ||
(instr->Mask(0xA0DC0000) == 0x00C40000)) {
VisitUnallocated(instr);
} else {
VisitFPDataProcessing1Source(instr);
}
} else if (instr->Bits(13, 10) == 8) {
if ((instr->Bits(15, 14) != 0) ||
(instr->Bits(2, 0) != 0) ||
(instr->Mask(0x80800000) != 0x00000000)) {
VisitUnallocated(instr);
} else {
VisitFPCompare(instr);
}
} else if (instr->Bits(12, 10) == 4) {
if ((instr->Bits(9, 5) != 0) ||
(instr->Mask(0x80800000) != 0x00000000)) {
VisitUnallocated(instr);
} else {
VisitFPImmediate(instr);
}
} else {
if (instr->Mask(0x80800000) != 0x00000000) {
VisitUnallocated(instr);
} else {
switch (instr->Bits(11, 10)) {
case 1: {
VisitFPConditionalCompare(instr);
break;
}
case 2: {
if ((instr->Bits(15, 14) == 0x3) ||
(instr->Mask(0x00009000) == 0x00009000) ||
(instr->Mask(0x0000A000) == 0x0000A000)) {
VisitUnallocated(instr);
} else {
VisitFPDataProcessing2Source(instr);
}
break;
}
case 3: {
VisitFPConditionalSelect(instr);
break;
}
default: UNREACHABLE();
}
}
}
}
} else {
// Bit 30 == 1 has been handled earlier.
ASSERT(instr->Bit(30) == 0);
if (instr->Mask(0xA0800000) != 0) {
VisitUnallocated(instr);
} else {
VisitFPDataProcessing3Source(instr);
}
}
}
}
}
}
void Decoder::DecodeAdvSIMDLoadStore(Instruction* instr) {
// TODO: Implement Advanced SIMD load/store instruction decode.
ASSERT(instr->Bits(29, 25) == 0x6);
VisitUnimplemented(instr);
}
void Decoder::DecodeAdvSIMDDataProcessing(Instruction* instr) {
// TODO: Implement Advanced SIMD data processing instruction decode.
ASSERT(instr->Bits(27, 25) == 0x7);
VisitUnimplemented(instr);
}
#define DEFINE_VISITOR_CALLERS(A) \
void Decoder::Visit##A(Instruction *instr) { \
ASSERT(instr->Mask(A##FMask) == A##Fixed); \
std::list<DecoderVisitor*>::iterator it; \
for (it = visitors_.begin(); it != visitors_.end(); it++) { \
(*it)->Visit##A(instr); \
} \
}
VISITOR_LIST(DEFINE_VISITOR_CALLERS)
#undef DEFINE_VISITOR_CALLERS
} // namespace vixl


@@ -1,198 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef VIXL_A64_DECODER_A64_H_
#define VIXL_A64_DECODER_A64_H_
#include <list>
#include "globals.h"
#include "a64/instructions-a64.h"
// List macro containing all visitors needed by the decoder class.
#define VISITOR_LIST(V) \
V(PCRelAddressing) \
V(AddSubImmediate) \
V(LogicalImmediate) \
V(MoveWideImmediate) \
V(Bitfield) \
V(Extract) \
V(UnconditionalBranch) \
V(UnconditionalBranchToRegister) \
V(CompareBranch) \
V(TestBranch) \
V(ConditionalBranch) \
V(System) \
V(Exception) \
V(LoadStorePairPostIndex) \
V(LoadStorePairOffset) \
V(LoadStorePairPreIndex) \
V(LoadStorePairNonTemporal) \
V(LoadLiteral) \
V(LoadStoreUnscaledOffset) \
V(LoadStorePostIndex) \
V(LoadStorePreIndex) \
V(LoadStoreRegisterOffset) \
V(LoadStoreUnsignedOffset) \
V(LogicalShifted) \
V(AddSubShifted) \
V(AddSubExtended) \
V(AddSubWithCarry) \
V(ConditionalCompareRegister) \
V(ConditionalCompareImmediate) \
V(ConditionalSelect) \
V(DataProcessing1Source) \
V(DataProcessing2Source) \
V(DataProcessing3Source) \
V(FPCompare) \
V(FPConditionalCompare) \
V(FPConditionalSelect) \
V(FPImmediate) \
V(FPDataProcessing1Source) \
V(FPDataProcessing2Source) \
V(FPDataProcessing3Source) \
V(FPIntegerConvert) \
V(FPFixedPointConvert) \
V(Unallocated) \
V(Unimplemented)
namespace vixl {
// The Visitor interface. Disassembler and simulator (and other tools)
// must provide implementations for all of these functions.
class DecoderVisitor {
public:
#define DECLARE(A) virtual void Visit##A(Instruction* instr) = 0;
VISITOR_LIST(DECLARE)
#undef DECLARE
virtual ~DecoderVisitor() {}
private:
// Visitors are registered in a list.
std::list<DecoderVisitor*> visitors_;
friend class Decoder;
};
class Decoder: public DecoderVisitor {
public:
Decoder() {}
// Top-level instruction decoder function. Decodes an instruction and calls
// the visitor functions registered with the Decoder class.
void Decode(Instruction *instr);
// Register a new visitor class with the decoder.
// Decode() will call the corresponding visitor method from all registered
// visitor classes when decoding reaches the leaf node of the instruction
// decode tree.
// Visitors are called in the order.
// A visitor can only be registered once.
// Registering an already registered visitor will update its position.
//
// d.AppendVisitor(V1);
// d.AppendVisitor(V2);
// d.PrependVisitor(V2); // Move V2 at the start of the list.
// d.InsertVisitorBefore(V3, V2);
// d.AppendVisitor(V4);
// d.AppendVisitor(V4); // No effect.
//
// d.Decode(i);
//
// will call in order visitor methods in V3, V2, V1, V4.
void AppendVisitor(DecoderVisitor* visitor);
void PrependVisitor(DecoderVisitor* visitor);
void InsertVisitorBefore(DecoderVisitor* new_visitor,
DecoderVisitor* registered_visitor);
void InsertVisitorAfter(DecoderVisitor* new_visitor,
DecoderVisitor* registered_visitor);
// Remove a previously registered visitor class from the list of visitors
// stored by the decoder.
void RemoveVisitor(DecoderVisitor* visitor);
#define DECLARE(A) void Visit##A(Instruction* instr);
VISITOR_LIST(DECLARE)
#undef DECLARE
private:
// Decode the PC relative addressing instruction, and call the corresponding
// visitors.
// On entry, instruction bits 27:24 = 0x0.
void DecodePCRelAddressing(Instruction* instr);
// Decode the add/subtract immediate instruction, and call the correspoding
// visitors.
// On entry, instruction bits 27:24 = 0x1.
void DecodeAddSubImmediate(Instruction* instr);
// Decode the branch, system command, and exception generation parts of
// the instruction tree, and call the corresponding visitors.
// On entry, instruction bits 27:24 = {0x4, 0x5, 0x6, 0x7}.
void DecodeBranchSystemException(Instruction* instr);
// Decode the load and store parts of the instruction tree, and call
// the corresponding visitors.
// On entry, instruction bits 27:24 = {0x8, 0x9, 0xC, 0xD}.
void DecodeLoadStore(Instruction* instr);
// Decode the logical immediate and move wide immediate parts of the
// instruction tree, and call the corresponding visitors.
// On entry, instruction bits 27:24 = 0x2.
void DecodeLogical(Instruction* instr);
// Decode the bitfield and extraction parts of the instruction tree,
// and call the corresponding visitors.
// On entry, instruction bits 27:24 = 0x3.
void DecodeBitfieldExtract(Instruction* instr);
// Decode the data processing parts of the instruction tree, and call the
// corresponding visitors.
// On entry, instruction bits 27:24 = {0x1, 0xA, 0xB}.
void DecodeDataProcessing(Instruction* instr);
// Decode the floating point parts of the instruction tree, and call the
// corresponding visitors.
// On entry, instruction bits 27:24 = {0xE, 0xF}.
void DecodeFP(Instruction* instr);
// Decode the Advanced SIMD (NEON) load/store part of the instruction tree,
// and call the corresponding visitors.
// On entry, instruction bits 29:25 = 0x6.
void DecodeAdvSIMDLoadStore(Instruction* instr);
// Decode the Advanced SIMD (NEON) data processing part of the instruction
// tree, and call the corresponding visitors.
// On entry, instruction bits 27:25 = 0x7.
void DecodeAdvSIMDDataProcessing(Instruction* instr);
};
} // namespace vixl
#endif // VIXL_A64_DECODER_A64_H_

File diff suppressed because it is too large

@@ -1,109 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef VIXL_A64_DISASM_A64_H
#define VIXL_A64_DISASM_A64_H
#include "globals.h"
#include "utils.h"
#include "instructions-a64.h"
#include "decoder-a64.h"
namespace vixl {
class Disassembler: public DecoderVisitor {
public:
Disassembler();
Disassembler(char* text_buffer, int buffer_size);
virtual ~Disassembler();
char* GetOutput();
// Declare all Visitor functions.
#define DECLARE(A) void Visit##A(Instruction* instr);
VISITOR_LIST(DECLARE)
#undef DECLARE
protected:
virtual void ProcessOutput(Instruction* instr);
private:
void Format(Instruction* instr, const char* mnemonic, const char* format);
void Substitute(Instruction* instr, const char* string);
int SubstituteField(Instruction* instr, const char* format);
int SubstituteRegisterField(Instruction* instr, const char* format);
int SubstituteImmediateField(Instruction* instr, const char* format);
int SubstituteLiteralField(Instruction* instr, const char* format);
int SubstituteBitfieldImmediateField(Instruction* instr, const char* format);
int SubstituteShiftField(Instruction* instr, const char* format);
int SubstituteExtendField(Instruction* instr, const char* format);
int SubstituteConditionField(Instruction* instr, const char* format);
int SubstitutePCRelAddressField(Instruction* instr, const char* format);
int SubstituteBranchTargetField(Instruction* instr, const char* format);
int SubstituteLSRegOffsetField(Instruction* instr, const char* format);
int SubstitutePrefetchField(Instruction* instr, const char* format);
inline bool RdIsZROrSP(Instruction* instr) const {
return (instr->Rd() == kZeroRegCode);
}
inline bool RnIsZROrSP(Instruction* instr) const {
return (instr->Rn() == kZeroRegCode);
}
inline bool RmIsZROrSP(Instruction* instr) const {
return (instr->Rm() == kZeroRegCode);
}
inline bool RaIsZROrSP(Instruction* instr) const {
return (instr->Ra() == kZeroRegCode);
}
bool IsMovzMovnImm(unsigned reg_size, uint64_t value);
void ResetOutput();
void AppendToOutput(const char* string, ...);
char* buffer_;
uint32_t buffer_pos_;
uint32_t buffer_size_;
bool own_buffer_;
};
class PrintDisassembler: public Disassembler {
public:
explicit PrintDisassembler(FILE* stream) : stream_(stream) { }
~PrintDisassembler() { }
protected:
virtual void ProcessOutput(Instruction* instr);
private:
FILE *stream_;
};
} // namespace vixl
#endif // VIXL_A64_DISASM_A64_H


@@ -1,238 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "a64/instructions-a64.h"
#include "a64/assembler-a64.h"
namespace vixl {
static uint64_t RotateRight(uint64_t value,
unsigned int rotate,
unsigned int width) {
ASSERT(width <= 64);
rotate &= 63;
return ((value & ((1UL << rotate) - 1UL)) << (width - rotate)) |
(value >> rotate);
}
static uint64_t RepeatBitsAcrossReg(unsigned reg_size,
uint64_t value,
unsigned width) {
ASSERT((width == 2) || (width == 4) || (width == 8) || (width == 16) ||
(width == 32));
ASSERT((reg_size == kWRegSize) || (reg_size == kXRegSize));
uint64_t result = value & ((1UL << width) - 1UL);
for (unsigned i = width; i < reg_size; i *= 2) {
result |= (result << i);
}
return result;
}
// Logical immediates can't encode zero, so a return value of zero is used to
// indicate a failure case. Specifically, where the constraints on imm_s are
// not met.
uint64_t Instruction::ImmLogical() {
unsigned reg_size = SixtyFourBits() ? kXRegSize : kWRegSize;
int64_t n = BitN();
int64_t imm_s = ImmSetBits();
int64_t imm_r = ImmRotate();
// An integer is constructed from the n, imm_s and imm_r bits according to
// the following table:
//
// N imms immr size S R
// 1 ssssss rrrrrr 64 UInt(ssssss) UInt(rrrrrr)
// 0 0sssss xrrrrr 32 UInt(sssss) UInt(rrrrr)
// 0 10ssss xxrrrr 16 UInt(ssss) UInt(rrrr)
// 0 110sss xxxrrr 8 UInt(sss) UInt(rrr)
// 0 1110ss xxxxrr 4 UInt(ss) UInt(rr)
// 0 11110s xxxxxr 2 UInt(s) UInt(r)
// (s bits must not be all set)
//
// A pattern is constructed of size bits, where the least significant S+1
// bits are set. The pattern is rotated right by R, and repeated across a
// 32 or 64-bit value, depending on destination register width.
//
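// Worked example (illustrative, not part of the original comment):
// n = 1, imm_s = 0b000011, imm_r = 0b000010 selects a 64-bit pattern with
// the low four bits set, rotated right by two:
//   bits = (1UL << (3 + 1)) - 1 = 0x000000000000000f
//   RotateRight(bits, 2, 64)    = 0xc000000000000003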
if (n == 1) {
if (imm_s == 0x3F) {
return 0;
}
uint64_t bits = (1UL << (imm_s + 1)) - 1;
return RotateRight(bits, imm_r, 64);
} else {
if ((imm_s >> 1) == 0x1F) {
return 0;
}
for (int width = 0x20; width >= 0x2; width >>= 1) {
if ((imm_s & width) == 0) {
int mask = width - 1;
if ((imm_s & mask) == mask) {
return 0;
}
uint64_t bits = (1UL << ((imm_s & mask) + 1)) - 1;
return RepeatBitsAcrossReg(reg_size,
RotateRight(bits, imm_r & mask, width),
width);
}
}
}
UNREACHABLE();
return 0;
}
float Instruction::ImmFP32() {
// ImmFP: abcdefgh (8 bits)
// Single: aBbb.bbbc.defg.h000.0000.0000.0000.0000 (32 bits)
// where B is b ^ 1
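// Worked example (illustrative): ImmFP() == 0x70 gives bit7 = 0, bit6 = 1,
// bit5_to_0 = 0x30, so result = (31 << 25) | (0x30 << 19) = 0x3f800000,
// i.e. 1.0f.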
uint32_t bits = ImmFP();
uint32_t bit7 = (bits >> 7) & 0x1;
uint32_t bit6 = (bits >> 6) & 0x1;
uint32_t bit5_to_0 = bits & 0x3f;
uint32_t result = (bit7 << 31) | ((32 - bit6) << 25) | (bit5_to_0 << 19);
return rawbits_to_float(result);
}
double Instruction::ImmFP64() {
// ImmFP: abcdefgh (8 bits)
// Double: aBbb.bbbb.bbcd.efgh.0000.0000.0000.0000
// 0000.0000.0000.0000.0000.0000.0000.0000 (64 bits)
// where B is b ^ 1
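// Worked example (illustrative): ImmFP() == 0x70 gives bit7 = 0, bit6 = 1,
// bit5_to_0 = 0x30, so result = (255 << 54) | (0x30 << 48)
// = 0x3ff0000000000000, i.e. 1.0.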
uint32_t bits = ImmFP();
uint64_t bit7 = (bits >> 7) & 0x1;
uint64_t bit6 = (bits >> 6) & 0x1;
uint64_t bit5_to_0 = bits & 0x3f;
uint64_t result = (bit7 << 63) | ((256 - bit6) << 54) | (bit5_to_0 << 48);
return rawbits_to_double(result);
}
LSDataSize CalcLSPairDataSize(LoadStorePairOp op) {
switch (op) {
case STP_x:
case LDP_x:
case STP_d:
case LDP_d: return LSDoubleWord;
default: return LSWord;
}
}
Instruction* Instruction::ImmPCOffsetTarget() {
ptrdiff_t offset;
if (IsPCRelAddressing()) {
// PC-relative addressing. Only ADR is supported.
offset = ImmPCRel();
} else {
// All PC-relative branches.
ASSERT(BranchType() != UnknownBranchType);
// Relative branch offsets are instruction-size-aligned.
offset = ImmBranch() << kInstructionSizeLog2;
}
return this + offset;
}
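// Illustrative example (not part of the source): for an unconditional B
// instruction with ImmUncondBranch() == 0x10, ImmBranch() returns 0x10 and
// the target is this + (0x10 << kInstructionSizeLog2), i.e. 64 bytes ahead.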
inline int Instruction::ImmBranch() const {
switch (BranchType()) {
case CondBranchType: return ImmCondBranch();
case UncondBranchType: return ImmUncondBranch();
case CompareBranchType: return ImmCmpBranch();
case TestBranchType: return ImmTestBranch();
default: UNREACHABLE();
}
return 0;
}
void Instruction::SetImmPCOffsetTarget(Instruction* target) {
if (IsPCRelAddressing()) {
SetPCRelImmTarget(target);
} else {
SetBranchImmTarget(target);
}
}
void Instruction::SetPCRelImmTarget(Instruction* target) {
// ADRP is not supported, so 'this' must point to an ADR instruction.
ASSERT(Mask(PCRelAddressingMask) == ADR);
Instr imm = Assembler::ImmPCRelAddress(target - this);
SetInstructionBits(Mask(~ImmPCRel_mask) | imm);
}
void Instruction::SetBranchImmTarget(Instruction* target) {
ASSERT(((target - this) & 3) == 0);
Instr branch_imm = 0;
uint32_t imm_mask = 0;
int offset = (target - this) >> kInstructionSizeLog2;
switch (BranchType()) {
case CondBranchType: {
branch_imm = Assembler::ImmCondBranch(offset);
imm_mask = ImmCondBranch_mask;
break;
}
case UncondBranchType: {
branch_imm = Assembler::ImmUncondBranch(offset);
imm_mask = ImmUncondBranch_mask;
break;
}
case CompareBranchType: {
branch_imm = Assembler::ImmCmpBranch(offset);
imm_mask = ImmCmpBranch_mask;
break;
}
case TestBranchType: {
branch_imm = Assembler::ImmTestBranch(offset);
imm_mask = ImmTestBranch_mask;
break;
}
default: UNREACHABLE();
}
SetInstructionBits(Mask(~imm_mask) | branch_imm);
}
void Instruction::SetImmLLiteral(Instruction* source) {
ASSERT(((source - this) & 3) == 0);
int offset = (source - this) >> kLiteralEntrySizeLog2;
Instr imm = Assembler::ImmLLiteral(offset);
Instr mask = ImmLLiteral_mask;
SetInstructionBits(Mask(~mask) | imm);
}
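// Illustrative example (not part of the source): patching an LDR (literal)
// to read from 'this + 16' stores an offset of
// 16 >> kLiteralEntrySizeLog2 == 4 in the ImmLLiteral field.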
} // namespace vixl


@@ -1,344 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef VIXL_A64_INSTRUCTIONS_A64_H_
#define VIXL_A64_INSTRUCTIONS_A64_H_
#include "globals.h"
#include "utils.h"
#include "a64/constants-a64.h"
namespace vixl {
// ISA constants. --------------------------------------------------------------
typedef uint32_t Instr;
const unsigned kInstructionSize = 4;
const unsigned kInstructionSizeLog2 = 2;
const unsigned kLiteralEntrySize = 4;
const unsigned kLiteralEntrySizeLog2 = 2;
const unsigned kMaxLoadLiteralRange = 1 * MBytes;
const unsigned kWRegSize = 32;
const unsigned kWRegSizeLog2 = 5;
const unsigned kWRegSizeInBytes = kWRegSize / 8;
const unsigned kXRegSize = 64;
const unsigned kXRegSizeLog2 = 6;
const unsigned kXRegSizeInBytes = kXRegSize / 8;
const unsigned kSRegSize = 32;
const unsigned kSRegSizeLog2 = 5;
const unsigned kSRegSizeInBytes = kSRegSize / 8;
const unsigned kDRegSize = 64;
const unsigned kDRegSizeLog2 = 6;
const unsigned kDRegSizeInBytes = kDRegSize / 8;
const int64_t kWRegMask = 0x00000000ffffffffLL;
const int64_t kXRegMask = 0xffffffffffffffffLL;
const int64_t kSRegMask = 0x00000000ffffffffLL;
const int64_t kDRegMask = 0xffffffffffffffffLL;
const int64_t kXSignMask = 0x1LL << 63;
const int64_t kWSignMask = 0x1LL << 31;
const int64_t kByteMask = 0xffL;
const int64_t kHalfWordMask = 0xffffL;
const int64_t kWordMask = 0xffffffffLL;
const uint64_t kXMaxUInt = 0xffffffffffffffffULL;
const uint64_t kWMaxUInt = 0xffffffffULL;
const int64_t kXMaxInt = 0x7fffffffffffffffLL;
const int64_t kXMinInt = 0x8000000000000000LL;
const int32_t kWMaxInt = 0x7fffffff;
const int32_t kWMinInt = 0x80000000;
const unsigned kLinkRegCode = 30;
const unsigned kZeroRegCode = 31;
const unsigned kSPRegInternalCode = 63;
const unsigned kRegCodeMask = 0x1f;
// AArch64 floating-point specifics. These match IEEE-754.
const unsigned kDoubleMantissaBits = 52;
const unsigned kDoubleExponentBits = 11;
const unsigned kFloatMantissaBits = 23;
const unsigned kFloatExponentBits = 8;
const float kFP32PositiveInfinity = rawbits_to_float(0x7f800000);
const float kFP32NegativeInfinity = rawbits_to_float(0xff800000);
const double kFP64PositiveInfinity = rawbits_to_double(0x7ff0000000000000ULL);
const double kFP64NegativeInfinity = rawbits_to_double(0xfff0000000000000ULL);
// This value is a signalling NaN as both a double and as a float (taking the
// least-significant word).
static const double kFP64SignallingNaN = rawbits_to_double(0x7ff000007f800001ULL);
static const float kFP32SignallingNaN = rawbits_to_float(0x7f800001);
// A similar value, but as a quiet NaN.
static const double kFP64QuietNaN = rawbits_to_double(0x7ff800007fc00001ULL);
static const float kFP32QuietNaN = rawbits_to_float(0x7fc00001);
enum LSDataSize {
LSByte = 0,
LSHalfword = 1,
LSWord = 2,
LSDoubleWord = 3
};
LSDataSize CalcLSPairDataSize(LoadStorePairOp op);
enum ImmBranchType {
UnknownBranchType = 0,
CondBranchType = 1,
UncondBranchType = 2,
CompareBranchType = 3,
TestBranchType = 4
};
enum AddrMode {
Offset,
PreIndex,
PostIndex
};
enum FPRounding {
// The first four values are encodable directly by FPCR<RMode>.
FPTieEven = 0x0,
FPPositiveInfinity = 0x1,
FPNegativeInfinity = 0x2,
FPZero = 0x3,
// The final rounding mode is only available when explicitly specified by the
// instruction (such as with fcvta). It cannot be set in FPCR.
FPTieAway
};
enum Reg31Mode {
Reg31IsStackPointer,
Reg31IsZeroRegister
};
// Instructions. ---------------------------------------------------------------
class Instruction {
public:
inline Instr InstructionBits() const {
return *(reinterpret_cast<const Instr*>(this));
}
inline void SetInstructionBits(Instr new_instr) {
*(reinterpret_cast<Instr*>(this)) = new_instr;
}
inline int Bit(int pos) const {
return (InstructionBits() >> pos) & 1;
}
inline uint32_t Bits(int msb, int lsb) const {
return unsigned_bitextract_32(msb, lsb, InstructionBits());
}
inline int32_t SignedBits(int msb, int lsb) const {
int32_t bits = *(reinterpret_cast<const int32_t*>(this));
return signed_bitextract_32(msb, lsb, bits);
}
inline Instr Mask(uint32_t mask) const {
return InstructionBits() & mask;
}
#define DEFINE_GETTER(Name, HighBit, LowBit, Func) \
inline int64_t Name() const { return Func(HighBit, LowBit); }
INSTRUCTION_FIELDS_LIST(DEFINE_GETTER)
#undef DEFINE_GETTER
// ImmPCRel is a compound field (not present in INSTRUCTION_FIELDS_LIST),
// formed from ImmPCRelLo and ImmPCRelHi.
int ImmPCRel() const {
int const offset = ((ImmPCRelHi() << ImmPCRelLo_width) | ImmPCRelLo());
int const width = ImmPCRelLo_width + ImmPCRelHi_width;
return signed_bitextract_32(width-1, 0, offset);
}
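// Illustrative example (not part of the source): for ADR, ImmPCRelLo is two
// bits wide, so ImmPCRelHi() == 0x3 and ImmPCRelLo() == 0x2 combine to 0xe,
// which sign-extends to a byte offset of +14 from the instruction.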
uint64_t ImmLogical();
float ImmFP32();
double ImmFP64();
inline LSDataSize SizeLSPair() const {
return CalcLSPairDataSize(
static_cast<LoadStorePairOp>(Mask(LoadStorePairMask)));
}
// Helpers.
inline bool IsCondBranchImm() const {
return Mask(ConditionalBranchFMask) == ConditionalBranchFixed;
}
inline bool IsUncondBranchImm() const {
return Mask(UnconditionalBranchFMask) == UnconditionalBranchFixed;
}
inline bool IsCompareBranch() const {
return Mask(CompareBranchFMask) == CompareBranchFixed;
}
inline bool IsTestBranch() const {
return Mask(TestBranchFMask) == TestBranchFixed;
}
inline bool IsPCRelAddressing() const {
return Mask(PCRelAddressingFMask) == PCRelAddressingFixed;
}
inline bool IsLogicalImmediate() const {
return Mask(LogicalImmediateFMask) == LogicalImmediateFixed;
}
inline bool IsAddSubImmediate() const {
return Mask(AddSubImmediateFMask) == AddSubImmediateFixed;
}
inline bool IsAddSubExtended() const {
return Mask(AddSubExtendedFMask) == AddSubExtendedFixed;
}
inline bool IsLoadOrStore() const {
return Mask(LoadStoreAnyFMask) == LoadStoreAnyFixed;
}
inline bool IsMovn() const {
return (Mask(MoveWideImmediateMask) == MOVN_x) ||
(Mask(MoveWideImmediateMask) == MOVN_w);
}
// Indicate whether Rd can be the stack pointer or the zero register. This
// does not check that the instruction actually has an Rd field.
inline Reg31Mode RdMode() const {
// The following instructions use sp or wsp as Rd:
// Add/sub (immediate) when not setting the flags.
// Add/sub (extended) when not setting the flags.
// Logical (immediate) when not setting the flags.
// Otherwise, r31 is the zero register.
if (IsAddSubImmediate() || IsAddSubExtended()) {
if (Mask(AddSubSetFlagsBit)) {
return Reg31IsZeroRegister;
} else {
return Reg31IsStackPointer;
}
}
if (IsLogicalImmediate()) {
// Of the logical (immediate) instructions, only ANDS (and its aliases)
// can set the flags. The others can all write into sp.
// Note that some logical operations are not available to
// immediate-operand instructions, so we have to combine two masks here.
if (Mask(LogicalImmediateMask & LogicalOpMask) == ANDS) {
return Reg31IsZeroRegister;
} else {
return Reg31IsStackPointer;
}
}
return Reg31IsZeroRegister;
}
// Indicate whether Rn can be the stack pointer or the zero register. This
// does not check that the instruction actually has an Rn field.
inline Reg31Mode RnMode() const {
// The following instructions use sp or wsp as Rn:
// All loads and stores.
// Add/sub (immediate).
// Add/sub (extended).
// Otherwise, r31 is the zero register.
if (IsLoadOrStore() || IsAddSubImmediate() || IsAddSubExtended()) {
return Reg31IsStackPointer;
}
return Reg31IsZeroRegister;
}
inline ImmBranchType BranchType() const {
if (IsCondBranchImm()) {
return CondBranchType;
} else if (IsUncondBranchImm()) {
return UncondBranchType;
} else if (IsCompareBranch()) {
return CompareBranchType;
} else if (IsTestBranch()) {
return TestBranchType;
} else {
return UnknownBranchType;
}
}
// Find the target of this instruction. 'this' may be a branch or a
// PC-relative addressing instruction.
Instruction* ImmPCOffsetTarget();
// Patch a PC-relative offset to refer to 'target'. 'this' may be a branch or
// a PC-relative addressing instruction.
void SetImmPCOffsetTarget(Instruction* target);
// Patch a literal load instruction to load from 'source'.
void SetImmLLiteral(Instruction* source);
inline uint8_t* LiteralAddress() {
int offset = ImmLLiteral() << kLiteralEntrySizeLog2;
return reinterpret_cast<uint8_t*>(this) + offset;
}
inline uint32_t Literal32() {
uint32_t literal;
memcpy(&literal, LiteralAddress(), sizeof(literal));
return literal;
}
inline uint64_t Literal64() {
uint64_t literal;
memcpy(&literal, LiteralAddress(), sizeof(literal));
return literal;
}
inline float LiteralFP32() {
return rawbits_to_float(Literal32());
}
inline double LiteralFP64() {
return rawbits_to_double(Literal64());
}
inline Instruction* NextInstruction() {
return this + kInstructionSize;
}
inline Instruction* InstructionAtOffset(int64_t offset) {
ASSERT(IsWordAligned(this + offset));
return this + offset;
}
template<typename T> static inline Instruction* Cast(T src) {
return reinterpret_cast<Instruction*>(src);
}
private:
inline int ImmBranch() const;
void SetPCRelImmTarget(Instruction* target);
void SetBranchImmTarget(Instruction* target);
};
} // namespace vixl
#endif // VIXL_A64_INSTRUCTIONS_A64_H_


@@ -1,65 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef VIXL_GLOBALS_H
#define VIXL_GLOBALS_H
// Get the standard printf format macros for C99 stdint types.
#define __STDC_FORMAT_MACROS
#include <inttypes.h>
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <stddef.h>
#include "platform.h"
typedef uint8_t byte;
const int KBytes = 1024;
const int MBytes = 1024 * KBytes;
#define ABORT() printf("in %s, line %i", __FILE__, __LINE__); abort()
#ifdef DEBUG
#define ASSERT(condition) assert(condition)
#define CHECK(condition) ASSERT(condition)
#define UNIMPLEMENTED() printf("UNIMPLEMENTED\t"); ABORT()
#define UNREACHABLE() printf("UNREACHABLE\t"); ABORT()
#else
#define ASSERT(condition) ((void) 0)
#define CHECK(condition) assert(condition)
#define UNIMPLEMENTED() ((void) 0)
#define UNREACHABLE() ((void) 0)
#endif
template <typename T> inline void USE(T) {}
#define ALIGNMENT_EXCEPTION() printf("ALIGNMENT EXCEPTION\t"); ABORT()
#endif // VIXL_GLOBALS_H


@@ -1,43 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef PLATFORM_H
#define PLATFORM_H
// Define platform specific functionalities.
namespace vixl {
#ifdef USE_SIMULATOR
// Currently we assume running the simulator implies running on x86 hardware.
inline void HostBreakpoint() { asm("int3"); }
#else
inline void HostBreakpoint() {
// TODO: Implement HostBreakpoint on a64.
}
#endif
} // namespace vixl
#endif


@@ -1,126 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "utils.h"
#include <stdio.h>
namespace vixl {
uint32_t float_to_rawbits(float value) {
uint32_t bits = 0;
memcpy(&bits, &value, 4);
return bits;
}
uint64_t double_to_rawbits(double value) {
uint64_t bits = 0;
memcpy(&bits, &value, 8);
return bits;
}
float rawbits_to_float(uint32_t bits) {
float value = 0.0;
memcpy(&value, &bits, 4);
return value;
}
double rawbits_to_double(uint64_t bits) {
double value = 0.0;
memcpy(&value, &bits, 8);
return value;
}
int CountLeadingZeros(uint64_t value, int width) {
ASSERT((width == 32) || (width == 64));
int count = 0;
uint64_t bit_test = static_cast<uint64_t>(1) << (width - 1);
while ((count < width) && ((bit_test & value) == 0)) {
count++;
bit_test >>= 1;
}
return count;
}
int CountLeadingSignBits(int64_t value, int width) {
ASSERT((width == 32) || (width == 64));
if (value >= 0) {
return CountLeadingZeros(value, width) - 1;
} else {
return CountLeadingZeros(~value, width) - 1;
}
}
int CountTrailingZeros(uint64_t value, int width) {
ASSERT((width == 32) || (width == 64));
int count = 0;
while ((count < width) && (((value >> count) & 1) == 0)) {
count++;
}
return count;
}
int CountSetBits(uint64_t value, int width) {
// TODO: Other widths could be added here, as the implementation already
// supports them.
ASSERT((width == 32) || (width == 64));
// Mask out unused bits to ensure that they are not counted.
value &= (0xffffffffffffffffULL >> (64-width));
// Add up the set bits.
// The algorithm works by adding pairs of bit fields together iteratively,
// where the size of each bit field doubles each time.
// An example for an 8-bit value:
// Bits: h g f e d c b a
// \ | \ | \ | \ |
// value = h+g f+e d+c b+a
// \ | \ |
// value = h+g+f+e d+c+b+a
// \ |
// value = h+g+f+e+d+c+b+a
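// Worked example (illustrative, 8-bit): for 0b10110100 the pair sums are
// 1, 2, 1, 0, then 3 and 1, and finally 4 -- i.e. four bits are set.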
value = ((value >> 1) & 0x5555555555555555ULL) +
(value & 0x5555555555555555ULL);
value = ((value >> 2) & 0x3333333333333333ULL) +
(value & 0x3333333333333333ULL);
value = ((value >> 4) & 0x0f0f0f0f0f0f0f0fULL) +
(value & 0x0f0f0f0f0f0f0f0fULL);
value = ((value >> 8) & 0x00ff00ff00ff00ffULL) +
(value & 0x00ff00ff00ff00ffULL);
value = ((value >> 16) & 0x0000ffff0000ffffULL) +
(value & 0x0000ffff0000ffffULL);
value = ((value >> 32) & 0x00000000ffffffffULL) +
(value & 0x00000000ffffffffULL);
return value;
}
} // namespace vixl


@@ -1,126 +0,0 @@
// Copyright 2013, ARM Limited
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of ARM Limited nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef VIXL_UTILS_H
#define VIXL_UTILS_H
#include <string.h>
#include "globals.h"
namespace vixl {
// Check number width.
inline bool is_intn(unsigned n, int64_t x) {
ASSERT((0 < n) && (n < 64));
int64_t limit = 1ULL << (n - 1);
return (-limit <= x) && (x < limit);
}
inline bool is_uintn(unsigned n, int64_t x) {
ASSERT((0 < n) && (n < 64));
return !(x >> n);
}
inline unsigned truncate_to_intn(unsigned n, int64_t x) {
ASSERT((0 < n) && (n < 64));
return (x & ((1ULL << n) - 1));
}
#define INT_1_TO_63_LIST(V) \
V(1) V(2) V(3) V(4) V(5) V(6) V(7) V(8) \
V(9) V(10) V(11) V(12) V(13) V(14) V(15) V(16) \
V(17) V(18) V(19) V(20) V(21) V(22) V(23) V(24) \
V(25) V(26) V(27) V(28) V(29) V(30) V(31) V(32) \
V(33) V(34) V(35) V(36) V(37) V(38) V(39) V(40) \
V(41) V(42) V(43) V(44) V(45) V(46) V(47) V(48) \
V(49) V(50) V(51) V(52) V(53) V(54) V(55) V(56) \
V(57) V(58) V(59) V(60) V(61) V(62) V(63)
#define DECLARE_IS_INT_N(N) \
inline bool is_int##N(int64_t x) { return is_intn(N, x); }
#define DECLARE_IS_UINT_N(N) \
inline bool is_uint##N(int64_t x) { return is_uintn(N, x); }
#define DECLARE_TRUNCATE_TO_INT_N(N) \
inline int truncate_to_int##N(int x) { return truncate_to_intn(N, x); }
INT_1_TO_63_LIST(DECLARE_IS_INT_N)
INT_1_TO_63_LIST(DECLARE_IS_UINT_N)
INT_1_TO_63_LIST(DECLARE_TRUNCATE_TO_INT_N)
#undef DECLARE_IS_INT_N
#undef DECLARE_IS_UINT_N
#undef DECLARE_TRUNCATE_TO_INT_N
// Bit field extraction.
inline uint32_t unsigned_bitextract_32(int msb, int lsb, uint32_t x) {
return (x >> lsb) & ((1 << (1 + msb - lsb)) - 1);
}
inline uint64_t unsigned_bitextract_64(int msb, int lsb, uint64_t x) {
return (x >> lsb) & ((static_cast<uint64_t>(1) << (1 + msb - lsb)) - 1);
}
inline int32_t signed_bitextract_32(int msb, int lsb, int32_t x) {
return (x << (31 - msb)) >> (lsb + 31 - msb);
}
inline int64_t signed_bitextract_64(int msb, int lsb, int64_t x) {
return (x << (63 - msb)) >> (lsb + 63 - msb);
}
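// Illustrative example (assuming the usual arithmetic right shift on signed
// values): signed_bitextract_32(3, 0, 0xf) yields -1, because bit 3 acts as the
// sign bit of the extracted 4-bit field.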
// floating point representation
uint32_t float_to_rawbits(float value);
uint64_t double_to_rawbits(double value);
float rawbits_to_float(uint32_t bits);
double rawbits_to_double(uint64_t bits);
// Bits counting.
int CountLeadingZeros(uint64_t value, int width);
int CountLeadingSignBits(int64_t value, int width);
int CountTrailingZeros(uint64_t value, int width);
int CountSetBits(uint64_t value, int width);
// Pointer alignment
// TODO: rename/refactor to make it specific to instructions.
template<typename T>
bool IsWordAligned(T pointer) {
ASSERT(sizeof(pointer) == sizeof(intptr_t)); // NOLINT(runtime/sizeof)
return (reinterpret_cast<intptr_t>(pointer) & 3) == 0;
}
// Increment a pointer until it has the specified alignment.
template<class T>
T AlignUp(T pointer, size_t alignment) {
ASSERT(sizeof(pointer) == sizeof(uintptr_t));
uintptr_t pointer_raw = reinterpret_cast<uintptr_t>(pointer);
size_t align_step = (alignment - pointer_raw) % alignment;
ASSERT((pointer_raw + align_step) % alignment == 0);
return reinterpret_cast<T>(pointer_raw + align_step);
}
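// Illustrative example (hypothetical addresses): aligning a pointer at 0x1003 up
// to 8 bytes yields 0x1008; a pointer that is already 8-byte aligned is returned
// unchanged.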
} // namespace vixl
#endif // VIXL_UTILS_H


@@ -123,12 +123,11 @@ And it looks like this on the wire:
Flat union types avoid the nesting on the wire. They are used whenever a
specific field of the base type is declared as the discriminator ('type' is
then no longer generated). The discriminator must be of enumeration type.
then no longer generated). The discriminator must always be a string field.
The above example can then be modified as follows:
{ 'enum': 'BlockdevDriver', 'data': [ 'raw', 'qcow2' ] }
{ 'type': 'BlockdevCommonOptions',
'data': { 'driver': 'BlockdevDriver', 'readonly': 'bool' } }
'data': { 'driver': 'str', 'readonly': 'bool' } }
{ 'union': 'BlockdevOptions',
'base': 'BlockdevCommonOptions',
'discriminator': 'driver',
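(As an illustrative sketch only, assuming the schema fragment above: a flat
union instance would then appear on the wire without nesting, e.g.
{ "driver": "raw", "readonly": false }, with any driver-specific fields placed
at the same top level.)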


@@ -11,92 +11,99 @@
; (Com+Lpt)" from the list. Click "Have a disk". Select this file.
; Procedure may vary a bit depending on the windows version.
; This file covers all options: pci-serial, pci-serial-2x, pci-serial-4x
; for both 32 and 64 bit platforms.
; FIXME: This file covers the single port version only.
[Version]
Signature="$Windows NT$"
Class=MultiFunction
ClassGUID={4d36e971-e325-11ce-bfc1-08002be10318}
Signature="$CHICAGO$"
Class=Ports
ClassGuid={4D36E978-E325-11CE-BFC1-08002BE10318}
Provider=%QEMU%
DriverVer=12/29/2013,1.3.0
[ControlFlags]
ExcludeFromSelect=*
DriverVer=09/24/2012,1.3.0
[SourceDisksNames]
3426=windows cd
[SourceDisksFiles]
serial.sys = 3426
serenum.sys = 3426
[DestinationDirs]
DefaultDestDir = 11 ;LDID_SYS
ComPort.NT.Copy = 12 ;DIRID_DRIVERS
SerialEnumerator.NT.Copy=12 ;DIRID_DRIVERS
; Drivers
;----------------------------------------------------------
[Manufacturer]
%QEMU%=QEMU,NTx86,NTAMD64
%QEMU%=QEMU,NTx86
[QEMU.NTx86]
%QEMU-PCI_SERIAL_1_PORT%=ComPort_inst1, PCI\VEN_1B36&DEV_0002
%QEMU-PCI_SERIAL_2_PORT%=ComPort_inst2, PCI\VEN_1B36&DEV_0003
%QEMU-PCI_SERIAL_4_PORT%=ComPort_inst4, PCI\VEN_1B36&DEV_0004
%QEMU-PCI_SERIAL.DeviceDesc% = ComPort, "PCI\VEN_1b36&DEV_0002&CC_0700"
[QEMU.NTAMD64]
%QEMU-PCI_SERIAL_1_PORT%=ComPort_inst1, PCI\VEN_1B36&DEV_0002
%QEMU-PCI_SERIAL_2_PORT%=ComPort_inst2, PCI\VEN_1B36&DEV_0003
%QEMU-PCI_SERIAL_4_PORT%=ComPort_inst4, PCI\VEN_1B36&DEV_0004
; COM sections
;----------------------------------------------------------
[ComPort.AddReg]
HKR,,PortSubClass,1,01
[ComPort_inst1]
Include=mf.inf
Needs=MFINSTALL.mf
[ComPort.NT]
AddReg=ComPort.AddReg, ComPort.NT.AddReg
LogConfig=caa
SyssetupPnPFlags = 1
[ComPort_inst2]
Include=mf.inf
Needs=MFINSTALL.mf
[ComPort.NT.HW]
AddReg=ComPort.NT.HW.AddReg
[ComPort_inst4]
Include=mf.inf
Needs=MFINSTALL.mf
[ComPort.NT.AddReg]
HKR,,EnumPropPages32,,"MsPorts.dll,SerialPortPropPageProvider"
[ComPort_inst1.HW]
AddReg=ComPort_inst1.RegHW
[ComPort.NT.HW.AddReg]
HKR,,"UpperFilters",0x00010000,"serenum"
[ComPort_inst2.HW]
AddReg=ComPort_inst2.RegHW
;-------------- Service installation
; Port Driver (function driver for this device)
[ComPort.NT.Services]
AddService = Serial, 0x00000002, Serial_Service_Inst, Serial_EventLog_Inst
AddService = Serenum,,Serenum_Service_Inst
[ComPort_inst4.HW]
AddReg=ComPort_inst4.RegHW
; -------------- Serial Port Driver install sections
[Serial_Service_Inst]
DisplayName = %Serial.SVCDESC%
ServiceType = 1 ; SERVICE_KERNEL_DRIVER
StartType = 1 ; SERVICE_SYSTEM_START (this driver may do detection)
ErrorControl = 0 ; SERVICE_ERROR_IGNORE
ServiceBinary = %12%\serial.sys
LoadOrderGroup = Extended base
[ComPort_inst1.Services]
Include=mf.inf
Needs=MFINSTALL.mf.Services
; -------------- Serenum Driver install section
[Serenum_Service_Inst]
DisplayName = %Serenum.SVCDESC%
ServiceType = 1 ; SERVICE_KERNEL_DRIVER
StartType = 3 ; SERVICE_DEMAND_START
ErrorControl = 1 ; SERVICE_ERROR_NORMAL
ServiceBinary = %12%\serenum.sys
LoadOrderGroup = PNP Filter
[ComPort_inst2.Services]
Include=mf.inf
Needs=MFINSTALL.mf.Services
[Serial_EventLog_Inst]
AddReg = Serial_EventLog_AddReg
[ComPort_inst4.Services]
Include=mf.inf
Needs=MFINSTALL.mf.Services
[Serial_EventLog_AddReg]
HKR,,EventMessageFile,0x00020000,"%%SystemRoot%%\System32\IoLogMsg.dll;%%SystemRoot%%\System32\drivers\serial.sys"
HKR,,TypesSupported,0x00010001,7
[ComPort_inst1.RegHW]
HKR,Child0000,HardwareID,,*PNP0501
HKR,Child0000,VaryingResourceMap,1,00, 00,00,00,00, 08,00,00,00
HKR,Child0000,ResourceMap,1,02
; The following sections are COM port resource configs.
; Section name format means:
; Char 1 = c (COM port)
; Char 2 = I/O config: 1 (3f8), 2 (2f8), 3 (3e8), 4 (2e8), a (any)
; Char 3 = IRQ config: #, a (any)
[ComPort_inst2.RegHW]
HKR,Child0000,HardwareID,,*PNP0501
HKR,Child0000,VaryingResourceMap,1,00, 00,00,00,00, 08,00,00,00
HKR,Child0000,ResourceMap,1,02
HKR,Child0001,HardwareID,,*PNP0501
HKR,Child0001,VaryingResourceMap,1,00, 08,00,00,00, 08,00,00,00
HKR,Child0001,ResourceMap,1,02
[ComPort_inst4.RegHW]
HKR,Child0000,HardwareID,,*PNP0501
HKR,Child0000,VaryingResourceMap,1,00, 00,00,00,00, 08,00,00,00
HKR,Child0000,ResourceMap,1,02
HKR,Child0001,HardwareID,,*PNP0501
HKR,Child0001,VaryingResourceMap,1,00, 08,00,00,00, 08,00,00,00
HKR,Child0001,ResourceMap,1,02
HKR,Child0002,HardwareID,,*PNP0501
HKR,Child0002,VaryingResourceMap,1,00, 10,00,00,00, 08,00,00,00
HKR,Child0002,ResourceMap,1,02
HKR,Child0003,HardwareID,,*PNP0501
HKR,Child0003,VaryingResourceMap,1,00, 18,00,00,00, 08,00,00,00
HKR,Child0003,ResourceMap,1,02
[caa] ; Any base, any IRQ
ConfigPriority=HARDRECONFIG
IOConfig=8@100-ffff%fff8(3ff::)
IRQConfig=S:3,4,5,7,9,10,11,12,14,15
[Strings]
QEMU="QEMU"
QEMU-PCI_SERIAL_1_PORT="1x QEMU PCI Serial Card"
QEMU-PCI_SERIAL_2_PORT="2x QEMU PCI Serial Card"
QEMU-PCI_SERIAL_4_PORT="4x QEMU PCI Serial Card"
QEMU-PCI_SERIAL.DeviceDesc="QEMU Serial PCI Card"
Serial.SVCDESC = "Serial port driver"
Serenum.SVCDESC = "Serenum Filter Driver"


@@ -225,45 +225,6 @@ Data:
"timestamp": { "seconds": 1368697518, "microseconds": 326866 } }
}
QUORUM_FAILURE
--------------
Emitted by the Quorum block driver if it fails to establish a quorum.
Data:
- "reference": device name if defined else node name.
- "sector-num": Number of the first sector of the failed read operation.
- "sector-count": Failed read operation sector count.
Example:
{ "event": "QUORUM_FAILURE",
"data": { "reference": "usr1", "sector-num": 345435, "sector-count": 5 },
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
QUORUM_REPORT_BAD
-----------------
Emitted to report a corruption of a Quorum file.
Data:
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that the
block layer reported an error and clients should not try to
interpret the error string.
- "node-name": The graph node name of the block driver state.
- "sector-num": Number of the first sector of the failed read operation.
- "sector-count": Failed read operation sector count.
Example:
{ "event": "QUORUM_REPORT_BAD",
"data": { "node-name": "1.raw", "sector-num": 345435, "sector-count": 5 },
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
RESET
-----
@@ -518,7 +479,7 @@ Data: None.
Example:
{ "event": "WAKEUP",
{ "event": "WATCHDOG",
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
WATCHDOG

Some files were not shown because too many files have changed in this diff.