Compare commits

..

46 Commits

Author SHA1 Message Date
Bruce Rogers
36f5bc4fdb This is the delta between the qemu and qemu-kvm v0.15.1 versions
Signed-off-by: Bruce Rogers <brogers@suse.com>
2018-02-09 11:41:11 -07:00
Justin M. Forbes
82b2b32a32 Version 0.15.1
Signed-off-by: Justin M. Forbes <jforbes@redhat.com>
2011-10-11 09:46:03 -05:00
Stefan Hajnoczi
4a81ab81e4 qed: fix use-after-free during l2 cache commit
QED's metadata caching strategy allows two parallel requests to race for
metadata lookup.  The first one to complete will populate the metadata
cache and the second one will drop the data it just read in favor of the
cached data.

There is a use-after-free in qed_read_l2_table_cb() and
qed_commit_l2_update() where l2_table->offset was used after the
l2_table may have been freed due to a metadata lookup race.  Fix this by
keeping the l2_offset in a local variable and not reaching into the
possibly freed l2_table.

Reported-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-10-05 11:33:31 -05:00
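
The pattern behind this fix can be sketched in a few lines of C (toy
structures, not the actual QED code): copy the field you still need into a
local before the call that may free its container.

#include <stdint.h>
#include <stdlib.h>

/* Toy stand-in for a cached L2 table; illustrative only. */
typedef struct L2Table {
    uint64_t offset;
} L2Table;

/* May free 'table', e.g. when a parallel lookup already populated the cache. */
static void cache_commit(L2Table *table)
{
    free(table);
}

static void commit_l2_update(L2Table *l2_table)
{
    uint64_t l2_offset = l2_table->offset;  /* copy before the table can go away */

    cache_commit(l2_table);                 /* l2_table must not be used after this */

    /* ... keep working with l2_offset, never l2_table->offset ... */
    (void)l2_offset;
}

int main(void)
{
    L2Table *t = malloc(sizeof(*t));
    t->offset = 0x10000;
    commit_l2_update(t);
    return 0;
}
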
Jan Kiszka
68e3508eaf sdl: Fix termination in -no-shutdown mode
Just like the monitor does, we need to clear no_shutdown before calling
qemu_system_shutdown_request on quit requests. Otherwise, QEMU just
stops the VM.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-10-03 14:42:24 -05:00
Kevin Wolf
fb524042db Fix termination by signal with -no-shutdown
On signals such as SIGTERM, QEMU should exit instead of just stopping the VM,
even with -no-shutdown.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-10-03 14:42:16 -05:00
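
The two -no-shutdown fixes above share one pattern; a self-contained sketch
with simplified flags and a stub request function (not the real QEMU main
loop): clear no_shutdown first, then request the shutdown.

#include <stdio.h>

static int no_shutdown = 1;         /* 1 when started with -no-shutdown */
static int shutdown_requested;      /* polled by the main loop          */

static void qemu_system_shutdown_request(void)
{
    shutdown_requested = 1;
}

/* Called on an SDL quit event, a monitor "quit", or SIGTERM/SIGINT. */
static void handle_quit(void)
{
    no_shutdown = 0;                /* clear first, or the VM is merely stopped */
    qemu_system_shutdown_request(); /* now the main loop really exits           */
}

int main(void)
{
    handle_quit();
    if (shutdown_requested && !no_shutdown) {
        printf("exiting\n");        /* what the main loop does in this case */
    }
    return 0;
}
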
Brad
336398391a Add support for finding libpng via pkg-config.
Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-10-03 14:41:44 -05:00
Brad
672aefeb5e Check for presence of compiler -pthread flag.
OpenBSD, FreeBSD and some other OSes require the use of
cc -pthread to link threaded programs, so have QEMU's
configure script check for the presence of the flag
and use it if available.

Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-03 14:41:23 -05:00
Brad
6a10ccca80 Allow overriding the location of Samba's smbd.

Pretty much every OS I look at has some means of
changing this path (patching), so let's just make
it easier for OS developers creating packages
and/or end users to override the location.

Signed-off-by: Brad Smith <brad@comstyle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-03 14:40:52 -05:00
Gerd Hoffmann
7095e71576 Fix linker scripts
Remove PROVIDE_HIDDEN and ONLY_IF_{RO,RW} from linker scripts to make
them work with older binutils versions.  Fixes *-bsd-user build on
OpenBSD 4.9 which ships binutils 2.15.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-03 14:40:25 -05:00
Brad
91b31d6158 Fix install(1) usage to be compatible with OpenBSD's install(1).

When creating a directory via the -d flag, the -p flag cannot be
used at the same time. Also, in the context of installing QEMU it
doesn't make sense to use the -p flag anyway, so use the [default]
-c flag instead.

Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-10-03 14:39:58 -05:00
Jan Kiszka
b89f4a7d2a Fix qjson test of solidus encoding
"\/" is supposed to be decoded as "/", but there is no need to encode
"/" via escape. Fix the existing test and add a second one expressing
this.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Acked-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
2011-10-03 14:38:17 -05:00
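
A sketch of the expectation in the spirit of check-qjson, assuming the
0.15-era qobject_from_json()/qobject_to_json() calls (treat the exact usage
as illustrative):

#include <assert.h>
#include <string.h>
#include "qjson.h"      /* qobject_from_json(), qobject_to_json() */
#include "qstring.h"    /* qstring_get_str()                      */

static void escaped_solidus_test(void)
{
    /* "\/" must decode to "/" ... */
    QObject *obj = qobject_from_json("\"x\\/y\"");
    QString *str = qobject_to_json(obj);

    /* ... and "/" must not be re-escaped when encoding. */
    assert(strcmp(qstring_get_str(str), "\"x/y\"") == 0);
}
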
Luiz Capitulino
fbdd7c8bd5 configure: Copy test data to build directory
The QDict unit tests (check-qdict) fail when run in a different
build directory; that is, they only work when run in the source dir.

This happens because its data file (qdict-test-data.txt) is not
copied to the build dir. Fix it.

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>

Conflicts:

	configure
2011-10-03 14:37:44 -05:00
Jamie Iles
e19a4e89ae monitor: fix build breakage for !CONFIG_VNC
Commit c62f6d1 (monitor: fix build breakage with --disable-vnc)
conditionalised some VNC setup code but left an unused variable.  Move
the variable into the conditional code to fix the build breakage.

Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
2011-08-31 15:09:31 -05:00
TeLeMan
ff5acedd8f monitor: fix build breakage with --disable-vnc
The breakage was introduced by commit 1366108981.

Signed-off-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-31 15:09:25 -05:00
Brad
2af86a2ff7 Fix forcing multicast msgs to loopback on OpenBSD.
e.g.
$ sudo qemu -m 128 -no-fd-bootchk \
        -hda virtual.img -boot n -nographic \
        -net nic,vlan=0,model=rtl8139,macaddr=52:54:00:12:34:03 \
        -net user -tftp /usr/src/sys/arch/i386/compile/TEST -bootp pxeboot \
        -net nic,vlan=1,model=rtl8139,macaddr=52:54:00:23:03:01 \
        -net tap,vlan=1,script=no \
        -net nic,vlan=3,model=rtl8139,macaddr=52:54:00:23:03:03 \
        -net socket,vlan=3,mcast=230.0.0.1:10003
setsockopt(SOL_IP, IP_MULTICAST_LOOP): Invalid argument
qemu: -net socket,vlan=3,mcast=230.0.0.1:10003: Device 'socket' could not be initialized

Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-08-31 15:08:26 -05:00
Justin M. Forbes
e62ad8314a Merge branch 'stable-0.15' of git://git.qemu.org/qemu 2011-08-14 10:55:05 -05:00
Anthony Liguori
76e4e1d237 Update version to 0.15.0
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-08 13:27:32 -05:00
Kevin Wolf
4fbe5233fd qcow2: Fix L1 table size after bdrv_snapshot_goto
When loading an internal snapshot whose L1 table is smaller than the current L1
table, the size of the current L1 would be shrunk to the snapshot's L1 size in
memory, but not on disk. This led to incorrect refcount updates and eventually
to image corruption.

Instead of writing the new L1 size to disk, this simply retains the bigger L1
size that is currently in use and makes sure that the unused part is zeroed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Philipp Hahn <hahn@univention.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 35d7ace74b)
2011-08-05 07:25:45 -05:00
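
A rough sketch of the idea with hypothetical field names (not the real qcow2
state): keep the larger active L1 size and zero the unused tail instead of
shrinking the table only in memory.

#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t *l1_table;   /* active L1 table with l1_size entries */
    int       l1_size;
} QcowStateSketch;

/* Activate a snapshot whose (already loaded) L1 table has sn_l1_size entries. */
static void snapshot_goto_l1(QcowStateSketch *s,
                             const uint64_t *sn_l1, int sn_l1_size)
{
    memcpy(s->l1_table, sn_l1, sn_l1_size * sizeof(uint64_t));

    if (sn_l1_size < s->l1_size) {
        /* Keep the bigger size that is recorded on disk and zero the
         * now-unused entries, so refcount updates stay consistent with
         * what the image header claims. */
        memset(s->l1_table + sn_l1_size, 0,
               (s->l1_size - sn_l1_size) * sizeof(uint64_t));
    }
}
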
Justin M. Forbes
4bea41dbaa Merge branch 'stable-0.15' of git://git.qemu.org/qemu 2011-08-04 16:40:07 -05:00
Anthony Liguori
e2f775205a Revert "floppy: save and restore DIR register"
This reverts commit 7d905f716b.

The use of subsections by this commit is broken because of a fundamental
limitation of subsections in the current protocol.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-04 16:19:04 -05:00
Richard Henderson
51dd7a94c7 alpha-softmmu: Disable for the 0.15 release branch.
The system emulation code was not merged before the branch.
Let's leave that work for the next release.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-04 16:19:04 -05:00
Wolfgang Mauerer
9096de69ff vhost build fix for i386
vhost.c uses __sync_fetch_and_and(), which is only
available for -march=i486 and above (see
https://bugzilla.redhat.com/show_bug.cgi?id=624279).

Signed-off-by: Wolfgang Mauerer <wolfgang.mauerer@siemens.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit 023367e6cd)
2011-08-04 16:19:04 -05:00
Michael Roth
09afeef1ab guest agent: add --enable-guest-agent config option
QAPI will require glib/python, but for now the guest agent is the only
user. So make these dependencies explicit guest agent ones, and give
users the option to disable the agent if need be.

Once QAPI is adopted in core QEMU code, we would basically revert this
patch.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-04 16:19:04 -05:00
Peter Maydell
01825a8ddf user: Restore debug usage message for '-d ?' in user mode emulation
The code which prints the debug usage message on '-d ?' for *-user
has to come before the check for "not enough arguments", so that
"qemu-foo -d ?" prints the list of possible debug log items rather than
the generic usage message. (This was inadvertently broken in commit
c235d73.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-08-04 16:19:04 -05:00
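
A toy sketch of the ordering point (hypothetical helpers, not the real *-user
option parser): the '-d ?' case must be handled before the generic argument
check, or the user only ever sees the usage text.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void usage(void)
{
    printf("usage: qemu-foo [options] program [arguments...]\n");
    exit(1);
}

static void print_d_usage(void)
{
    printf("Log items (comma separated):\n  in_asm  out_asm  op  cpu  exec ...\n");
    exit(0);
}

int main(int argc, char **argv)
{
    /* Handle "-d ?" before the "not enough arguments" check, so that
     * "qemu-foo -d ?" lists the log items instead of printing usage(). */
    if (argc >= 3 && !strcmp(argv[1], "-d") && !strcmp(argv[2], "?")) {
        print_d_usage();
    }

    if (argc < 2) {
        usage();                    /* generic usage, only after the -d ? case */
    }

    /* ... real option parsing would continue here ... */
    return 0;
}
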
Michael Walle
ae2dd33693 lm32: softusb: claim to support full speed
The QEMU keyboard and mouse report themselves as full speed devices,
though they are actually low speed devices. Until this is fixed, claim that
we support full speed devices.

Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2011-08-04 01:25:39 +02:00
Peter Maydell
a80f53aee3 user: Restore debug usage message for '-d ?' in user mode emulation
The code which prints the debug usage message on '-d ?' for *-user
has to come before the check for "not enough arguments", so that
"qemu-foo -d ?" prints the list of possible debug log items rather than
the generic usage message. (This was inadvertently broken in commit
c235d73.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-08-02 14:38:17 -05:00
Michael Roth
88ca9f047b Makefile: add missing deps on $(GENERATED_HEADERS)
This fixes a build issue with make -j6+ due to qapi-generated files
being built before $(GENERATED_HEADERS) have been created.

Tested-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-07-31 15:56:52 -05:00
Anthony Liguori
898517b0bc Update version to 0.15.0-rc2 2011-07-31 15:38:11 -05:00
Anthony Liguori
9dc9f2b820 Bump version to 0.15.0-rc1
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-07-29 17:14:11 -05:00
Justin M. Forbes
ef942b795a Merge branch 'for-upstream-0.15' of git://git.linaro.org/people/pmaydell/qemu-arm 2011-07-29 10:14:01 -05:00
Amit Shah
868aa386b8 virtio-balloon: Unregister savevm section on device unplug
Migrating after unplugging a virtio-balloon device resulted in an error
message on the destination:

Unknown savevm section or instance '0000:00:04.0/virtio-balloon' 0
load of migration failed

Fix this by unregistering the section on device unplug.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:39 +05:30
Amit Shah
7e10be8c74 virtio-balloon: Add exit handler, fix memleaks
Add an exit handler that will free up RAM after a virtio-balloon device
is unplugged.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:33 +05:30
Amit Shah
9843621e3b balloon: Reject negative balloon values
Negative balloon values don't make sense; reject them and throw a qerror
with QERR_INVALID_PARAMETER_VALUE.

Reported-by: Mike Cao <bcao@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:27 +05:30
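
A hedged sketch of the check, assuming the 0.15-era qerror_report() and
QERR_INVALID_PARAMETER_VALUE usage:

#include <stdint.h>
#include "qerror.h"     /* qerror_report(), QERR_INVALID_PARAMETER_VALUE */

/* 'target' is the requested balloon size coming from the monitor. */
static int balloon_set_target_sketch(int64_t target)
{
    if (target < 0) {
        qerror_report(QERR_INVALID_PARAMETER_VALUE, "target", "a size");
        return -1;
    }
    /* ... hand the value to the registered balloon handler ... */
    return 0;
}
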
Amit Shah
ab640dbfc0 virtio-balloon: Check if balloon registration failed
Multiple balloon registrations are not allowed; check if the
registration with the qemu balloon API succeeded.  If not, fail the
device init.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:19 +05:30
Amit Shah
eaa8b2778c balloon: Don't allow multiple balloon handler registrations
Multiple balloon devices don't make sense; disallow more than one
attempt to register handlers.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:09:49 +05:30
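
The two balloon registration changes above reduce to a simple guard; a
self-contained sketch with toy types (the real QEMUBalloonEvent signature
differs):

#include <stdio.h>

typedef void BalloonHandler(void *opaque, long target);

static BalloonHandler *balloon_handler;
static void *balloon_opaque;

/* Refuse a second registration: only one balloon device makes sense. */
static int add_balloon_handler(BalloonHandler *fn, void *opaque)
{
    if (balloon_handler) {
        return -1;
    }
    balloon_handler = fn;
    balloon_opaque = opaque;
    return 0;
}

/* Device init checks the result and fails instead of silently continuing. */
static int balloon_device_init(void *dev_state, BalloonHandler *fn)
{
    if (add_balloon_handler(fn, dev_state) < 0) {
        fprintf(stderr, "balloon handler already registered\n");
        return -1;
    }
    return 0;
}

static void toy_handler(void *opaque, long target)
{
    (void)opaque;
    (void)target;
}

int main(void)
{
    int first  = balloon_device_init(NULL, toy_handler);   /* succeeds */
    int second = balloon_device_init(NULL, toy_handler);   /* fails    */
    printf("first=%d second=%d\n", first, second);
    return 0;
}
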
Peter Maydell
7ec7f28019 target-arm: UNDEF on a VCVTT/VCVTB UNPREDICTABLE to avoid TCG assert
VCVTT/VCVTB with bit 8 set is UNPREDICTABLE; we choose to UNDEF.
This avoids a TCG assert later when the VCVTT/VCVTB code tries to
use a source register that wasn't ever set up.

We pull the check for the presence of the half-precision extension
up into this common code as well.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-27 09:29:22 +00:00
Peter Maydell
31b1308046 target-arm: Handle UNDEF and UNPREDICTABLE cases for VLDM, VSTM
Handle the UNDEF and UNPREDICTABLE cases for VLDM and VSTM. In
particular, we now generate an undef exception for overlarge imm8
values rather than generating 1000+ TCG ops and hitting an assertion.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-27 09:29:22 +00:00
Peter Maydell
4ec648dd6e target-arm: Support v6 barriers in linux-user mode
ARMv6 implemented various operations as special cases of cp15 accesses
which are true instructions in v7; this includes barriers (DMB, DSB, ISB).
Catch this special case at translate time, so that it works in linux-user
mode (which doesn't provide a functional get_cp15 helper) as well as
system mode.

Includes minor cleanup of the existing cases (single switch statement,
and doing the "OK in user mode?" test explicitly rather than hiding it in
cp15_user_ok()).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-27 09:29:22 +00:00
Peter Maydell
e961d129e1 target-arm: Mark 1136r1 as a v6K core
The 1136r1 is actually a v6K core (unlike the 1136r0); mark it as such,
thus enabling the TLS registers, NOP hints, CLREX, half and byte wide
exclusive load/stores, etc.

The VA-to-PA translation registers are not present on 1136r1, so
introduce a new feature flag for them, which is enabled on
11MPCore and all v7 cores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-26 14:58:43 +00:00
Amit Shah
8959459386 virtio-balloon: Fix header comment; add Copyright
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2011-07-26 11:21:15 +05:30
Amit Shah
e2b40e003a balloon: Fix header comment; add Copyright
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2011-07-26 11:21:14 +05:30
Amit Shah
1a39b0fcff balloon: Separate out stat and balloon handling
Passing '0' as the ballooning target to indicate retrieval of stats is
a bad API.  It also makes 'balloon 0' in the monitor cause a segfault.
Have two separate functions handle the two cases instead.

Detailed explanation from Markus's review:

1. do_info_balloon() is an info_async() method.  It receives a callback
   with argument, to be called exactly once (callback frees the
   argument).  It passes the callback via qemu_balloon_status() and
   indirectly through qemu_balloon_event to virtio_balloon_to_target().

   virtio_balloon_to_target() executes its balloon stats half.  It
   stores the callback in the device state.

   If it can't send a stats request, it resets stats and calls the
   callback right away.

   Else, it sends a stats request.  The device model runs the callback
   when it receives the answer.

   Works.

2. do_balloon() is a cmd_async() method.  It receives a callback with
   argument, to be called when the command completes.  do_balloon()
   calls it right before it succeeds.  Odd, but should work.

   Nevertheless, it passes the callback on via qemu_balloon() and
   indirectly through qemu_balloon_event to virtio_balloon_to_target().

   a. If the argument is non-zero, virtio_balloon_to_target() executes
      its balloon half, which doesn't use the callback in any way.

      Odd, but works.

   b. If the argument is zero, virtio_balloon_to_target() executes its
      balloon stats half, just like in 1.  It either calls the callback
      right away, or arranges for it to be called later.

      Thus, the callback runs twice: use after free and double free.

Test case: start with -S -device virtio-balloon, execute "balloon 0" in
the human monitor.  Runs the callback first from virtio_balloon_to_target(),
then again from do_balloon().

Reported-by: Mike Cao <bcao@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:14 +05:30
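
A compact, self-contained illustration of the split (toy types; the real code
goes through qemu_balloon()/qemu_balloon_status() and the virtio device): the
set-target path never touches a completion callback, and only the stats path
runs it, so it can no longer fire twice for a single command.

#include <stdio.h>
#include <stdlib.h>

typedef void Completion(void *opaque);

static void balloon_set_target(long target)
{
    (void)target;                   /* only resizes; never runs a callback */
}

static void balloon_query_stats(Completion *cb, void *opaque)
{
    cb(opaque);                     /* the stats path owns the callback */
}

static void completion(void *opaque)
{
    free(opaque);                   /* would be a double free if run twice */
    printf("completed\n");
}

int main(void)
{
    balloon_set_target(128L << 20);                /* "balloon 128"  */
    balloon_query_stats(completion, malloc(16));   /* "info balloon" */
    return 0;
}
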
Amit Shah
4a97e18b87 virtio-balloon: Separate status handling into separate function
Separate out the code to retrieve balloon info from the code that sets
balloon values.

This will be used to separate the two callbacks from balloon.c and help
cope with 'balloon 0' on the monitor.  Currently, 'balloon 0' causes a
segfault in monitor_resume().

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:13 +05:30
Amit Shah
f1ee0a0ebd balloon: Simplify code flow
Replace:
  if (foo) {
    ...
  } else {
    return 0;
  }

by

  if (!foo) {
    return 0;
  }
  ...

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:13 +05:30
Amit Shah
3583bc031e balloon: Add braces around if statements
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:12 +05:30
Amit Shah
2798b5e174 balloon: Make functions, local vars static
balloon.h had function declarations for a couple of functions that are
local to balloon.c.  Make them static.

Drop the 'qemu_' prefix for balloon.c-local variables, and make them
static.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:12 +05:30
1776 changed files with 106410 additions and 319485 deletions

.gitignore

@@ -15,12 +15,7 @@ libdis*
libhw32
libhw64
libuser
linux-headers/asm
qapi-generated
qapi-types.[ch]
qapi-visit.[ch]
qmp-commands.h
qmp-marshal.c
qemu-doc.html
qemu-tech.html
qemu-doc.info
@@ -39,16 +34,8 @@ qemu-img-cmds.texi
qemu-img-cmds.h
qemu-io
qemu-ga
qemu-bridge-helper
qemu-monitor.texi
QMP/qmp-commands.txt
test-coroutine
test-qmp-input-visitor
test-qmp-output-visitor
test-string-input-visitor
test-string-output-visitor
fsdev/virtfs-proxy-helper.1
fsdev/virtfs-proxy-helper.pod
.gdbinit
*.a
*.aux
@@ -76,14 +63,10 @@ patches
pc-bios/bios-pq/status
pc-bios/vgabios-pq/status
pc-bios/optionrom/linuxboot.bin
pc-bios/optionrom/linuxboot.raw
pc-bios/optionrom/linuxboot.img
pc-bios/optionrom/multiboot.bin
pc-bios/optionrom/multiboot.raw
pc-bios/optionrom/multiboot.img
pc-bios/optionrom/kvmvapic.bin
pc-bios/optionrom/kvmvapic.raw
pc-bios/optionrom/kvmvapic.img
pc-bios/optionrom/extboot.bin
pc-bios/optionrom/vapic.bin
.stgit-*
cscope.*
tags

.gitmodules

@@ -1,6 +1,6 @@
[submodule "roms/vgabios"]
path = roms/vgabios
url = git://git.qemu.org/vgabios.git/
url = git://git.kernel.org/pub/scm/virt/kvm/vgabios.git/
[submodule "roms/seabios"]
path = roms/seabios
url = git://git.qemu.org/seabios.git/
@@ -10,12 +10,3 @@
[submodule "roms/ipxe"]
path = roms/ipxe
url = git://git.qemu.org/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
url = git://git.qemu.org/openbios.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = git://repo.or.cz/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = git://git.qemu.org/sgabios.git

.mailmap

@@ -1,16 +0,0 @@
# This mailmap just translates the weird addresses from the original import into git
# into proper addresses so that they are counted properly in git shortlog output.
#
Andrzej Zaborowski <balrogg@gmail.com> balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <aliguori@us.ibm.com> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162>
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>
Fabrice Bellard <fabrice@bellard.org> bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162>
Jocelyn Mayer <l_indien@magic.fr> j_mayer <j_mayer@c046a42c-6fe2-441c-8c8c-71466251a162>
Paul Brook <paul@codesourcery.com> pbrook <pbrook@c046a42c-6fe2-441c-8c8c-71466251a162>
Thiemo Seufer <ths@networkno.de> ths <ths@c046a42c-6fe2-441c-8c8c-71466251a162>
malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.

CODING_STYLE

@@ -1,4 +1,4 @@
QEMU Coding Style
Qemu Coding Style
=================
Please use the script checkpatch.pl in the scripts directory to check
@@ -44,8 +44,7 @@ Rationale:
3. Naming
Variables are lower_case_with_underscores; easy to type and read. Structured
type names are in CamelCase; harder to type but standing out. Enum type
names and function type names should also be in CamelCase. Scalar type
type names are in CamelCase; harder to type but standing out. Scalar type
names are lower_case_with_underscores_ending_with_a_t, like the POSIX
uint64_t and family. Note that this last convention contradicts POSIX
and is therefore likely to be changed.
@@ -69,10 +68,6 @@ keyword. Example:
printf("a was something else entirely.\n");
}
Note that 'else if' is considered a single statement; otherwise a long if/
else if/else if/.../else sequence would need an indent for every else
statement.
An exception is the opening brace for a function; for reasons of tradition
and clarity it comes on a line by itself:

Changelog

@@ -78,7 +78,7 @@ version 0.10.2:
- fix savevm/loadvm (Anthony Liguori)
- live migration: fix dirty tracking windows (Glauber Costa)
- live migration: improve error propagation (Glauber Costa)
- live migration: improve error propogation (Glauber Costa)
- qcow2: fix image creation for > ~2TB images (Chris Wright)
- hotplug: fix error handling for if= parameter (Eduardo Habkost)
- qcow2: fix data corruption (Nolan Leake)
@@ -386,7 +386,7 @@ version 0.5.3:
- support of CD-ROM change
- multiple network interface support
- initial x86-64 host support (Gwenole Beauchesne)
- lret to outer privilege fix (OS/2 install fix)
- lret to outer priviledge fix (OS/2 install fix)
- task switch fixes (SkyOS boot)
- VM save/restore commands
- new timer API
@@ -447,7 +447,7 @@ version 0.5.0:
- multi-target build
- fixed: no error code in hardware interrupts
- fixed: pop ss, mov ss, x and sti disable hardware irqs for the next insn
- correct single stepping through string operations
- correct single stepping thru string operations
- preliminary SPARC target support (Thomas M. Ogrisegg)
- tun-fd option (Rusty Russell)
- automatic IDE geometry detection

HACKING

@@ -77,13 +77,11 @@ avoided.
Use of the malloc/free/realloc/calloc/valloc/memalign/posix_memalign
APIs is not allowed in the QEMU codebase. Instead of these routines,
use the GLib memory allocation routines g_malloc/g_malloc0/g_new/
g_new0/g_realloc/g_free or QEMU's qemu_vmalloc/qemu_memalign/qemu_vfree
APIs.
use the replacement qemu_malloc/qemu_mallocz/qemu_realloc/qemu_free or
qemu_vmalloc/qemu_memalign/qemu_vfree APIs.
Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Please note that NULL check for the qemu_malloc result is redundant and
that qemu_malloc() call with zero size is not allowed.
Memory allocated by qemu_vmalloc or qemu_memalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32 and user
@@ -110,7 +108,7 @@ int qemu_strnlen(const char *s, int max_len)
There are also replacement character processing macros for isxyz and toxyz,
so instead of e.g. isalnum you should use qemu_isalnum.
Because of the memory management rules, you must use g_strdup/g_strndup
Because of the memory management rules, you must use qemu_strdup/qemu_strndup
instead of plain strdup/strndup.
5. Printf-style functions

LICENSE

@@ -6,7 +6,9 @@ The following points clarify the QEMU license:
GNU General Public License. Hence each source file contains its own
licensing information.
Many hardware device emulation sources are released under the BSD license.
In particular, the QEMU virtual CPU core library (libqemu.a) is
released under the GNU Lesser General Public License. Many hardware
device emulation sources are released under the BSD license.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).

MAINTAINERS

@@ -20,7 +20,7 @@ Descriptions of section entries:
Supported: Someone is actually paid to look after this.
Maintained: Someone actually looks after it.
Odd Fixes: It has a maintainer but they don't have time to do
much other than throw the odd patch in. See below.
much other than throw the odd patch in. See below..
Orphan: No current maintainer [but maybe you could take the
role as you write your new code].
Obsolete: Old code. Something tagged obsolete generally means
@@ -62,7 +62,6 @@ F: target-alpha/
ARM
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: target-arm/
@@ -78,7 +77,7 @@ F: target-lm32/
M68K
M: Paul Brook <paul@codesourcery.com>
S: Odd Fixes
S: Maintained
F: target-m68k/
MicroBlaze
@@ -88,12 +87,11 @@ F: target-microblaze/
MIPS
M: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
S: Maintained
F: target-mips/
PowerPC
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: target-ppc/
@@ -104,7 +102,7 @@ F: target-s390x/
SH4
M: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
S: Maintained
F: target-sh4/
SPARC
@@ -112,22 +110,11 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: target-sparc/
UniCore32
M: Guan Xuetao <gxt@mprc.pku.edu.cn>
S: Maintained
F: target-unicore32/
X86
M: qemu-devel@nongnu.org
S: Odd Fixes
F: target-i386/
Xtensa
M: Max Filippov <jcmvbkbc@gmail.com>
W: http://wiki.osll.spb.ru/doku.php?id=etc:users:jcmvbkbc:qemu-target-xtensa
S: Maintained
F: target-xtensa/
Guest CPU Cores (KVM):
----------------------
@@ -156,52 +143,8 @@ L: kvm@vger.kernel.org
S: Supported
F: target-i386/kvm.c
Guest CPU Cores (Xen):
----------------------
X86
M: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
L: xen-devel@lists.xensource.com
S: Supported
F: xen-*
F: */xen*
Hosts:
------
LINUX
L: qemu-devel@nongnu.org
S: Maintained
F: linux-*
F: linux-headers/
POSIX
L: qemu-devel@nongnu.org
S: Maintained
F: *posix*
W32, W64
L: qemu-devel@nongnu.org
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: *win32*
ARM Machines
------------
Exynos
M: Evgeny Voevodin <e.voevodin@samsung.com>
M: Maksim Kozlov <m.kozlov@samsung.com>
M: Igor Mitsyanko <i.mitsyanko@samsung.com>
M: Dmitry Solodkiy <d.solodkiy@samsung.com>
S: Maintained
F: hw/exynos*
Calxeda Highbank
M: Mark Langsdorf <mark.langsdorf@calxeda.com>
S: Supported
F: hw/highbank.c
F: hw/xgmac.c
Gumstix
M: qemu-devel@nongnu.org
S: Orphan
@@ -209,7 +152,6 @@ F: hw/gumstix.c
Integrator CP
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/integratorcp.c
@@ -235,7 +177,6 @@ F: hw/palm.c
Real View
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/realview*
@@ -246,23 +187,14 @@ F: hw/spitz.c
Stellaris
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/stellaris.c
Versatile PB
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/versatilepb.c
Xilinx Zynq
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/xilinx_zynq.c
F: hw/zynq_slcr.c
F: hw/cadence_*
CRIS Machines
-------------
Axis Dev88
@@ -337,31 +269,23 @@ PowerPC Machines
----------------
405
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc405_boards.c
New World
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc_newworld.c
F: hw/unin_pci.c
F: hw/dec_pci.[hc]
Old World
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc_oldworld.c
F: hw/grackle_pci.c
PReP
M: Andreas Färber <andreas.faerber@web.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
Prep
M: qemu-devel@nongnu.org
S: Orphan
F: hw/ppc_prep.c
F: hw/prep_pci.[hc]
SH4 Machines
------------
@@ -399,20 +323,7 @@ X86 Machines
PC
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: hw/pc.[ch]
F: hw/pc_piix.c
Xtensa Machines
---------------
sim
M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained
F: hw/xtensa_sim.c
Avnet LX60
M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained
F: hw/xtensa_lx60.c
F: hw/pc.[ch] hw/pc_piix.c
Devices
-------
@@ -421,11 +332,6 @@ M: Kevin Wolf <kwolf@redhat.com>
S: Odd Fixes
F: hw/ide/
OMAP
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/omap*
PCI
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
@@ -433,16 +339,11 @@ F: hw/pci*
F: hw/piix*
SCSI
M: Paolo Bonzini <pbonzini@redhat.com>
S: Supported
F: hw/virtio-scsi.*
F: hw/scsi*
T: git git://github.com/bonzini/qemu.git scsi-next
LSI53C895A
M: Paul Brook <paul@codesourcery.com>
M: Kevin Wolf <kwolf@redhat.com>
S: Odd Fixes
F: hw/lsi53c895a.c
F: hw/scsi*
USB
M: Gerd Hoffmann <kraxel@redhat.com>
@@ -460,11 +361,9 @@ S: Supported
F: hw/virtio*
virtio-9p
M: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
M: Venkateswararao Jujjuri (JV) <jvrao@linux.vnet.ibm.com>
S: Supported
F: hw/9pfs/
F: fsdev/
T: git git://github.com/kvaneesh/QEMU.git
F: hw/virtio-9p*
virtio-blk
M: Kevin Wolf <kwolf@redhat.com>
@@ -514,11 +413,6 @@ M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: ui/
Cocoa graphics
M: Andreas Färber <andreas.faerber@web.de>
S: Odd Fixes
F: ui/cocoa.m
Main loop
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
@@ -536,33 +430,10 @@ M: Mark McLoughlin <markmc@redhat.com>
S: Maintained
F: net/
Network Block Device (NBD)
M: Paolo Bonzini <pbonzini@redhat.com>
S: Odd Fixes
F: block/nbd.c
F: nbd.*
F: qemu-nbd.c
T: git git://github.com/bonzini/qemu.git nbd-next
SLIRP
M: Jan Kiszka <jan.kiszka@siemens.com>
S: Maintained
M: qemu-devel@nongnu.org
S: Orphan
F: slirp/
T: git git://git.kiszka.org/qemu.git queues/slirp
Tracing
M: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
S: Maintained
F: trace/
F: scripts/tracetool.py
F: scripts/tracetool/
F: docs/tracing.txt
T: git git://github.com/stefanha/qemu.git tracing
Checkpatch
M: Blue Swirl <blauwirbel@gmail.com>
S: Odd Fixes
F: scripts/checkpatch.pl
Usermode Emulation
------------------
@@ -571,6 +442,11 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: bsd-user/
Darwin user
M: qemu-devel@nongnu.org
S: Orphan
F: darwin-user/
Linux user
M: Riku Voipio <riku.voipio@iki.fi>
S: Maintained
@@ -628,30 +504,3 @@ SPARC target
M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: tcg/sparc/
TCI target
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: tcg/tci/
Stable branches
---------------
Stable 1.0
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-1.0.git
S: Orphan
Stable 0.15
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-0.15.git
S: Orphan
Stable 0.14
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-0.14.git
S: Orphan
Stable 0.10
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-0.10.git
S: Orphan

Makefile

@@ -1,9 +1,10 @@
# Makefile for QEMU.
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
GENERATED_HEADERS = config-host.h trace.h qemu-options.def
ifeq ($(TRACE_BACKEND),dtrace)
GENERATED_HEADERS += trace-dtrace.h
endif
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
all: build-all
@@ -18,13 +19,6 @@ config-host.mak:
@exit 1
endif
GENERATED_HEADERS = config-host.h trace.h qemu-options.def
ifeq ($(TRACE_BACKEND),dtrace)
GENERATED_HEADERS += trace-dtrace.h
endif
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c trace.c
# Don't try to regenerate Makefile or configure
# We don't generate any of them
Makefile: ;
@@ -37,18 +31,13 @@ $(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 QMP/qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
else
DOCS=
endif
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory) BUILD_DIR=$(BUILD_DIR)
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory)
SUBDIR_DEVICES_MAK=$(patsubst %, %/config-devices.mak, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %/config-devices.mak.d, $(TARGET_DIRS))
@@ -82,7 +71,7 @@ defconfig:
-include config-all-devices.mak
build-all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
build-all: $(DOCS) $(TOOLS) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
@@ -98,12 +87,12 @@ ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
$(universal-obj-y) $(common-obj-y): $(GENERATED_HEADERS)
subdir-libcacard: $(oslib-obj-y) $(trace-obj-y) qemu-timer-common.o
$(common-obj-y): $(GENERATED_HEADERS)
subdir-libcacard: $(oslib-obj-y) $(trace-obj-y) qemu-malloc.o qemu-timer-common.o
$(filter %-softmmu,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) $(common-obj-y) subdir-libdis
$(filter %-softmmu,$(SUBDIR_RULES)): $(trace-obj-y) $(common-obj-y) subdir-libdis
$(filter %-user,$(SUBDIR_RULES)): $(GENERATED_HEADERS) $(universal-obj-y) $(trace-obj-y) subdir-libdis-user subdir-libuser
$(filter %-user,$(SUBDIR_RULES)): $(GENERATED_HEADERS) $(trace-obj-y) subdir-libdis-user subdir-libuser
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
romsubdir-%:
@@ -117,7 +106,7 @@ audio/audio.o audio/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
QEMU_CFLAGS+=$(CURL_CFLAGS)
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS+=$(GLIB_CFLAGS)
ui/cocoa.o: ui/cocoa.m
@@ -127,7 +116,7 @@ ui/vnc.o: QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
bt-host.o: QEMU_CFLAGS += $(BLUEZ_CFLAGS)
version.o: $(SRC_PATH)/version.rc config-host.h
version.o: $(SRC_PATH)/version.rc config-host.mak
$(call quiet-command,$(WINDRES) -I. -o $@ $<," RC $(TARGET_DIR)$@")
version-obj-$(CONFIG_WIN32) += version.o
@@ -142,7 +131,7 @@ libcacard.la:
install-libcacard:
@echo "libtool is missing, please install and rerun configure"; exit 1
else
libcacard.la: $(GENERATED_HEADERS) $(oslib-obj-y) qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
libcacard.la: $(GENERATED_HEADERS) $(oslib-obj-y) qemu-malloc.o qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" libcacard.la,)
install-libcacard: libcacard.la
@@ -153,60 +142,61 @@ endif
qemu-img.o: qemu-img-cmds.h
qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-ga.o: $(GENERATED_HEADERS)
tools-obj-y = $(oslib-obj-y) $(trace-obj-y) qemu-tool.o qemu-timer.o \
qemu-timer-common.o main-loop.o notify.o iohandler.o cutils.o async.o
tools-obj-$(CONFIG_POSIX) += compatfd.o
qemu-img$(EXESUF): qemu-img.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
qemu-img$(EXESUF): qemu-img.o $(tools-obj-y) $(block-obj-y)
qemu-nbd$(EXESUF): qemu-nbd.o $(tools-obj-y) $(block-obj-y)
qemu-io$(EXESUF): qemu-io.o cmd.o $(tools-obj-y) $(block-obj-y)
qemu-nbd$(EXESUF): qemu-nbd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
qemu-bridge-helper.o: $(GENERATED_HEADERS)
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o oslib-posix.o $(trace-obj-y)
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-io$(EXESUF): qemu-io.o cmd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
check-qint.o check-qstring.o check-qdict.o check-qlist.o check-qfloat.o check-qjson.o: $(GENERATED_HEADERS)
CHECK_PROG_DEPS = qemu-malloc.o $(oslib-obj-y) $(trace-obj-y) qemu-tool.o
check-qint: check-qint.o qint.o $(CHECK_PROG_DEPS)
check-qstring: check-qstring.o qstring.o $(CHECK_PROG_DEPS)
check-qdict: check-qdict.o qdict.o qfloat.o qint.o qstring.o qbool.o qlist.o $(CHECK_PROG_DEPS)
check-qlist: check-qlist.o qlist.o qint.o $(CHECK_PROG_DEPS)
check-qfloat: check-qfloat.o qfloat.o $(CHECK_PROG_DEPS)
check-qjson: check-qjson.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o qjson.o json-streamer.o json-lexer.o json-parser.o error.o qerror.o qemu-error.o $(CHECK_PROG_DEPS)
$(qapi-obj-y): $(GENERATED_HEADERS)
qapi-dir := $(BUILD_DIR)/qapi-generated
qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
qemu-ga$(EXESUF): QEMU_CFLAGS += -I $(qapi-dir)
qapi-dir := qapi-generated
test-visitor.o test-qmp-commands.o qemu-ga$(EXESUF): QEMU_CFLAGS += -I $(qapi-dir)
gen-out-type = $(subst .,-,$(suffix $@))
$(qapi-dir)/test-qapi-types.c: $(qapi-dir)/test-qapi-types.h
$(qapi-dir)/test-qapi-types.h: $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py -o "$(qapi-dir)" -p "test-" < $<, " GEN $@")
$(qapi-dir)/test-qapi-visit.c: $(qapi-dir)/test-qapi-visit.h
$(qapi-dir)/test-qapi-visit.h: $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py -o "$(qapi-dir)" -p "test-" < $<, " GEN $@")
$(qapi-dir)/test-qmp-commands.h: $(qapi-dir)/test-qmp-marshal.c
$(qapi-dir)/test-qmp-marshal.c: $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py -o "$(qapi-dir)" -p "test-" < $<, " GEN $@")
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
$(qapi-dir)/qga-qapi-types.c: $(qapi-dir)/qga-qapi-types.h
$(qapi-dir)/qga-qapi-types.h: $(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qapi-visit.c: $(qapi-dir)/qga-qapi-visit.h
$(qapi-dir)/qga-qapi-visit.h: $(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qmp-marshal.c: $(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qapi-types.c $(qapi-dir)/qga-qapi-types.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qapi-visit.c $(qapi-dir)/qga-qapi-visit.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qmp-commands.h $(qapi-dir)/qga-qmp-marshal.c :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
test-visitor.o: $(addprefix $(qapi-dir)/, test-qapi-types.c test-qapi-types.h test-qapi-visit.c test-qapi-visit.h) $(qapi-obj-y)
test-visitor: test-visitor.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o $(qapi-obj-y) error.o osdep.o qemu-malloc.o $(oslib-obj-y) qjson.o json-streamer.o json-lexer.o json-parser.o qerror.o qemu-error.o qemu-tool.o $(qapi-dir)/test-qapi-visit.o $(qapi-dir)/test-qapi-types.o
qapi-types.c qapi-types.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "." < $<, " GEN $@")
qapi-visit.c qapi-visit.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "." < $<, " GEN $@")
qmp-commands.h qmp-marshal.c :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -m -o "." < $<, " GEN $@")
test-qmp-commands.o: $(addprefix $(qapi-dir)/, test-qapi-types.c test-qapi-types.h test-qapi-visit.c test-qapi-visit.h test-qmp-marshal.c test-qmp-commands.h) $(qapi-obj-y)
test-qmp-commands: test-qmp-commands.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o $(qapi-obj-y) error.o osdep.o qemu-malloc.o $(oslib-obj-y) qjson.o json-streamer.o json-lexer.o json-parser.o qerror.o qemu-error.o qemu-tool.o $(qapi-dir)/test-qapi-visit.o $(qapi-dir)/test-qapi-types.o $(qapi-dir)/test-qmp-marshal.o module.o
QGALIB_OBJ=$(addprefix $(qapi-dir)/, qga-qapi-types.o qga-qapi-visit.o qga-qmp-marshal.o)
QGALIB_GEN=$(addprefix $(qapi-dir)/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(QGALIB_OBJ): $(QGALIB_GEN) $(GENERATED_HEADERS)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN) $(GENERATED_HEADERS)
QGALIB=qga/guest-agent-command-state.o qga/guest-agent-commands.o
QGALIB_GEN=$(addprefix $(qapi-dir)/, qga-qapi-types.c qga-qapi-types.h qga-qapi-visit.c qga-qmp-marshal.c)
qemu-ga$(EXESUF): qemu-ga.o $(qga-obj-y) $(tools-obj-y) $(qapi-obj-y) $(qobject-obj-y) $(version-obj-y) $(QGALIB_OBJ)
$(QGALIB_GEN): $(GENERATED_HEADERS)
$(QGALIB) qemu-ga.o: $(QGALIB_GEN) $(qapi-obj-y)
qemu-ga$(EXESUF): qemu-ga.o $(QGALIB) qemu-tool.o qemu-error.o error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) $(qapi-obj-y) qemu-timer-common.o qemu-sockets.o module.o qapi/qmp-dispatch.o qapi/qmp-registry.o $(qapi-dir)/qga-qapi-visit.o $(qapi-dir)/qga-qapi-types.o $(qapi-dir)/qga-qmp-marshal.o
QEMULIBS=libhw32 libhw64 libuser libdis libdis-user
@@ -214,19 +204,15 @@ clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
rm -f *.o *.d *.a *.lo $(TOOLS) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f *.o *.d *.a *.lo $(TOOLS) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -Rf .libs
rm -f slirp/*.o slirp/*.d audio/*.o audio/*.d block/*.o block/*.d net/*.o net/*.d fsdev/*.o fsdev/*.d ui/*.o ui/*.d qapi/*.o qapi/*.d qga/*.o qga/*.d
rm -f qom/*.o qom/*.d
rm -f qemu-img-cmds.h
rm -f trace/*.o trace/*.d
rm -f trace.c trace.h trace.c-timestamp trace.h-timestamp
rm -f trace-dtrace.dtrace trace-dtrace.dtrace-timestamp
@# May not be present in GENERATED_HEADERS
rm -f trace-dtrace.h trace-dtrace.h-timestamp
rm -f $(foreach f,$(GENERATED_HEADERS),$(f) $(f)-timestamp)
rm -f $(foreach f,$(GENERATED_SOURCES),$(f) $(f)-timestamp)
rm -rf $(qapi-dir)
$(MAKE) -C tests/tcg clean
$(MAKE) -C tests clean
for d in $(ALL_SUBDIRS) $(QEMULIBS) libcacard; do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
@@ -240,8 +226,6 @@ distclean: clean
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
rm -f qemu-doc.log qemu-doc.pdf qemu-doc.pg qemu-doc.toc qemu-doc.tp
rm -f qemu-doc.vr
rm -f config.log
rm -f linux-headers/asm
rm -f qemu-tech.info qemu-tech.aux qemu-tech.cp qemu-tech.dvi qemu-tech.fn qemu-tech.info qemu-tech.ky qemu-tech.log qemu-tech.pdf qemu-tech.pg qemu-tech.toc qemu-tech.tp qemu-tech.vr
for d in $(TARGET_DIRS) $(QEMULIBS); do \
rm -rf $$d || exit 1 ; \
@@ -252,64 +236,55 @@ ar de en-us fi fr-be hr it lv nl pl ru th \
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr
ifdef INSTALL_BLOBS
BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
BLOBS=bios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
qemu-icon.bmp \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
mpc8544ds.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
multiboot.bin linuxboot.bin \
s390-zipl.rom \
spapr-rtas.bin slof.bin \
palcode-clipper
spapr-rtas.bin slof.bin
BLOBS += extboot.bin
BLOBS += vapic.bin
else
BLOBS=
endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) QMP/qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DIR) "$(DESTDIR)$(docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 qemu-img.1 "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man8"
$(INSTALL_DATA) qemu-nbd.8 "$(DESTDIR)$(mandir)/man8"
endif
ifdef CONFIG_VIRTFS
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) fsdev/virtfs-proxy-helper.1 "$(DESTDIR)$(mandir)/man1"
endif
install-datadir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)"
install-sysconfig:
$(INSTALL_DIR) "$(DESTDIR)$(sysconfdir)/qemu"
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(sysconfdir)/qemu"
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/cpus-x86_64.conf "$(DESTDIR)$(qemu_datadir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig install-datadir
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
ifneq ($(TOOLS),)
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(HELPERS-y),)
$(INSTALL_DIR) "$(DESTDIR)$(libexecdir)"
$(INSTALL_PROG) $(STRIP_OPT) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
endif
ifneq ($(BLOBS),)
$(INSTALL_DIR) "$(DESTDIR)$(datadir)"
set -e; for x in $(BLOBS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(qemu_datadir)"; \
if [ -f $(SRC_PATH)/pc-bios/$$x ];then \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(datadir)"; \
fi \
; if [ -f pc-bios/optionrom/$$x ];then \
$(INSTALL_DATA) pc-bios/optionrom/$$x "$(DESTDIR)$(datadir)"; \
fi \
done
endif
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/keymaps"
$(INSTALL_DIR) "$(DESTDIR)$(datadir)/keymaps"
set -e; for x in $(KEYMAPS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(datadir)/keymaps"; \
done
for d in $(TARGET_DIRS); do \
$(MAKE) -C $$d $@ || exit 1 ; \
@@ -317,7 +292,7 @@ endif
# various test targets
test speed: all
$(MAKE) -C tests/tcg $@
$(MAKE) -C tests $@
.PHONY: TAGS
TAGS:
@@ -325,7 +300,7 @@ TAGS:
cscope:
rm -f ./cscope.*
find "$(SRC_PATH)" -name "*.[chsS]" -print | sed 's,^\./,,' > ./cscope.files
find . -name "*.[ch]" -print | sed 's,^\./,,' > ./cscope.files
cscope -b
# documentation
@@ -336,7 +311,7 @@ TEXIFLAG=$(if $(V),,--quiet)
$(call quiet-command,texi2dvi $(TEXIFLAG) -I . $<," GEN $@")
%.html: %.texi
$(call quiet-command,LC_ALL=C $(MAKEINFO) $(MAKEINFOFLAGS) --html $< -o $@, \
$(call quiet-command,$(MAKEINFO) $(MAKEINFOFLAGS) --html $< -o $@, \
" GEN $@")
%.info: %.texi
@@ -360,25 +335,19 @@ qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu.pod > $@, \
pod2man --section=1 --center=" " --release=" " qemu.pod > $@, \
" GEN $@")
qemu-img.1: qemu-img.texi qemu-img-cmds.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-img.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu-img.pod > $@, \
" GEN $@")
fsdev/virtfs-proxy-helper.1: fsdev/virtfs-proxy-helper.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< fsdev/virtfs-proxy-helper.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " fsdev/virtfs-proxy-helper.pod > $@, \
pod2man --section=1 --center=" " --release=" " qemu-img.pod > $@, \
" GEN $@")
qemu-nbd.8: qemu-nbd.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-nbd.pod && \
$(POD2MAN) --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
pod2man --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
" GEN $@")
dvi: qemu-doc.dvi qemu-tech.dvi
@@ -400,5 +369,43 @@ tar:
cd /tmp && tar zcvf ~/$(FILE).tar.gz $(FILE) --exclude CVS --exclude .git --exclude .svn
rm -rf /tmp/$(FILE)
SYSTEM_TARGETS=$(filter %-softmmu,$(TARGET_DIRS))
SYSTEM_PROGS=$(patsubst qemu-system-i386,qemu, \
$(patsubst %-softmmu,qemu-system-%, \
$(SYSTEM_TARGETS)))
USER_TARGETS=$(filter %-user,$(TARGET_DIRS))
USER_PROGS=$(patsubst %-bsd-user,qemu-%, \
$(patsubst %-darwin-user,qemu-%, \
$(patsubst %-linux-user,qemu-%, \
$(USER_TARGETS))))
# generate a binary distribution
tarbin:
cd / && tar zcvf ~/qemu-$(VERSION)-$(ARCH).tar.gz \
$(patsubst %,$(bindir)/%, $(SYSTEM_PROGS)) \
$(patsubst %,$(bindir)/%, $(USER_PROGS)) \
$(bindir)/qemu-img \
$(bindir)/qemu-nbd \
$(datadir)/bios.bin \
$(datadir)/vgabios.bin \
$(datadir)/vgabios-cirrus.bin \
$(datadir)/ppc_rom.bin \
$(datadir)/openbios-sparc32 \
$(datadir)/openbios-sparc64 \
$(datadir)/openbios-ppc \
$(datadir)/pxe-e1000.rom \
$(datadir)/pxe-eepro100.rom \
$(datadir)/pxe-ne2k_pci.rom \
$(datadir)/pxe-pcnet.rom \
$(datadir)/pxe-rtl8139.rom \
$(datadir)/pxe-virtio.rom \
$(datadir)/extboot.bin \
$(docdir)/qemu-doc.html \
$(docdir)/qemu-tech.html \
$(mandir)/man1/qemu.1 \
$(mandir)/man1/qemu-img.1 \
$(mandir)/man8/qemu-nbd.8
# Include automatically generated dependency files
-include $(wildcard *.d audio/*.d slirp/*.d block/*.d net/*.d ui/*.d qapi/*.d qga/*.d)

Makefile.hw

@@ -9,8 +9,7 @@ include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS+=-I.. -I$(SRC_PATH)/fpu
include $(SRC_PATH)/Makefile.objs

Makefile.objs

@@ -1,22 +1,8 @@
#######################################################################
# Target-independent parts used in system and user emulation
universal-obj-y =
#######################################################################
# QObject
qobject-obj-y = qint.o qstring.o qdict.o qlist.o qfloat.o qbool.o
qobject-obj-y += qjson.o json-lexer.o json-streamer.o json-parser.o
qobject-obj-y += qerror.o error.o qemu-error.o
universal-obj-y += $(qobject-obj-y)
#######################################################################
# QOM
include $(SRC_PATH)/qom/Makefile
qom-obj-y = $(addprefix qom/, $(qom-y))
qom-obj-twice-y = $(addprefix qom/, $(qom-twice-y))
universal-obj-y += $(qom-obj-y)
qobject-obj-y += qerror.o error.o
#######################################################################
# oslib-obj-y is code depending on the OS (win32 vs posix)
@@ -24,39 +10,22 @@ oslib-obj-y = osdep.o
oslib-obj-$(CONFIG_WIN32) += oslib-win32.o qemu-thread-win32.o
oslib-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-thread-posix.o
#######################################################################
# coroutines
coroutine-obj-y = qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
coroutine-obj-y += qemu-coroutine-sleep.o
ifeq ($(CONFIG_UCONTEXT_COROUTINE),y)
coroutine-obj-$(CONFIG_POSIX) += coroutine-ucontext.o
else
ifeq ($(CONFIG_SIGALTSTACK_COROUTINE),y)
coroutine-obj-$(CONFIG_POSIX) += coroutine-sigaltstack.o
else
coroutine-obj-$(CONFIG_POSIX) += coroutine-gthread.o
endif
endif
coroutine-obj-$(CONFIG_WIN32) += coroutine-win32.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = cutils.o cache-utils.o qemu-option.o module.o async.o
block-obj-y = cutils.o cache-utils.o qemu-malloc.o qemu-option.o module.o async.o
block-obj-y += nbd.o block.o aio.o aes.o qemu-config.o qemu-progress.o qemu-sockets.o
block-obj-y += $(coroutine-obj-y) $(qobject-obj-y) $(version-obj-y)
block-obj-$(CONFIG_POSIX) += posix-aio-compat.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-$(CONFIG_POSIX) += compatfd.o
block-nested-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-nested-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-nested-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-nested-y += qed-check.o
block-nested-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
block-nested-y += stream.o
block-nested-$(CONFIG_WIN32) += raw-win32.o
block-nested-$(CONFIG_POSIX) += raw-posix.o
block-nested-$(CONFIG_LIBISCSI) += iscsi.o
block-nested-$(CONFIG_CURL) += curl.o
block-nested-$(CONFIG_RBD) += rbd.o
@@ -81,28 +50,29 @@ ifeq ($(CONFIG_VIRTIO)$(CONFIG_VIRTFS)$(CONFIG_PCI),yyy)
# Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
# only pull in the actual virtio-9p device if we also enabled virtio.
CONFIG_REALLY_VIRTFS=y
fsdev-nested-y = qemu-fsdev.o virtio-9p-marshal.o
fsdev-nested-y = qemu-fsdev.o
else
fsdev-nested-y = qemu-fsdev-dummy.o
endif
fsdev-obj-$(CONFIG_VIRTFS) += $(addprefix fsdev/, $(fsdev-nested-y))
######################################################################
# Target independent part of system emulation. The long term path is to
# suppress *all* target specific code in case of system emulation, i.e. a
# single QEMU executable should support all CPUs and machines.
# libqemu_common.a: Target independent part of system emulation. The
# long term path is to suppress *all* target specific code in case of
# system emulation, i.e. a single QEMU executable should support all
# CPUs and machines.
common-obj-y = $(block-obj-y) blockdev.o
common-obj-y += $(net-obj-y)
common-obj-y += $(qom-obj-twice-y)
common-obj-y += $(qobject-obj-y)
common-obj-$(CONFIG_LINUX) += $(fsdev-obj-$(CONFIG_LINUX))
common-obj-y += readline.o console.o cursor.o
common-obj-y += readline.o console.o cursor.o qemu-error.o
common-obj-y += $(oslib-obj-y)
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-y += tcg-runtime.o host-utils.o main-loop.o
common-obj-y += irq.o input.o
common-obj-y += tcg-runtime.o host-utils.o
common-obj-y += irq.o ioport.o input.o
common-obj-$(CONFIG_PTIMER) += ptimer.o
common-obj-$(CONFIG_MAX7310) += max7310.o
common-obj-$(CONFIG_WM8750) += wm8750.o
@@ -120,20 +90,17 @@ common-obj-y += i2c.o smbus.o smbus_eeprom.o
common-obj-y += eeprom93xx.o
common-obj-y += scsi-disk.o cdrom.o
common-obj-y += scsi-generic.o scsi-bus.o
common-obj-y += hid.o
common-obj-y += usb/core.o usb/bus.o usb/desc.o usb/dev-hub.o
common-obj-y += usb/host-$(HOST_USB).o
common-obj-y += usb/dev-hid.o usb/dev-storage.o usb/dev-wacom.o
common-obj-y += usb/dev-serial.o usb/dev-network.o usb/dev-audio.o
common-obj-y += usb.o usb-hub.o usb-$(HOST_USB).o usb-hid.o usb-msd.o usb-wacom.o
common-obj-y += usb-serial.o usb-net.o usb-bus.o usb-desc.o
common-obj-$(CONFIG_SSI) += ssi.o
common-obj-$(CONFIG_SSI_SD) += ssi-sd.o
common-obj-$(CONFIG_SD) += sd.o
common-obj-y += bt.o bt-host.o bt-vhci.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
common-obj-y += bt-hci-csr.o usb/dev-bluetooth.o
common-obj-y += bt.o bt-host.o bt-vhci.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o usb-bt.o
common-obj-y += bt-hci-csr.o
common-obj-y += buffered_file.o migration.o migration-tcp.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += qemu-char.o savevm.o #aio.o
common-obj-y += msmouse.o ps2.o
common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
common-obj-y += qdev.o qdev-properties.o
common-obj-y += block-migration.o iohandler.o
common-obj-y += pflib.o
common-obj-y += bitmap.o bitops.o
@@ -179,13 +146,13 @@ common-obj-y += $(addprefix ui/, $(ui-obj-y))
common-obj-$(CONFIG_VNC) += $(addprefix ui/, $(vnc-obj-y))
common-obj-y += iov.o acl.o
common-obj-$(CONFIG_POSIX) += compatfd.o
#common-obj-$(CONFIG_POSIX) += compatfd.o
common-obj-y += notify.o event_notifier.o
common-obj-y += qemu-timer.o qemu-timer-common.o
slirp-obj-y = cksum.o if.o ip_icmp.o ip_input.o ip_output.o
slirp-obj-y += slirp.o mbuf.o misc.o sbuf.o socket.o tcp_input.o tcp_output.o
slirp-obj-y += tcp_subr.o tcp_timer.o udp.o bootp.o tftp.o arp_table.o
slirp-obj-y += tcp_subr.o tcp_timer.o udp.o bootp.o tftp.o
common-obj-$(CONFIG_SLIRP) += $(addprefix slirp/, $(slirp-obj-y))
# xen backend driver support
@@ -199,24 +166,17 @@ user-obj-y =
user-obj-y += envlist.o path.o
user-obj-y += tcg-runtime.o host-utils.o
user-obj-y += cutils.o cache-utils.o
user-obj-y += module.o
user-obj-y += qemu-user.o
user-obj-y += $(trace-obj-y)
user-obj-y += $(qom-obj-twice-y)
######################################################################
# libhw
hw-obj-y =
hw-obj-y += vl.o loader.o
hw-obj-y += loader.o
hw-obj-$(CONFIG_VIRTIO) += virtio-console.o
hw-obj-y += usb/libhw.o
hw-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
hw-obj-y += fw_cfg.o
hw-obj-$(CONFIG_PCI) += pci.o pci_bridge.o pci_bridge_dev.o
hw-obj-$(CONFIG_PCI) += pci_bridge.o
hw-obj-$(CONFIG_PCI) += msix.o msi.o
hw-obj-$(CONFIG_PCI) += shpc.o
hw-obj-$(CONFIG_PCI) += slotid_cap.o
hw-obj-$(CONFIG_PCI) += pci_host.o pcie_host.o
hw-obj-$(CONFIG_PCI) += ioh3420.o xio3130_upstream.o xio3130_downstream.o
hw-obj-y += watchdog.o
@@ -232,28 +192,27 @@ hw-obj-$(CONFIG_EMPTY_SLOT) += empty_slot.o
hw-obj-$(CONFIG_SERIAL) += serial.o
hw-obj-$(CONFIG_PARALLEL) += parallel.o
hw-obj-$(CONFIG_I8254) += i8254_common.o i8254.o
hw-obj-$(CONFIG_PCSPK) += pcspk.o
# Moved back to Makefile.target due to #include qemu-kvm.h:
#hw-obj-$(CONFIG_I8254) += i8254.o
#hw-obj-$(CONFIG_PCSPK) += pcspk.o
hw-obj-$(CONFIG_PCKBD) += pckbd.o
hw-obj-$(CONFIG_USB_UHCI) += usb/hcd-uhci.o
hw-obj-$(CONFIG_USB_OHCI) += usb/hcd-ohci.o
hw-obj-$(CONFIG_USB_EHCI) += usb/hcd-ehci.o
hw-obj-$(CONFIG_USB_XHCI) += usb/hcd-xhci.o
hw-obj-$(CONFIG_USB_UHCI) += usb-uhci.o
hw-obj-$(CONFIG_USB_OHCI) += usb-ohci.o
hw-obj-$(CONFIG_USB_EHCI) += usb-ehci.o
hw-obj-$(CONFIG_FDC) += fdc.o
hw-obj-$(CONFIG_ACPI) += acpi.o acpi_piix4.o
# needs fixes for cpu hotplug, so moved to Makefile.target:
# hw-obj-$(CONFIG_ACPI) += acpi.o acpi_piix4.o
hw-obj-$(CONFIG_APM) += pm_smbus.o apm.o
hw-obj-$(CONFIG_DMA) += dma.o
hw-obj-$(CONFIG_I82374) += i82374.o
hw-obj-$(CONFIG_HPET) += hpet.o
hw-obj-$(CONFIG_APPLESMC) += applesmc.o
hw-obj-$(CONFIG_SMARTCARD) += usb/dev-smartcard-reader.o ccid-card-passthru.o
hw-obj-$(CONFIG_SMARTCARD) += usb-ccid.o ccid-card-passthru.o
hw-obj-$(CONFIG_SMARTCARD_NSS) += ccid-card-emulated.o
hw-obj-$(CONFIG_USB_REDIR) += usb/redirect.o
hw-obj-$(CONFIG_I8259) += i8259_common.o i8259.o
hw-obj-$(CONFIG_USB_REDIR) += usb-redir.o
# PPC devices
hw-obj-$(CONFIG_OPENPIC) += openpic.o
hw-obj-$(CONFIG_PREP_PCI) += prep_pci.o
hw-obj-$(CONFIG_I82378) += i82378.o
# Mac shared devices
hw-obj-$(CONFIG_MACIO) += macio.o
hw-obj-$(CONFIG_CUDA) += cuda.o
@@ -271,8 +230,6 @@ hw-obj-$(CONFIG_PPCE500_PCI) += ppce500_pci.o
# MIPS devices
hw-obj-$(CONFIG_PIIX4) += piix4.o
hw-obj-$(CONFIG_G364FB) += g364fb.o
hw-obj-$(CONFIG_JAZZ_LED) += jazz_led.o
# PCI watchdog devices
hw-obj-$(CONFIG_PCI) += wdt_i6300esb.o
@@ -290,7 +247,6 @@ hw-obj-$(CONFIG_RTL8139_PCI) += rtl8139.o
hw-obj-$(CONFIG_SMC91C111) += smc91c111.o
hw-obj-$(CONFIG_LAN9118) += lan9118.o
hw-obj-$(CONFIG_NE2000_ISA) += ne2000-isa.o
hw-obj-$(CONFIG_OPENCORES_ETH) += opencores_eth.o
# IDE
hw-obj-$(CONFIG_IDE_CORE) += ide/core.o ide/atapi.o
@@ -317,15 +273,12 @@ hw-obj-$(CONFIG_VGA_ISA) += vga-isa.o
hw-obj-$(CONFIG_VGA_ISA_MM) += vga-isa-mm.o
hw-obj-$(CONFIG_VMWARE_VGA) += vmware_vga.o
hw-obj-$(CONFIG_VMMOUSE) += vmmouse.o
hw-obj-$(CONFIG_VGA_CIRRUS) += cirrus_vga.o
hw-obj-$(CONFIG_RC4030) += rc4030.o
hw-obj-$(CONFIG_DP8393X) += dp8393x.o
hw-obj-$(CONFIG_DS1225Y) += ds1225y.o
hw-obj-$(CONFIG_MIPSNET) += mipsnet.o
hw-obj-y += qtest.o
# Sound
sound-obj-y =
sound-obj-$(CONFIG_SB16) += sb16.o
@@ -339,16 +292,13 @@ sound-obj-$(CONFIG_HDA) += intel-hda.o hda-audio.o
adlib.o fmopl.o: QEMU_CFLAGS += -DBUILD_Y8950=0
hw-obj-$(CONFIG_SOUND) += $(sound-obj-y)
9pfs-nested-$(CONFIG_VIRTFS) = virtio-9p.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-local.o virtio-9p-xattr.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-xattr-user.o virtio-9p-posix-acl.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-coth.o cofs.o codir.o cofile.o
9pfs-nested-$(CONFIG_VIRTFS) += coxattr.o virtio-9p-synth.o
9pfs-nested-$(CONFIG_OPEN_BY_HANDLE) += virtio-9p-handle.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-proxy.o
9pfs-nested-$(CONFIG_VIRTFS) = virtio-9p.o virtio-9p-debug.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-local.o virtio-9p-xattr.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-xattr-user.o virtio-9p-posix-acl.o
hw-obj-$(CONFIG_REALLY_VIRTFS) += $(addprefix 9pfs/, $(9pfs-nested-y))
######################################################################
# libdis
# NOTE: the disassembler code is only needed for debugging
@@ -367,28 +317,22 @@ libdis-$(CONFIG_PPC_DIS) += ppc-dis.o
libdis-$(CONFIG_S390_DIS) += s390-dis.o
libdis-$(CONFIG_SH4_DIS) += sh4-dis.o
libdis-$(CONFIG_SPARC_DIS) += sparc-dis.o
libdis-$(CONFIG_LM32_DIS) += lm32-dis.o
######################################################################
# trace
ifeq ($(TRACE_BACKEND),dtrace)
TRACE_H_EXTRA_DEPS=trace-dtrace.h
trace.h: trace.h-timestamp trace-dtrace.h
else
trace.h: trace.h-timestamp
endif
trace.h: trace.h-timestamp $(TRACE_H_EXTRA_DEPS)
trace.h-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=h \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace.h")
trace.h-timestamp: $(SRC_PATH)/trace-events config-host.mak
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool --$(TRACE_BACKEND) -h < $< > $@," GEN trace.h")
@cmp -s $@ trace.h || cp $@ trace.h
trace.c: trace.c-timestamp
trace.c-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=c \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace.c")
trace.c-timestamp: $(SRC_PATH)/trace-events config-host.mak
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool --$(TRACE_BACKEND) -c < $< > $@," GEN trace.c")
@cmp -s $@ trace.c || cp $@ trace.c
trace.o: trace.c $(GENERATED_HEADERS)
@@ -400,43 +344,32 @@ trace-dtrace.h: trace-dtrace.dtrace
# but that gets picked up by QEMU's Makefile as an external dependency
# rule file. So we use '.dtrace' instead
trace-dtrace.dtrace: trace-dtrace.dtrace-timestamp
trace-dtrace.dtrace-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=d \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace-dtrace.dtrace")
trace-dtrace.dtrace-timestamp: $(SRC_PATH)/trace-events config-host.mak
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool --$(TRACE_BACKEND) -d < $< > $@," GEN trace-dtrace.dtrace")
@cmp -s $@ trace-dtrace.dtrace || cp $@ trace-dtrace.dtrace
trace-dtrace.o: trace-dtrace.dtrace $(GENERATED_HEADERS)
$(call quiet-command,dtrace -o $@ -G -s $<, " GEN trace-dtrace.o")
$(call quiet-command,dtrace -o $@ -G -s $<, " GEN trace-dtrace.o")
ifeq ($(LIBTOOL),)
trace-dtrace.lo: trace-dtrace.dtrace
@echo "missing libtool. please install and rerun configure."; exit 1
else
trace-dtrace.lo: trace-dtrace.dtrace
$(call quiet-command,$(LIBTOOL) --mode=compile --tag=CC dtrace -o $@ -G -s $<, " lt GEN trace-dtrace.o")
$(call quiet-command,libtool --mode=compile --tag=CC dtrace -o $@ -G -s $<, " lt GEN trace-dtrace.o")
endif
trace/simple.o: trace/simple.c $(GENERATED_HEADERS)
simpletrace.o: simpletrace.c $(GENERATED_HEADERS)
trace-obj-$(CONFIG_TRACE_DTRACE) += trace-dtrace.o
ifneq ($(TRACE_BACKEND),dtrace)
ifeq ($(TRACE_BACKEND),dtrace)
trace-obj-y = trace-dtrace.o
else
trace-obj-y = trace.o
ifeq ($(TRACE_BACKEND),simple)
trace-obj-y += simpletrace.o
user-obj-y += qemu-timer-common.o
endif
endif
trace-nested-$(CONFIG_TRACE_DEFAULT) += default.o
trace-nested-$(CONFIG_TRACE_SIMPLE) += simple.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += qemu-timer-common.o
trace-nested-$(CONFIG_TRACE_STDERR) += stderr.o
trace-nested-y += control.o
trace-obj-y += $(addprefix trace/, $(trace-nested-y))
$(trace-obj-y): $(GENERATED_HEADERS)
######################################################################
# smartcard
@@ -446,30 +379,12 @@ libcacard-y = cac.o event.o vcard.o vreader.o vcard_emul_nss.o vcard_emul_type.o
######################################################################
# qapi
qapi-nested-y = qapi-visit-core.o qapi-dealloc-visitor.o qmp-input-visitor.o
qapi-nested-y += qmp-output-visitor.o qmp-registry.o qmp-dispatch.o
qapi-nested-y += string-input-visitor.o string-output-visitor.o
qapi-nested-y = qapi-visit-core.o qmp-input-visitor.o qmp-output-visitor.o qapi-dealloc-visitor.o
qapi-nested-y += qmp-registry.o qmp-dispatch.o
qapi-obj-y = $(addprefix qapi/, $(qapi-nested-y))
common-obj-y += qmp-marshal.o qapi-visit.o qapi-types.o
common-obj-y += qmp.o hmp.o
universal-obj-y += $(qapi-obj-y)
######################################################################
# guest agent
qga-nested-y = commands.o guest-agent-command-state.o
qga-nested-$(CONFIG_POSIX) += commands-posix.o channel-posix.o
qga-nested-$(CONFIG_WIN32) += commands-win32.o channel-win32.o service-win32.o
qga-obj-y = $(addprefix qga/, $(qga-nested-y))
qga-obj-y += qemu-ga.o module.o
qga-obj-$(CONFIG_WIN32) += oslib-win32.o
qga-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-sockets.o qemu-option.o
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
QEMU_CFLAGS+=$(GLIB_CFLAGS)
vl.o: QEMU_CFLAGS+=$(GLIB_CFLAGS)


@@ -22,24 +22,19 @@ QEMU_CFLAGS += -I.. -I$(TARGET_PATH) -DNEED_CPU_H
include $(SRC_PATH)/Makefile.objs
QEMU_CFLAGS+=-I$(SRC_PATH)/include
ifdef CONFIG_USER_ONLY
# user emulator name
QEMU_PROG=qemu-$(TARGET_ARCH2)
else
# system emulator name
ifneq (,$(findstring -mwindows,$(LIBS)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_ARCH2)w$(EXESUF)
endif # windows executable
ifeq ($(TARGET_ARCH), i386)
QEMU_PROG=qemu$(EXESUF)
else
QEMU_PROG=qemu-system-$(TARGET_ARCH2)$(EXESUF)
endif
endif
PROGS=$(QEMU_PROG)
ifdef QEMU_PROGW
PROGS+=$(QEMU_PROGW)
endif
STPFILES=
ifndef CONFIG_HAIKU
@@ -49,7 +44,7 @@ endif
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
ifdef CONFIG_SYSTEMTAP_TRACE
stap: $(QEMU_PROG).stp
ifdef CONFIG_USER_ONLY
@@ -58,14 +53,13 @@ else
TARGET_TYPE=system
endif
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backend=$(TRACE_BACKEND) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-arch=$(TARGET_ARCH) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(QEMU_PROG).stp")
$(QEMU_PROG).stp:
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool \
--$(TRACE_BACKEND) \
--binary $(bindir)/$(QEMU_PROG) \
--target-arch $(TARGET_ARCH) \
--target-type $(TARGET_TYPE) \
--stap < $(SRC_PATH)/trace-events > $(QEMU_PROG).stp," GEN $(QEMU_PROG).stp")
else
stap:
endif
@@ -77,42 +71,36 @@ all: $(PROGS) stap
#########################################################
# cpu emulator library
libobj-y = exec.o translate-all.o cpu-exec.o translate.o
libobj-y += tcg/tcg.o tcg/optimize.o
libobj-$(CONFIG_TCG_INTERPRETER) += tci.o
libobj-y = exec.o cpu-exec.o
libobj-$(CONFIG_NO_CPU_EMULATION) += fake-exec.o
libobj-$(CONFIG_CPU_EMULATION) += translate-all.o translate.o
libobj-$(CONFIG_CPU_EMULATION) += tcg/tcg.o
libobj-y += fpu/softfloat.o
ifneq ($(TARGET_BASE_ARCH), sparc)
ifneq ($(TARGET_BASE_ARCH), alpha)
libobj-y += op_helper.o
libobj-y += op_helper.o helper.o
ifeq ($(TARGET_BASE_ARCH), i386)
libobj-y += cpuid.o
endif
endif
libobj-y += helper.o
ifneq ($(TARGET_BASE_ARCH), ppc)
libobj-y += cpu.o
endif
libobj-$(TARGET_SPARC64) += vis_helper.o
libobj-$(CONFIG_NEED_MMU) += mmu.o
libobj-$(CONFIG_KVM) += kvm-tpr-opt.o
libobj-$(TARGET_ARM) += neon_helper.o iwmmxt_helper.o
ifeq ($(TARGET_BASE_ARCH), sparc)
libobj-y += fop_helper.o cc_helper.o win_helper.o mmu_helper.o ldst_helper.o
endif
libobj-$(TARGET_SPARC) += int32_helper.o
libobj-$(TARGET_SPARC64) += int64_helper.o
libobj-$(TARGET_ALPHA) += int_helper.o fpu_helper.o sys_helper.o mem_helper.o
libobj-y += disas.o
libobj-$(CONFIG_TCI_DIS) += tci-dis.o
tci-dis.o: QEMU_CFLAGS += -I$(SRC_PATH)/tcg -I$(SRC_PATH)/tcg/tci
$(libobj-y): $(GENERATED_HEADERS)
# HELPER_CFLAGS is used for all the legacy code compiled with static register
# libqemu
translate.o: translate.c cpu.h
translate-all.o: translate-all.c cpu.h
tcg/tcg.o: cpu.h
# HELPER_CFLAGS is used for all the code compiled with static register
# variables
ifneq ($(TARGET_BASE_ARCH), sparc)
op_helper.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
endif
user-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
op_helper.o user-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
# Note: this is a workaround. The real fix is to avoid compiling
# cpu_signal_handler() in user-exec.c.
@@ -128,7 +116,7 @@ $(call set-vpath, $(SRC_PATH)/linux-user:$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y = main.o syscall.o strace.o mmap.o signal.o thunk.o \
elfload.o linuxload.o uaccess.o gdbstub.o cpu-uname.o \
user-exec.o $(oslib-obj-y)
qemu-malloc.o user-exec.o $(oslib-obj-y)
obj-$(TARGET_HAS_BFLT) += flatload.o
@@ -145,13 +133,39 @@ obj-m68k-y += m68k-sim.o m68k-semi.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(universal-obj-y))
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_LINUX_USER
#########################################################
# Darwin user emulator target
ifdef CONFIG_DARWIN_USER
$(call set-vpath, $(SRC_PATH)/darwin-user)
QEMU_CFLAGS+=-I$(SRC_PATH)/darwin-user -I$(SRC_PATH)/darwin-user/$(TARGET_ARCH)
# Leave some space for the regular program loading zone
LDFLAGS+=-Wl,-segaddr,__STD_PROG_ZONE,0x1000 -image_base 0x0e000000
LIBS+=-lmx
obj-y = main.o commpage.o machload.o mmap.o signal.o syscall.o thunk.o \
gdbstub.o user-exec.o
obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_DARWIN_USER
#########################################################
# BSD user emulator target
@@ -168,7 +182,6 @@ obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(universal-obj-y))
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
@@ -179,33 +192,31 @@ endif #CONFIG_BSD_USER
# System emulator target
ifdef CONFIG_SOFTMMU
obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o ioport.o
obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o vl.o balloon.o
# virtio has to be here due to weird dependency between PCI and virtio-net.
# need to fix this properly
obj-$(CONFIG_NO_PCI) += pci-stub.o
obj-$(CONFIG_PCI) += pci.o
obj-$(CONFIG_VIRTIO) += virtio.o virtio-blk.o virtio-balloon.o virtio-net.o virtio-serial-bus.o
obj-$(CONFIG_VIRTIO) += virtio-scsi.o
obj-y += vhost_net.o
obj-$(CONFIG_VHOST_NET) += vhost.o
obj-$(CONFIG_REALLY_VIRTFS) += 9pfs/virtio-9p-device.o
obj-y += rwhandler.o
obj-$(CONFIG_KVM) += kvm.o kvm-all.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-$(CONFIG_VGA) += vga.o
obj-y += memory.o savevm.o cputlb.o
LIBS+=-lz
obj-i386-$(CONFIG_KVM) += hyperv.o
QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
QEMU_CFLAGS += $(VNC_SASL_CFLAGS)
QEMU_CFLAGS += $(VNC_JPEG_CFLAGS)
QEMU_CFLAGS += $(VNC_PNG_CFLAGS)
QEMU_CFLAGS += $(GLIB_CFLAGS)
# xen support
obj-$(CONFIG_XEN) += xen-all.o xen_machine_pv.o xen_domainbuild.o xen-mapcache.o
obj-$(CONFIG_NO_XEN) += xen-stub.o
obj-i386-$(CONFIG_XEN) += xen_platform.o xen_apic.o
obj-i386-$(CONFIG_XEN) += xen_platform.o
# Inter-VM PCI shared memory
CONFIG_IVSHMEM =
@@ -216,45 +227,55 @@ ifeq ($(CONFIG_KVM), y)
endif
obj-$(CONFIG_IVSHMEM) += ivshmem.o
# Generic hotplugging
obj-y += device-hotplug.o
# Hardware support
obj-i386-y += mc146818rtc.o pc.o
obj-i386-y += apic_common.o apic.o kvmvapic.o
obj-i386-y += sga.o ioapic_common.o ioapic.o piix_pci.o
obj-i386-y += vga.o
obj-i386-y += mc146818rtc.o i8259.o pc.o
obj-i386-y += cirrus_vga.o sga.o apic.o ioapic.o piix_pci.o
obj-i386-y += vmport.o
obj-i386-y += pci-hotplug.o smbios.o wdt_ib700.o
obj-i386-y += device-hotplug.o pci-hotplug.o smbios.o wdt_ib700.o
obj-i386-y += extboot.o
obj-i386-y += debugcon.o multiboot.o
obj-i386-y += pc_piix.o
obj-i386-y += pc_sysfw.o
obj-i386-$(CONFIG_KVM) += kvm/clock.o kvm/apic.o kvm/i8259.o kvm/ioapic.o kvm/i8254.o
obj-i386-$(CONFIG_KVM) += kvmclock.o
obj-i386-$(CONFIG_SPICE) += qxl.o qxl-logger.o qxl-render.o
obj-i386-y += testdev.o
obj-i386-y += acpi.o acpi_piix4.o
obj-i386-y += pcspk.o i8254.o
obj-i386-$(CONFIG_KVM_PIT) += i8254-kvm.o
obj-i386-$(CONFIG_KVM_DEVICE_ASSIGNMENT) += device-assignment.o
# Hardware support
obj-ia64-y += ide.o pckbd.o vga.o $(SOUND_HW) dma.o $(AUDIODRV)
obj-ia64-y += fdc.o mc146818rtc.o serial.o i8259.o ipf.o
obj-ia64-y += cirrus_vga.o parallel.o acpi.o piix_pci.o
obj-ia64-y += usb-uhci.o
obj-ia64-$(CONFIG_KVM_DEVICE_ASSIGNMENT) += device-assignment.o
# shared objects
obj-ppc-y = ppc.o ppc_booke.o
obj-ppc-y = ppc.o
obj-ppc-y += vga.o
# PREP target
obj-ppc-y += mc146818rtc.o
obj-ppc-y += i8259.o mc146818rtc.o
obj-ppc-y += ppc_prep.o
# OldWorld PowerMac
obj-ppc-y += ppc_oldworld.o
# NewWorld PowerMac
obj-ppc-y += ppc_newworld.o
# IBM pSeries (sPAPR)
obj-ppc-$(CONFIG_PSERIES) += spapr.o spapr_hcall.o spapr_rtas.o spapr_vio.o
obj-ppc-$(CONFIG_PSERIES) += xics.o spapr_vty.o spapr_llan.o spapr_vscsi.o
obj-ppc-$(CONFIG_PSERIES) += spapr_pci.o device-hotplug.o pci-hotplug.o
ifeq ($(CONFIG_FDT)$(TARGET_PPC64),yy)
obj-ppc-y += spapr.o spapr_hcall.o spapr_rtas.o spapr_vio.o
obj-ppc-y += xics.o spapr_vty.o spapr_llan.o spapr_vscsi.o
endif
# PowerPC 4xx boards
obj-ppc-y += ppc4xx_devs.o ppc4xx_pci.o ppc405_uc.o ppc405_boards.o
obj-ppc-y += ppc440_bamboo.o
obj-ppc-y += ppc440.o ppc440_bamboo.o
# PowerPC E500 boards
obj-ppc-y += ppce500_mpc8544ds.o mpc8544_guts.o ppce500_spin.o
obj-ppc-y += ppce500_mpc8544ds.o mpc8544_guts.o
# PowerPC 440 Xilinx ML507 reference board.
obj-ppc-y += virtex_ml507.o
obj-ppc-$(CONFIG_KVM) += kvm_ppc.o
obj-ppc-$(CONFIG_FDT) += device_tree.o
# PowerPC OpenPIC
obj-ppc-y += openpic.o
# Xilinx PPC peripherals
obj-ppc-y += xilinx_intc.o
@@ -285,13 +306,17 @@ obj-lm32-y += milkymist-vgafb.o
obj-lm32-y += framebuffer.o
obj-mips-y = mips_r4k.o mips_jazz.o mips_malta.o mips_mipssim.o
obj-mips-y += pcspk.o i8254.o
obj-mips-y += acpi.o acpi_piix4.o
obj-mips-y += mips_addr.o mips_timer.o mips_int.o
obj-mips-y += vga.o i8259.o
obj-mips-y += g364fb.o jazz_led.o
obj-mips-y += gt64xxx.o mc146818rtc.o
obj-mips-y += cirrus_vga.o
obj-mips-$(CONFIG_FULONG) += bonito.o vt82c686.o mips_fulong2e.o
obj-microblaze-y = petalogix_s3adsp1800_mmu.o
obj-microblaze-y += petalogix_ml605_mmu.o
obj-microblaze-y += microblaze_boot.o
obj-microblaze-y += microblaze_pic_cpu.o
obj-microblaze-y += xilinx_intc.o
@@ -306,6 +331,7 @@ obj-microblaze-$(CONFIG_FDT) += device_tree.o
# Boards
obj-cris-y = cris_pic_cpu.o
obj-cris-y += cris-boot.o
obj-cris-y += etraxfs.o axis_dev88.o
obj-cris-y += axis_dev88.o
# IO blocks
@@ -317,7 +343,9 @@ obj-cris-y += etraxfs_ser.o
ifeq ($(TARGET_ARCH), sparc64)
obj-sparc-y = sun4u.o apb_pci.o
obj-sparc-y += vga.o
obj-sparc-y += mc146818rtc.o
obj-sparc-y += cirrus_vga.o
else
obj-sparc-y = sun4m.o lance.o tcx.o sun4m_iommu.o slavio_intctl.o
obj-sparc-y += slavio_timer.o slavio_misc.o sparc32_dma.o
@@ -330,22 +358,9 @@ endif
obj-arm-y = integratorcp.o versatilepb.o arm_pic.o arm_timer.o
obj-arm-y += arm_boot.o pl011.o pl031.o pl050.o pl080.o pl110.o pl181.o pl190.o
obj-arm-y += versatile_pci.o
obj-arm-y += versatile_i2c.o
obj-arm-y += cadence_uart.o
obj-arm-y += cadence_ttc.o
obj-arm-y += cadence_gem.o
obj-arm-y += xilinx_zynq.o zynq_slcr.o
obj-arm-y += arm_gic.o
obj-arm-y += realview_gic.o realview.o arm_sysctl.o arm11mpcore.o a9mpcore.o
obj-arm-y += exynos4210_gic.o exynos4210_combiner.o exynos4210.o
obj-arm-y += exynos4_boards.o exynos4210_uart.o exynos4210_pwm.o
obj-arm-y += exynos4210_pmu.o exynos4210_mct.o exynos4210_fimd.o
obj-arm-y += arm_l2x0.o
obj-arm-y += arm_mptimer.o a15mpcore.o
obj-arm-y += armv7m.o armv7m_nvic.o stellaris.o pl022.o stellaris_enet.o
obj-arm-y += highbank.o
obj-arm-y += pl061.o
obj-arm-y += xgmac.o
obj-arm-y += arm-semi.o
obj-arm-y += pxa2xx.o pxa2xx_pic.o pxa2xx_gpio.o pxa2xx_timer.o pxa2xx_dma.o
obj-arm-y += pxa2xx_lcd.o pxa2xx_mmci.o pxa2xx_pcmcia.o pxa2xx_keypad.o
@@ -356,16 +371,16 @@ obj-arm-y += omap1.o omap_lcdc.o omap_dma.o omap_clk.o omap_mmc.o omap_i2c.o \
obj-arm-y += omap2.o omap_dss.o soc_dma.o omap_gptimer.o omap_synctimer.o \
omap_gpmc.o omap_sdrc.o omap_spi.o omap_tap.o omap_l4.o
obj-arm-y += omap_sx1.o palm.o tsc210x.o
obj-arm-y += nseries.o blizzard.o onenand.o cbus.o tusb6010.o usb/hcd-musb.o
obj-arm-y += nseries.o blizzard.o onenand.o vga.o cbus.o tusb6010.o usb-musb.o
obj-arm-y += mst_fpga.o mainstone.o
obj-arm-y += z2.o
obj-arm-y += musicpal.o bitbang_i2c.o marvell_88w8618_audio.o
obj-arm-y += framebuffer.o
obj-arm-y += syborg.o syborg_fb.o syborg_interrupt.o syborg_keyboard.o
obj-arm-y += syborg_serial.o syborg_timer.o syborg_pointer.o syborg_rtc.o
obj-arm-y += syborg_virtio.o
obj-arm-y += vexpress.o
obj-arm-y += strongarm.o
obj-arm-y += collie.o
obj-arm-y += pl041.o lm4549.o
obj-arm-$(CONFIG_FDT) += device_tree.o
obj-sh4-y = shix.o r2d.o sh7750.o sh7750_regnames.o tc58128.o
obj-sh4-y += sh_timer.o sh_serial.o sh_intc.o sh_pci.o sm501.o
@@ -376,52 +391,39 @@ obj-m68k-y += m68k-semi.o dummy_m68k.o
obj-s390x-y = s390-virtio-bus.o s390-virtio.o
obj-alpha-y = mc146818rtc.o
obj-alpha-y += alpha_pci.o alpha_dp264.o alpha_typhoon.o
obj-alpha-y = i8259.o mc146818rtc.o
obj-alpha-y += vga.o cirrus_vga.o
obj-xtensa-y += xtensa_pic.o
obj-xtensa-y += xtensa_sim.o
obj-xtensa-y += xtensa_lx60.o
obj-xtensa-y += xtensa-semi.o
obj-xtensa-y += core-dc232b.o
obj-xtensa-y += core-dc233c.o
obj-xtensa-y += core-fsf.o
ifeq ($(TARGET_ARCH), ia64)
firmware.o: firmware.c
$(CC) $(HELPER_CFLAGS) $(CPPFLAGS) $(BASE_CFLAGS) -c -o $@ $<
endif
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
monitor.o: hmp-commands.h qmp-commands-old.h
monitor.o: hmp-commands.h qmp-commands.h
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(universal-obj-y))
obj-y += $(addprefix ../, $(common-obj-y))
obj-y += $(addprefix ../libdis/, $(libdis-y))
obj-y += $(libobj-y)
obj-y += $(addprefix $(HWDIR)/, $(hw-obj-y))
obj-y += $(addprefix ../, $(trace-obj-y))
endif # CONFIG_SOFTMMU
ifndef CONFIG_LINUX_USER
ifndef CONFIG_BSD_USER
# libcacard needs qemu-thread support, and besides is only needed by devices
# so not requires with linux-user / bsd-user targets
# so not requires with linux-user targets
obj-$(CONFIG_SMARTCARD_NSS) += $(addprefix ../libcacard/, $(libcacard-y))
endif # CONFIG_BSD_USER
endif # CONFIG_LINUX_USER
obj-y += $(addprefix ../, $(trace-obj-y))
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
ifdef QEMU_PROGW
# The linker builds a windows executable. Make also a console executable.
$(QEMU_PROGW): $(obj-y) $(obj-$(TARGET_BASE_ARCH)-y)
$(call LINK,$^)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
else
$(QEMU_PROG): $(obj-y) $(obj-$(TARGET_BASE_ARCH)-y)
$(call LINK,$^)
endif
$(call LINK,$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y))
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
@@ -429,14 +431,14 @@ gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
qmp-commands.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean:
rm -f *.o *.a *~ $(PROGS) nwfpe/*.o fpu/*.o
rm -f *.d */*.d tcg/*.o ide/*.o 9pfs/*.o kvm/*.o
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.d */*.d tcg/*.o ide/*.o 9pfs/*.o
rm -f hmp-commands.h qmp-commands.h gdbstub-xml.c
ifdef CONFIG_SYSTEMTAP_TRACE
rm -f *.stp
endif
@@ -447,9 +449,9 @@ ifneq ($(STRIP),)
$(STRIP) $(patsubst %,"$(DESTDIR)$(bindir)/%",$(PROGS))
endif
endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
ifdef CONFIG_SYSTEMTAP_TRACE
$(INSTALL_DIR) "$(DESTDIR)$(datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(datadir)/../systemtap/tapset"
endif
# Include automatically generated dependency files


@@ -9,7 +9,6 @@ include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
include $(SRC_PATH)/Makefile.objs
@@ -18,9 +17,7 @@ all: $(user-obj-y)
@true
clean:
for d in . trace; do \
rm -f $$d/*.o $$d/*.d $$d/*.a $$d/*~; \
done
rm -f *.o *.d *.a *~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

QMP/qmp

@@ -1,126 +0,0 @@
#!/usr/bin/python
#
# QMP command line tool
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPLv2 or later.
# See the COPYING file in the top-level directory.
import sys, os
from qmp import QEMUMonitorProtocol
def print_response(rsp, prefix=[]):
if type(rsp) == list:
i = 0
for item in rsp:
if prefix == []:
prefix = ['item']
print_response(item, prefix[:-1] + ['%s[%d]' % (prefix[-1], i)])
i += 1
elif type(rsp) == dict:
for key in rsp.keys():
print_response(rsp[key], prefix + [key])
else:
if len(prefix):
print '%s: %s' % ('.'.join(prefix), rsp)
else:
print '%s' % (rsp)
def main(args):
path = None
# Use QMP_PATH if it's set
if os.environ.has_key('QMP_PATH'):
path = os.environ['QMP_PATH']
while len(args):
arg = args[0]
if arg.startswith('--'):
arg = arg[2:]
if arg.find('=') == -1:
value = True
else:
arg, value = arg.split('=', 1)
if arg in ['path']:
if type(value) == str:
path = value
elif arg in ['help']:
os.execlp('man', 'man', 'qmp')
else:
print 'Unknown argument "%s"' % arg
args = args[1:]
else:
break
if not path:
print "QMP path isn't set, use --path=qmp-monitor-address or set QMP_PATH"
return 1
if len(args):
command, args = args[0], args[1:]
else:
print 'No command found'
print 'Usage: "qmp [--path=qmp-monitor-address] qmp-cmd arguments"'
return 1
if command in ['help']:
os.execlp('man', 'man', 'qmp')
srv = QEMUMonitorProtocol(path)
srv.connect()
def do_command(srv, cmd, **kwds):
rsp = srv.cmd(cmd, kwds)
if rsp.has_key('error'):
raise Exception(rsp['error']['desc'])
return rsp['return']
commands = map(lambda x: x['name'], do_command(srv, 'query-commands'))
srv.close()
if command not in commands:
fullcmd = 'qmp-%s' % command
try:
os.environ['QMP_PATH'] = path
os.execvp(fullcmd, [fullcmd] + args)
except OSError, (errno, msg):
if errno == 2:
print 'Command "%s" not found.' % (fullcmd)
return 1
raise
return 0
srv = QEMUMonitorProtocol(path)
srv.connect()
arguments = {}
for arg in args:
if not arg.startswith('--'):
print 'Unknown argument "%s"' % arg
return 1
arg = arg[2:]
if arg.find('=') == -1:
value = True
else:
arg, value = arg.split('=', 1)
if arg in ['help']:
os.execlp('man', 'man', 'qmp-%s' % command)
return 1
arguments[arg] = value
rsp = do_command(srv, command, **arguments)
print_response(rsp)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))


@@ -26,24 +26,6 @@ Example:
Note: If action is "stop", a STOP event will eventually follow the
BLOCK_IO_ERROR event.
DEVICE_TRAY_MOVED
-----------------
It's emitted whenever the tray of a removable device is moved by the guest
or by HMP/QMP commands.
Data:
- "device": device name (json-string)
- "tray-open": true if the tray has been opened or false if it has been closed
(json-bool)
{ "event": "DEVICE_TRAY_MOVED",
"data": { "device": "ide1-cd0",
"tray-open": true
},
"timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
RESET
-----
@@ -144,7 +126,7 @@ the authentication ID is not provided.
VNC_DISCONNECTED
----------------
Emitted when the connection is closed.
Emitted when the conection is closed.
Data:
@@ -282,56 +264,3 @@ Example:
Note: If action is "reset", "shutdown", or "pause" the WATCHDOG event is
followed respectively by the RESET, SHUTDOWN, or STOP events.
BLOCK_JOB_COMPLETED
-------------------
Emitted when a block job has completed.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that streaming
has failed and clients should not try to interpret the error
string.
Example:
{ "event": "BLOCK_JOB_COMPLETED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 10737418240,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
BLOCK_JOB_CANCELLED
-------------------
Emitted when a block job has been cancelled.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
Example:
{ "event": "BLOCK_JOB_CANCELLED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 134217728,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }


@@ -209,27 +209,13 @@ incompatible way are disabled by default and will be advertised by the
capabilities array (section '2.2 Server Greeting'). Thus, Clients can check
that array and enable the capabilities they support.
The QMP Server performs a type check on the arguments to a command. It
generates an error if a value does not have the expected type for its
key, or if it does not understand a key that the Client included. The
strictness of the Server catches wrong assumptions of Clients about
the Server's schema. Clients can assume that, when such validation
errors occur, they will be reported before the command generated any
side effect.
Additionally, Clients must not assume any particular:
However, Clients must not assume any particular:
- Length of json-arrays
- Size of json-objects; in particular, future versions of QEMU may add
new keys and Clients should be able to ignore them.
- Size of json-objects or length of json-arrays
- Order of json-object members or json-array elements
- Amount of errors generated by a command, that is, new errors can be added
to any existing command in newer versions of the Server
Of course, the Server does guarantee to send valid JSON. But apart from
this, a Client should be "conservative in what they send, and liberal in
what they accept".
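
As a concrete illustration of that guidance, the hypothetical Python sketch below (reusing the QEMUMonitorProtocol helper found elsewhere in this compare, and assuming a QMP socket path in QMP_SOCKET) checks only the documented 'error'/'return' keys and silently ignores any extra keys a future Server might add:

#!/usr/bin/python
# Hypothetical sketch of a "conservative" QMP client.
import os
from qmp import QEMUMonitorProtocol

srv = QEMUMonitorProtocol(os.environ['QMP_SOCKET'])
srv.connect()
rsp = srv.cmd('query-version', {})      # empty argument dict
if 'error' in rsp:
    # rely only on the documented error description field
    raise Exception(rsp['error']['desc'])
ver = rsp['return']['qemu']
# read only the keys we know about; anything else in the reply is ignored
print 'QEMU %d.%d.%d' % (ver['major'], ver['minor'], ver['micro'])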
6. Downstream extension of QMP
------------------------------


@@ -128,12 +128,6 @@ class QEMUMonitorProtocol:
qmp_cmd['id'] = id
return self.cmd_obj(qmp_cmd)
def command(self, cmd, **kwds):
ret = self.cmd(cmd, kwds)
if ret.has_key('error'):
raise Exception(ret['error']['desc'])
return ret['return']
def get_events(self, wait=False):
"""
Get a list of available QMP events.
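
The hunk above drops the command() convenience wrapper from QEMUMonitorProtocol. Callers left with only the lower-level cmd() method can keep an equivalent helper locally; a minimal, hypothetical stand-in is:

# Hypothetical helper equivalent to the removed command() wrapper,
# built only on the cmd() method both versions provide.
def qmp_command(srv, name, **kwds):
    rsp = srv.cmd(name, kwds)
    if 'error' in rsp:
        raise Exception(rsp['error']['desc'])
    return rsp['return']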


@@ -1,138 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2012
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import fuse, stat
from fuse import Fuse
import os, posix
from errno import *
from qmp import QEMUMonitorProtocol
fuse.fuse_python_api = (0, 2)
class QOMFS(Fuse):
def __init__(self, qmp, *args, **kwds):
Fuse.__init__(self, *args, **kwds)
self.qmp = qmp
self.qmp.connect()
self.ino_map = {}
self.ino_count = 1
def get_ino(self, path):
if self.ino_map.has_key(path):
return self.ino_map[path]
self.ino_map[path] = self.ino_count
self.ino_count += 1
return self.ino_map[path]
def is_object(self, path):
try:
items = self.qmp.command('qom-list', path=path)
return True
except:
return False
def is_property(self, path):
try:
path, prop = path.rsplit('/', 1)
for item in self.qmp.command('qom-list', path=path):
if item['name'] == prop:
return True
return False
except:
return False
def is_link(self, path):
try:
path, prop = path.rsplit('/', 1)
for item in self.qmp.command('qom-list', path=path):
if item['name'] == prop:
if item['type'].startswith('link<'):
return True
return False
return False
except:
return False
def read(self, path, length, offset):
if not self.is_property(path):
return -ENOENT
path, prop = path.rsplit('/', 1)
try:
data = str(self.qmp.command('qom-get', path=path, property=prop))
data += '\n' # make values shell friendly
except:
return -EPERM
if offset > len(data):
return ''
return str(data[offset:][:length])
def readlink(self, path):
if not self.is_link(path):
return False
path, prop = path.rsplit('/', 1)
prefix = '/'.join(['..'] * (len(path.split('/')) - 1))
return prefix + str(self.qmp.command('qom-get', path=path,
property=prop))
def getattr(self, path):
if self.is_link(path):
value = posix.stat_result((0755 | stat.S_IFLNK,
self.get_ino(path),
0,
2,
1000,
1000,
4096,
0,
0,
0))
elif self.is_object(path):
value = posix.stat_result((0755 | stat.S_IFDIR,
self.get_ino(path),
0,
2,
1000,
1000,
4096,
0,
0,
0))
elif self.is_property(path):
value = posix.stat_result((0644 | stat.S_IFREG,
self.get_ino(path),
0,
1,
1000,
1000,
4096,
0,
0,
0))
else:
value = -ENOENT
return value
def readdir(self, path, offset):
yield fuse.Direntry('.')
yield fuse.Direntry('..')
for item in self.qmp.command('qom-list', path=path):
yield fuse.Direntry(str(item['name']))
if __name__ == '__main__':
import sys, os
fs = QOMFS(QEMUMonitorProtocol(os.environ['QMP_SOCKET']))
fs.main(sys.argv)


@@ -1,67 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import sys
import os
from qmp import QEMUMonitorProtocol
cmd, args = sys.argv[0], sys.argv[1:]
socket_path = None
path = None
prop = None
def usage():
return '''environment variables:
QMP_SOCKET=<path | addr:port>
usage:
%s [-h] [-s <QMP socket path | addr:port>] <path>.<property>
''' % cmd
def usage_error(error_msg = "unspecified error"):
sys.stderr.write('%s\nERROR: %s\n' % (usage(), error_msg))
exit(1)
if len(args) > 0:
if args[0] == "-h":
print usage()
exit(0);
elif args[0] == "-s":
try:
socket_path = args[1]
except:
usage_error("missing argument: QMP socket path or address");
args = args[2:]
if not socket_path:
if os.environ.has_key('QMP_SOCKET'):
socket_path = os.environ['QMP_SOCKET']
else:
usage_error("no QMP socket path or address given");
if len(args) > 0:
try:
path, prop = args[0].rsplit('.', 1)
except:
usage_error("invalid format for path/property/value")
else:
usage_error("not enough arguments")
srv = QEMUMonitorProtocol(socket_path)
srv.connect()
rsp = srv.command('qom-get', path=path, property=prop)
if type(rsp) == dict:
for i in rsp.keys():
print '%s: %s' % (i, rsp[i])
else:
print rsp


@@ -1,64 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import sys
import os
from qmp import QEMUMonitorProtocol
cmd, args = sys.argv[0], sys.argv[1:]
socket_path = None
path = None
prop = None
def usage():
return '''environment variables:
QMP_SOCKET=<path | addr:port>
usage:
%s [-h] [-s <QMP socket path | addr:port>] [<path>]
''' % cmd
def usage_error(error_msg = "unspecified error"):
sys.stderr.write('%s\nERROR: %s\n' % (usage(), error_msg))
exit(1)
if len(args) > 0:
if args[0] == "-h":
print usage()
exit(0);
elif args[0] == "-s":
try:
socket_path = args[1]
except:
usage_error("missing argument: QMP socket path or address");
args = args[2:]
if not socket_path:
if os.environ.has_key('QMP_SOCKET'):
socket_path = os.environ['QMP_SOCKET']
else:
usage_error("no QMP socket path or address given");
srv = QEMUMonitorProtocol(socket_path)
srv.connect()
if len(args) == 0:
print '/'
sys.exit(0)
for item in srv.command('qom-list', path=args[0]):
if item['type'].startswith('child<'):
print '%s/' % item['name']
elif item['type'].startswith('link<'):
print '@%s/' % item['name']
else:
print '%s' % item['name']


@@ -1,64 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import sys
import os
from qmp import QEMUMonitorProtocol
cmd, args = sys.argv[0], sys.argv[1:]
socket_path = None
path = None
prop = None
value = None
def usage():
return '''environment variables:
QMP_SOCKET=<path | addr:port>
usage:
%s [-h] [-s <QMP socket path | addr:port>] <path>.<property> <value>
''' % cmd
def usage_error(error_msg = "unspecified error"):
sys.stderr.write('%s\nERROR: %s\n' % (usage(), error_msg))
exit(1)
if len(args) > 0:
if args[0] == "-h":
print usage()
exit(0);
elif args[0] == "-s":
try:
socket_path = args[1]
except:
usage_error("missing argument: QMP socket path or address");
args = args[2:]
if not socket_path:
if os.environ.has_key('QMP_SOCKET'):
socket_path = os.environ['QMP_SOCKET']
else:
usage_error("no QMP socket path or address given");
if len(args) > 1:
try:
path, prop = args[0].rsplit('.', 1)
except:
usage_error("invalid format for path/property/value")
value = args[1]
else:
usage_error("not enough arguments")
srv = QEMUMonitorProtocol(socket_path)
srv.connect()
print srv.command('qom-set', path=path, property=prop, value=sys.argv[2])

README

@@ -1,3 +1,3 @@
Read the documentation in qemu-doc.html or on http://wiki.qemu.org
Read the documentation in qemu-doc.html.
- QEMU team
Fabrice Bellard.


@@ -1 +1 @@
1.1.2
0.15.1


@@ -151,7 +151,7 @@ struct external_lineno {
#define E_FILNMLEN 14 /* # characters in a file name */
#define E_DIMNUM 4 /* # array dimensions in auxiliary entry */
struct QEMU_PACKED external_syment
struct __attribute__((packed)) external_syment
{
union {
char e_name[E_SYMNMLEN];

acl.c

@@ -55,8 +55,8 @@ qemu_acl *qemu_acl_init(const char *aclname)
if (acl)
return acl;
acl = g_malloc(sizeof(*acl));
acl->aclname = g_strdup(aclname);
acl = qemu_malloc(sizeof(*acl));
acl->aclname = qemu_strdup(aclname);
/* Deny by default, so there is no window of "open
* access" between QEMU starting, and the user setting
* up ACLs in the monitor */
@@ -65,7 +65,7 @@ qemu_acl *qemu_acl_init(const char *aclname)
acl->nentries = 0;
QTAILQ_INIT(&acl->entries);
acls = g_realloc(acls, sizeof(*acls) * (nacls +1));
acls = qemu_realloc(acls, sizeof(*acls) * (nacls +1));
acls[nacls] = acl;
nacls++;
@@ -95,13 +95,13 @@ int qemu_acl_party_is_allowed(qemu_acl *acl,
void qemu_acl_reset(qemu_acl *acl)
{
qemu_acl_entry *entry, *next_entry;
qemu_acl_entry *entry;
/* Put back to deny by default, so there is no window
* of "open access" while the user re-initializes the
* access control list */
acl->defaultDeny = 1;
QTAILQ_FOREACH_SAFE(entry, &acl->entries, next, next_entry) {
QTAILQ_FOREACH(entry, &acl->entries, next) {
QTAILQ_REMOVE(&acl->entries, entry, next);
free(entry->match);
free(entry);
@@ -116,8 +116,8 @@ int qemu_acl_append(qemu_acl *acl,
{
qemu_acl_entry *entry;
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
entry = qemu_malloc(sizeof(*entry));
entry->match = qemu_strdup(match);
entry->deny = deny;
QTAILQ_INSERT_TAIL(&acl->entries, entry, next);
@@ -142,8 +142,8 @@ int qemu_acl_insert(qemu_acl *acl,
return qemu_acl_append(acl, deny, match);
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
entry = qemu_malloc(sizeof(*entry));
entry->match = qemu_strdup(match);
entry->deny = deny;
QTAILQ_FOREACH(tmp, &acl->entries, next) {

aio.c

@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
@@ -35,6 +33,7 @@ struct AioHandler
IOHandler *io_read;
IOHandler *io_write;
AioFlushHandler *io_flush;
AioProcessQueue *io_process_queue;
int deleted;
void *opaque;
QLIST_ENTRY(AioHandler) node;
@@ -57,6 +56,7 @@ int qemu_aio_set_fd_handler(int fd,
IOHandler *io_read,
IOHandler *io_write,
AioFlushHandler *io_flush,
AioProcessQueue *io_process_queue,
void *opaque)
{
AioHandler *node;
@@ -75,13 +75,13 @@ int qemu_aio_set_fd_handler(int fd,
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
qemu_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_malloc0(sizeof(AioHandler));
node = qemu_mallocz(sizeof(AioHandler));
node->fd = fd;
QLIST_INSERT_HEAD(&aio_handlers, node, node);
}
@@ -89,6 +89,7 @@ int qemu_aio_set_fd_handler(int fd,
node->io_read = io_read;
node->io_write = io_write;
node->io_flush = io_flush;
node->io_process_queue = io_process_queue;
node->opaque = opaque;
}
@@ -99,96 +100,131 @@ int qemu_aio_set_fd_handler(int fd,
void qemu_aio_flush(void)
{
while (qemu_aio_wait());
AioHandler *node;
int ret;
do {
ret = 0;
/*
* If there are pending emulated aio start them now so flush
* will be able to return 1.
*/
qemu_aio_wait();
QLIST_FOREACH(node, &aio_handlers, node) {
if (node->io_flush) {
ret |= node->io_flush(node->opaque);
}
}
} while (qemu_bh_poll() || ret > 0);
}
bool qemu_aio_wait(void)
int qemu_aio_process_queue(void)
{
AioHandler *node;
fd_set rdfds, wrfds;
int max_fd = -1;
int ret;
bool busy;
/*
* If there are callbacks left that have been queued, we need to call then.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for qemu_aio_wait loops).
*/
if (qemu_bh_poll()) {
return true;
}
int ret = 0;
walking_handlers = 1;
FD_ZERO(&rdfds);
FD_ZERO(&wrfds);
/* fill fd sets */
busy = false;
QLIST_FOREACH(node, &aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (node->io_flush) {
if (node->io_flush(node->opaque) == 0) {
continue;
if (node->io_process_queue) {
if (node->io_process_queue(node->opaque)) {
ret = 1;
}
busy = true;
}
if (!node->deleted && node->io_read) {
FD_SET(node->fd, &rdfds);
max_fd = MAX(max_fd, node->fd + 1);
}
if (!node->deleted && node->io_write) {
FD_SET(node->fd, &wrfds);
max_fd = MAX(max_fd, node->fd + 1);
}
}
walking_handlers = 0;
/* No AIO operations? Get us out of here */
if (!busy) {
return false;
}
return ret;
}
/* wait until next event */
ret = select(max_fd, &rdfds, &wrfds, NULL, NULL);
void qemu_aio_wait(void)
{
int ret;
if (qemu_bh_poll())
return;
/*
* If there are callbacks left that have been queued, we need to call then.
* Return afterwards to avoid waiting needlessly in select().
*/
if (qemu_aio_process_queue())
return;
do {
AioHandler *node;
fd_set rdfds, wrfds;
int max_fd = -1;
/* if we have any readable fds, dispatch event */
if (ret > 0) {
walking_handlers = 1;
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&aio_handlers);
while (node) {
AioHandler *tmp;
FD_ZERO(&rdfds);
FD_ZERO(&wrfds);
if (!node->deleted &&
FD_ISSET(node->fd, &rdfds) &&
node->io_read) {
node->io_read(node->opaque);
/* fill fd sets */
QLIST_FOREACH(node, &aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (node->io_flush && node->io_flush(node->opaque) == 0)
continue;
if (!node->deleted && node->io_read) {
FD_SET(node->fd, &rdfds);
max_fd = MAX(max_fd, node->fd + 1);
}
if (!node->deleted &&
FD_ISSET(node->fd, &wrfds) &&
node->io_write) {
node->io_write(node->opaque);
}
tmp = node;
node = QLIST_NEXT(node, node);
if (tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
if (!node->deleted && node->io_write) {
FD_SET(node->fd, &wrfds);
max_fd = MAX(max_fd, node->fd + 1);
}
}
walking_handlers = 0;
}
return true;
/* No AIO operations? Get us out of here */
if (max_fd == -1)
break;
/* wait until next event */
ret = select(max_fd, &rdfds, &wrfds, NULL, NULL);
if (ret == -1 && errno == EINTR)
continue;
/* if we have any readable fds, dispatch event */
if (ret > 0) {
walking_handlers = 1;
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&aio_handlers);
while (node) {
AioHandler *tmp;
if (!node->deleted &&
FD_ISSET(node->fd, &rdfds) &&
node->io_read) {
node->io_read(node->opaque);
}
if (!node->deleted &&
FD_ISSET(node->fd, &wrfds) &&
node->io_write) {
node->io_write(node->opaque);
}
tmp = node;
node = QLIST_NEXT(node, node);
if (tmp->deleted) {
QLIST_REMOVE(tmp, node);
qemu_free(tmp);
}
}
walking_handlers = 0;
}
} while (ret == 0);
}


@@ -41,8 +41,6 @@
#include "net.h"
#include "gdbstub.h"
#include "hw/smbios.h"
#include "exec-memory.h"
#include "hw/pcspk.h"
#ifdef TARGET_SPARC
int graphic_width = 1024;
@@ -54,6 +52,7 @@ int graphic_height = 600;
int graphic_depth = 15;
#endif
const char arch_config_name[] = CONFIG_QEMU_CONFDIR "/target-" TARGET_ARCH ".conf";
#if defined(TARGET_ALPHA)
#define QEMU_ARCH QEMU_ARCH_ALPHA
@@ -79,8 +78,6 @@ int graphic_depth = 15;
#define QEMU_ARCH QEMU_ARCH_SH4
#elif defined(TARGET_SPARC)
#define QEMU_ARCH QEMU_ARCH_SPARC
#elif defined(TARGET_XTENSA)
#define QEMU_ARCH QEMU_ARCH_XTENSA
#endif
const uint32_t arch_type = QEMU_ARCH;
@@ -95,65 +92,14 @@ const uint32_t arch_type = QEMU_ARCH;
#define RAM_SAVE_FLAG_EOS 0x10
#define RAM_SAVE_FLAG_CONTINUE 0x20
#ifdef __ALTIVEC__
#include <altivec.h>
#define VECTYPE vector unsigned char
#define SPLAT(p) vec_splat(vec_ld(0, p), 0)
#define ALL_EQ(v1, v2) vec_all_eq(v1, v2)
/* altivec.h may redefine the bool macro as vector type.
* Reset it to POSIX semantics. */
#undef bool
#define bool _Bool
#elif defined __SSE2__
#include <emmintrin.h>
#define VECTYPE __m128i
#define SPLAT(p) _mm_set1_epi8(*(p))
#define ALL_EQ(v1, v2) (_mm_movemask_epi8(_mm_cmpeq_epi8(v1, v2)) == 0xFFFF)
#else
#define VECTYPE unsigned long
#define SPLAT(p) (*(p) * (~0UL / 255))
#define ALL_EQ(v1, v2) ((v1) == (v2))
#endif
static struct defconfig_file {
const char *filename;
/* Indicates it is an user config file (disabled by -no-user-config) */
bool userconfig;
} default_config_files[] = {
{ CONFIG_QEMU_DATADIR "/cpus-" TARGET_ARCH ".conf", false },
{ CONFIG_QEMU_CONFDIR "/qemu.conf", true },
{ CONFIG_QEMU_CONFDIR "/target-" TARGET_ARCH ".conf", true },
{ NULL }, /* end of list */
};
int qemu_read_default_config_files(bool userconfig)
static int is_dup_page(uint8_t *page, uint8_t ch)
{
int ret;
struct defconfig_file *f;
for (f = default_config_files; f->filename; f++) {
if (!userconfig && f->userconfig) {
continue;
}
ret = qemu_read_config_file(f->filename);
if (ret < 0 && ret != -ENOENT) {
return ret;
}
}
return 0;
}
static int is_dup_page(uint8_t *page)
{
VECTYPE *p = (VECTYPE *)page;
VECTYPE val = SPLAT(page);
uint32_t val = ch << 24 | ch << 16 | ch << 8 | ch;
uint32_t *array = (uint32_t *)page;
int i;
for (i = 0; i < TARGET_PAGE_SIZE / sizeof(VECTYPE); i++) {
if (!ALL_EQ(val, p[i])) {
for (i = 0; i < (TARGET_PAGE_SIZE / 4); i++) {
if (array[i] != val) {
return 0;
}
}
@@ -168,25 +114,26 @@ static int ram_save_block(QEMUFile *f)
{
RAMBlock *block = last_block;
ram_addr_t offset = last_offset;
ram_addr_t current_addr;
int bytes_sent = 0;
MemoryRegion *mr;
if (!block)
block = QLIST_FIRST(&ram_list.blocks);
current_addr = block->offset + offset;
do {
mr = block->mr;
if (memory_region_get_dirty(mr, offset, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
if (cpu_physical_memory_get_dirty(current_addr, MIGRATION_DIRTY_FLAG)) {
uint8_t *p;
int cont = (block == last_block) ? RAM_SAVE_FLAG_CONTINUE : 0;
memory_region_reset_dirty(mr, offset, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION);
cpu_physical_memory_reset_dirty(current_addr,
current_addr + TARGET_PAGE_SIZE,
MIGRATION_DIRTY_FLAG);
p = memory_region_get_ram_ptr(mr) + offset;
p = block->host + offset;
if (is_dup_page(p)) {
if (is_dup_page(p, *p)) {
qemu_put_be64(f, offset | cont | RAM_SAVE_FLAG_COMPRESS);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
@@ -216,7 +163,10 @@ static int ram_save_block(QEMUFile *f)
if (!block)
block = QLIST_FIRST(&ram_list.blocks);
}
} while (block != last_block || offset != last_offset);
current_addr = block->offset + offset;
} while (current_addr != last_block->offset + last_offset);
last_block = block;
last_offset = offset;
@@ -233,9 +183,9 @@ static ram_addr_t ram_save_remaining(void)
QLIST_FOREACH(block, &ram_list.blocks, next) {
ram_addr_t addr;
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (memory_region_get_dirty(block->mr, addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
for (addr = block->offset; addr < block->offset + block->length;
addr += TARGET_PAGE_SIZE) {
if (cpu_physical_memory_get_dirty(addr, MIGRATION_DIRTY_FLAG)) {
count++;
}
}
@@ -269,8 +219,12 @@ static int block_compar(const void *a, const void *b)
{
RAMBlock * const *ablock = a;
RAMBlock * const *bblock = b;
return strcmp((*ablock)->idstr, (*bblock)->idstr);
if ((*ablock)->offset < (*bblock)->offset) {
return -1;
} else if ((*ablock)->offset > (*bblock)->offset) {
return 1;
}
return 0;
}
static void sort_ram_list(void)
@@ -281,7 +235,7 @@ static void sort_ram_list(void)
QLIST_FOREACH(block, &ram_list.blocks, next) {
++n;
}
blocks = g_malloc(n * sizeof *blocks);
blocks = qemu_malloc(n * sizeof *blocks);
n = 0;
QLIST_FOREACH_SAFE(block, &ram_list.blocks, next, nblock) {
blocks[n++] = block;
@@ -291,23 +245,25 @@ static void sort_ram_list(void)
while (--n >= 0) {
QLIST_INSERT_HEAD(&ram_list.blocks, blocks[n], next);
}
g_free(blocks);
qemu_free(blocks);
}
int ram_save_live(QEMUFile *f, int stage, void *opaque)
int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
{
ram_addr_t addr;
uint64_t bytes_transferred_last;
double bwidth = 0;
uint64_t expected_time = 0;
int ret;
if (stage < 0) {
memory_global_dirty_log_stop();
cpu_physical_memory_set_dirty_tracking(0);
return 0;
}
memory_global_sync_dirty_bitmap(get_system_memory());
if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
qemu_file_set_error(f);
return 0;
}
if (stage == 1) {
RAMBlock *block;
@@ -318,15 +274,17 @@ int ram_save_live(QEMUFile *f, int stage, void *opaque)
/* Make sure all dirty bits are set */
QLIST_FOREACH(block, &ram_list.blocks, next) {
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (!memory_region_get_dirty(block->mr, addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
memory_region_set_dirty(block->mr, addr, TARGET_PAGE_SIZE);
for (addr = block->offset; addr < block->offset + block->length;
addr += TARGET_PAGE_SIZE) {
if (!cpu_physical_memory_get_dirty(addr,
MIGRATION_DIRTY_FLAG)) {
cpu_physical_memory_set_dirty(addr);
}
}
}
memory_global_dirty_log_start();
/* Enable dirty memory tracking */
cpu_physical_memory_set_dirty_tracking(1);
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
@@ -340,7 +298,7 @@ int ram_save_live(QEMUFile *f, int stage, void *opaque)
bytes_transferred_last = bytes_transferred;
bwidth = qemu_get_clock_ns(rt_clock);
while ((ret = qemu_file_rate_limit(f)) == 0) {
while (!qemu_file_rate_limit(f)) {
int bytes_sent;
bytes_sent = ram_save_block(f);
@@ -350,10 +308,6 @@ int ram_save_live(QEMUFile *f, int stage, void *opaque)
}
}
if (ret < 0) {
return ret;
}
bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;
@@ -371,7 +325,7 @@ int ram_save_live(QEMUFile *f, int stage, void *opaque)
while ((bytes_sent = ram_save_block(f)) != 0) {
bytes_transferred += bytes_sent;
}
memory_global_dirty_log_stop();
cpu_physical_memory_set_dirty_tracking(0);
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -395,7 +349,7 @@ static inline void *host_from_stream_offset(QEMUFile *f,
return NULL;
}
return memory_region_get_ram_ptr(block->mr) + offset;
return block->host + offset;
}
len = qemu_get_byte(f);
@@ -404,7 +358,7 @@ static inline void *host_from_stream_offset(QEMUFile *f,
QLIST_FOREACH(block, &ram_list.blocks, next) {
if (!strncmp(id, block->idstr, sizeof(id)))
return memory_region_get_ram_ptr(block->mr) + offset;
return block->host + offset;
}
fprintf(stderr, "Can't find block %s!\n", id);
@@ -415,9 +369,8 @@ int ram_load(QEMUFile *f, void *opaque, int version_id)
{
ram_addr_t addr;
int flags;
int error;
if (version_id < 4 || version_id > 4) {
if (version_id < 3 || version_id > 4) {
return -EINVAL;
}
@@ -428,7 +381,11 @@ int ram_load(QEMUFile *f, void *opaque, int version_id)
addr &= TARGET_PAGE_MASK;
if (flags & RAM_SAVE_FLAG_MEM_SIZE) {
if (version_id == 4) {
if (version_id == 3) {
if (addr != ram_bytes_total()) {
return -EINVAL;
}
} else {
/* Synchronize RAM block list */
char id[256];
ram_addr_t length;
@@ -466,7 +423,10 @@ int ram_load(QEMUFile *f, void *opaque, int version_id)
void *host;
uint8_t ch;
host = host_from_stream_offset(f, addr, flags);
if (version_id == 3)
host = qemu_get_ram_ptr(addr);
else
host = host_from_stream_offset(f, addr, flags);
if (!host) {
return -EINVAL;
}
@@ -482,19 +442,26 @@ int ram_load(QEMUFile *f, void *opaque, int version_id)
} else if (flags & RAM_SAVE_FLAG_PAGE) {
void *host;
host = host_from_stream_offset(f, addr, flags);
if (version_id == 3)
host = qemu_get_ram_ptr(addr);
else
host = host_from_stream_offset(f, addr, flags);
qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
}
error = qemu_file_get_error(f);
if (error) {
return error;
if (qemu_file_has_error(f)) {
return -EIO;
}
} while (!(flags & RAM_SAVE_FLAG_EOS));
return 0;
}
void qemu_service_io(void)
{
qemu_notify_event();
}
#ifdef HAS_AUDIO
struct soundhw {
const char *name;
@@ -502,14 +469,14 @@ struct soundhw {
int enabled;
int isa;
union {
int (*init_isa) (ISABus *bus);
int (*init_isa) (qemu_irq *pic);
int (*init_pci) (PCIBus *bus);
} init;
};
static struct soundhw soundhw[] = {
#ifdef HAS_AUDIO_CHOICE
#ifdef CONFIG_PCSPK
#if defined(TARGET_I386) || defined(TARGET_MIPS)
{
"pcspk",
"PC speaker",
@@ -657,15 +624,15 @@ void select_soundhw(const char *optarg)
}
}
void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
void audio_init(qemu_irq *isa_pic, PCIBus *pci_bus)
{
struct soundhw *c;
for (c = soundhw; c->name; ++c) {
if (c->enabled) {
if (c->isa) {
if (isa_bus) {
c->init.init_isa(isa_bus);
if (isa_pic) {
c->init.init_isa(isa_pic);
}
} else {
if (pci_bus) {
@@ -679,7 +646,7 @@ void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
void select_soundhw(const char *optarg)
{
}
void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
void audio_init(qemu_irq *isa_pic, PCIBus *pci_bus)
{
}
#endif


@@ -1,6 +1,8 @@
#ifndef QEMU_ARCH_INIT_H
#define QEMU_ARCH_INIT_H
extern const char arch_config_name[];
enum {
QEMU_ARCH_ALL = -1,
QEMU_ARCH_ALPHA = 1,
@@ -15,7 +17,6 @@ enum {
QEMU_ARCH_S390X = 512,
QEMU_ARCH_SH4 = 1024,
QEMU_ARCH_SPARC = 2048,
QEMU_ARCH_XTENSA = 4096,
};
extern const uint32_t arch_type;
@@ -25,7 +26,7 @@ void do_acpitable_option(const char *optarg);
void do_smbios_option(const char *optarg);
void cpudef_init(void);
int audio_available(void);
void audio_init(ISABus *isa_bus, PCIBus *pci_bus);
void audio_init(qemu_irq *isa_pic, PCIBus *pci_bus);
int tcg_available(void);
int kvm_available(void);
int xen_available(void);


@@ -1624,7 +1624,7 @@ arm_decode_shift (long given, fprintf_function func, void *stream,
}
/* Print one coprocessor instruction on INFO->STREAM.
Return true if the instruction matched, false if this is not a
Return true if the instuction matched, false if this is not a
recognised coprocessor instruction. */
static bfd_boolean
@@ -2214,7 +2214,7 @@ print_arm_address (bfd_vma pc, struct disassemble_info *info, long given)
}
/* Print one neon instruction on INFO->STREAM.
Return true if the instruction matched, false if this is not a
Return true if the instuction matched, false if this is not a
recognised neon instruction. */
static bfd_boolean
@@ -3927,7 +3927,7 @@ print_insn_arm (bfd_vma pc, struct disassemble_info *info)
n = last_mapping_sym - 1;
/* No mapping symbol found at this address. Look backwards
for a preceding one. */
for a preceeding one. */
for (; n >= 0; n--)
{
if (get_sym_code_type (info, n, &type))


@@ -37,26 +37,26 @@
#include "hw/arm-misc.h"
#endif
#define TARGET_SYS_OPEN 0x01
#define TARGET_SYS_CLOSE 0x02
#define TARGET_SYS_WRITEC 0x03
#define TARGET_SYS_WRITE0 0x04
#define TARGET_SYS_WRITE 0x05
#define TARGET_SYS_READ 0x06
#define TARGET_SYS_READC 0x07
#define TARGET_SYS_ISTTY 0x09
#define TARGET_SYS_SEEK 0x0a
#define TARGET_SYS_FLEN 0x0c
#define TARGET_SYS_TMPNAM 0x0d
#define TARGET_SYS_REMOVE 0x0e
#define TARGET_SYS_RENAME 0x0f
#define TARGET_SYS_CLOCK 0x10
#define TARGET_SYS_TIME 0x11
#define TARGET_SYS_SYSTEM 0x12
#define TARGET_SYS_ERRNO 0x13
#define TARGET_SYS_GET_CMDLINE 0x15
#define TARGET_SYS_HEAPINFO 0x16
#define TARGET_SYS_EXIT 0x18
#define SYS_OPEN 0x01
#define SYS_CLOSE 0x02
#define SYS_WRITEC 0x03
#define SYS_WRITE0 0x04
#define SYS_WRITE 0x05
#define SYS_READ 0x06
#define SYS_READC 0x07
#define SYS_ISTTY 0x09
#define SYS_SEEK 0x0a
#define SYS_FLEN 0x0c
#define SYS_TMPNAM 0x0d
#define SYS_REMOVE 0x0e
#define SYS_RENAME 0x0f
#define SYS_CLOCK 0x10
#define SYS_TIME 0x11
#define SYS_SYSTEM 0x12
#define SYS_ERRNO 0x13
#define SYS_GET_CMDLINE 0x15
#define SYS_HEAPINFO 0x16
#define SYS_EXIT 0x18
#ifndef O_BINARY
#define O_BINARY 0
@@ -108,7 +108,7 @@ static inline uint32_t set_swi_errno(TaskState *ts, uint32_t code)
return code;
}
#else
static inline uint32_t set_swi_errno(CPUARMState *env, uint32_t code)
static inline uint32_t set_swi_errno(CPUState *env, uint32_t code)
{
return code;
}
@@ -122,7 +122,7 @@ static target_ulong arm_semi_syscall_len;
static target_ulong syscall_err;
#endif
static void arm_semi_cb(CPUARMState *env, target_ulong ret, target_ulong err)
static void arm_semi_cb(CPUState *env, target_ulong ret, target_ulong err)
{
#ifdef CONFIG_USER_ONLY
TaskState *ts = env->opaque;
@@ -138,11 +138,11 @@ static void arm_semi_cb(CPUARMState *env, target_ulong ret, target_ulong err)
} else {
/* Fixup syscalls that use nonstardard return conventions. */
switch (env->regs[0]) {
case TARGET_SYS_WRITE:
case TARGET_SYS_READ:
case SYS_WRITE:
case SYS_READ:
env->regs[0] = arm_semi_syscall_len - ret;
break;
case TARGET_SYS_SEEK:
case SYS_SEEK:
env->regs[0] = 0;
break;
default:
@@ -152,7 +152,7 @@ static void arm_semi_cb(CPUARMState *env, target_ulong ret, target_ulong err)
}
}
static void arm_semi_flen_cb(CPUARMState *env, target_ulong ret, target_ulong err)
static void arm_semi_flen_cb(CPUState *env, target_ulong ret, target_ulong err)
{
/* The size is always stored in big-endian order, extract
the value. We assume the size always fit in 32 bits. */
@@ -174,7 +174,7 @@ static void arm_semi_flen_cb(CPUARMState *env, target_ulong ret, target_ulong er
__arg; \
})
#define SET_ARG(n, val) put_user_ual(val, args + (n) * 4)
uint32_t do_arm_semihosting(CPUARMState *env)
uint32_t do_arm_semihosting(CPUState *env)
{
target_ulong args;
char * s;
@@ -184,42 +184,41 @@ uint32_t do_arm_semihosting(CPUARMState *env)
#ifdef CONFIG_USER_ONLY
TaskState *ts = env->opaque;
#else
CPUARMState *ts = env;
CPUState *ts = env;
#endif
nr = env->regs[0];
args = env->regs[1];
switch (nr) {
case TARGET_SYS_OPEN:
case SYS_OPEN:
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
if (ARG(1) >= 12) {
unlock_user(s, ARG(0), 0);
if (ARG(1) >= 12)
return (uint32_t)-1;
}
if (strcmp(s, ":tt") == 0) {
int result_fileno = ARG(1) < 4 ? STDIN_FILENO : STDOUT_FILENO;
unlock_user(s, ARG(0), 0);
return result_fileno;
if (ARG(1) < 4)
return STDIN_FILENO;
else
return STDOUT_FILENO;
}
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "open,%s,%x,1a4", ARG(0),
(int)ARG(2)+1, gdb_open_modeflags[ARG(1)]);
ret = env->regs[0];
return env->regs[0];
} else {
ret = set_swi_errno(ts, open(s, open_modeflags[ARG(1)], 0644));
}
unlock_user(s, ARG(0), 0);
return ret;
case TARGET_SYS_CLOSE:
case SYS_CLOSE:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "close,%x", ARG(0));
return env->regs[0];
} else {
return set_swi_errno(ts, close(ARG(0)));
}
case TARGET_SYS_WRITEC:
case SYS_WRITEC:
{
char c;
@@ -234,7 +233,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return write(STDERR_FILENO, &c, 1);
}
}
case TARGET_SYS_WRITE0:
case SYS_WRITE0:
if (!(s = lock_user_string(args)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
@@ -247,7 +246,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
}
unlock_user(s, args, 0);
return ret;
case TARGET_SYS_WRITE:
case SYS_WRITE:
len = ARG(2);
if (use_gdb_syscalls()) {
arm_semi_syscall_len = len;
@@ -263,7 +262,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return len - ret;
}
case TARGET_SYS_READ:
case SYS_READ:
len = ARG(2);
if (use_gdb_syscalls()) {
arm_semi_syscall_len = len;
@@ -281,17 +280,17 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return len - ret;
}
case TARGET_SYS_READC:
case SYS_READC:
/* XXX: Read from debug cosole. Not implemented. */
return 0;
case TARGET_SYS_ISTTY:
case SYS_ISTTY:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "isatty,%x", ARG(0));
return env->regs[0];
} else {
return isatty(ARG(0));
}
case TARGET_SYS_SEEK:
case SYS_SEEK:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "lseek,%x,%x,0", ARG(0), ARG(1));
return env->regs[0];
@@ -301,7 +300,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return 0;
}
case TARGET_SYS_FLEN:
case SYS_FLEN:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_flen_cb, "fstat,%x,%x",
ARG(0), env->regs[13]-64);
@@ -313,10 +312,10 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return buf.st_size;
}
case TARGET_SYS_TMPNAM:
case SYS_TMPNAM:
/* XXX: Not implemented. */
return -1;
case TARGET_SYS_REMOVE:
case SYS_REMOVE:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "unlink,%s", ARG(0), (int)ARG(1)+1);
ret = env->regs[0];
@@ -328,7 +327,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
unlock_user(s, ARG(0), 0);
}
return ret;
case TARGET_SYS_RENAME:
case SYS_RENAME:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "rename,%s,%s",
ARG(0), (int)ARG(1)+1, ARG(2), (int)ARG(3)+1);
@@ -348,11 +347,11 @@ uint32_t do_arm_semihosting(CPUARMState *env)
unlock_user(s, ARG(0), 0);
return ret;
}
case TARGET_SYS_CLOCK:
case SYS_CLOCK:
return clock() / (CLOCKS_PER_SEC / 100);
case TARGET_SYS_TIME:
case SYS_TIME:
return set_swi_errno(ts, time(NULL));
case TARGET_SYS_SYSTEM:
case SYS_SYSTEM:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "system,%s", ARG(0), (int)ARG(1)+1);
return env->regs[0];
@@ -364,13 +363,13 @@ uint32_t do_arm_semihosting(CPUARMState *env)
unlock_user(s, ARG(0), 0);
return ret;
}
case TARGET_SYS_ERRNO:
case SYS_ERRNO:
#ifdef CONFIG_USER_ONLY
return ts->swi_errno;
#else
return syscall_err;
#endif
case TARGET_SYS_GET_CMDLINE:
case SYS_GET_CMDLINE:
{
/* Build a command-line from the original argv.
*
@@ -453,7 +452,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return status;
}
case TARGET_SYS_HEAPINFO:
case SYS_HEAPINFO:
{
uint32_t *ptr;
uint32_t limit;
@@ -499,7 +498,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
#endif
return 0;
}
case TARGET_SYS_EXIT:
case SYS_EXIT:
gdb_exit(env, 0);
exit(0);
default:
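
do_arm_semihosting() above implements the guest-visible ABI of these SYS_* operations: the operation number arrives in r0, r1 points at the argument (or argument block), and the result is returned in r0. Below is a guest-side sketch of issuing SYS_WRITE0 under those conventions; it assumes an ARM-state GCC toolchain, and the SVC immediate 0x123456 is the conventional ARM-mode semihosting trap rather than anything defined in this file.

/* Guest-side sketch only: raise a SYS_WRITE0 (0x04) semihosting call.
 * r0 = operation number, r1 = argument, result comes back in r0.
 * ARM state only; Thumb uses a different trap encoding (svc 0xab). */
static int semihost_write0(const char *s)
{
    register int r0 __asm__("r0") = 0x04;          /* SYS_WRITE0 */
    register const char *r1 __asm__("r1") = s;     /* NUL-terminated string */

    __asm__ volatile("svc 0x123456"
                     : "+r"(r0)
                     : "r"(r1)
                     : "memory");
    return r0;
}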

async.c

@@ -24,10 +24,93 @@
#include "qemu-common.h"
#include "qemu-aio.h"
#include "main-loop.h"
/* Anchor of the list of Bottom Halves belonging to the context */
static struct QEMUBH *first_bh;
/*
* An AsyncContext protects the callbacks of AIO requests and Bottom Halves
* against interfering with each other. A typical example is qcow2 that accepts
* asynchronous requests, but relies for manipulation of its metadata on
* synchronous bdrv_read/write that doesn't trigger any callbacks.
*
* However, these functions are often emulated using AIO which means that AIO
* callbacks must be run - but at the same time we must not run callbacks of
* other requests as they might start to modify metadata and corrupt the
* internal state of the caller of bdrv_read/write.
*
* To achieve the desired semantics we switch into a new AsyncContext.
* Callbacks must only be run if they belong to the current AsyncContext.
* Otherwise they need to be queued until their own context is active again.
* This is how you can make qemu_aio_wait() wait only for your own callbacks.
*
* The AsyncContexts form a stack. When you leave a AsyncContexts, you always
* return to the old ("parent") context.
*/
struct AsyncContext {
/* Consecutive number of the AsyncContext (position in the stack) */
int id;
/* Anchor of the list of Bottom Halves belonging to the context */
struct QEMUBH *first_bh;
/* Link to parent context */
struct AsyncContext *parent;
};
/* The currently active AsyncContext */
static struct AsyncContext *async_context = &(struct AsyncContext) { 0 };
/*
* Enter a new AsyncContext. Already scheduled Bottom Halves and AIO callbacks
* won't be called until this context is left again.
*/
void async_context_push(void)
{
struct AsyncContext *new = qemu_mallocz(sizeof(*new));
new->parent = async_context;
new->id = async_context->id + 1;
async_context = new;
}
/* Run queued AIO completions and destroy Bottom Half */
static void bh_run_aio_completions(void *opaque)
{
QEMUBH **bh = opaque;
qemu_bh_delete(*bh);
qemu_free(bh);
qemu_aio_process_queue();
}
/*
* Leave the currently active AsyncContext. All Bottom Halves belonging to the
* old context are executed before changing the context.
*/
void async_context_pop(void)
{
struct AsyncContext *old = async_context;
QEMUBH **bh;
/* Flush the bottom halves, we don't want to lose them */
while (qemu_bh_poll());
/* Switch back to the parent context */
async_context = async_context->parent;
qemu_free(old);
if (async_context == NULL) {
abort();
}
/* Schedule BH to run any queued AIO completions as soon as possible */
bh = qemu_malloc(sizeof(*bh));
*bh = qemu_bh_new(bh_run_aio_completions, bh);
qemu_bh_schedule(*bh);
}
/*
* Returns the ID of the currently active AsyncContext
*/
int get_async_context_id(void)
{
return async_context->id;
}
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
@@ -35,20 +118,20 @@ static struct QEMUBH *first_bh;
struct QEMUBH {
QEMUBHFunc *cb;
void *opaque;
int scheduled;
int idle;
int deleted;
QEMUBH *next;
bool scheduled;
bool idle;
bool deleted;
};
QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_malloc0(sizeof(QEMUBH));
bh = qemu_mallocz(sizeof(QEMUBH));
bh->cb = cb;
bh->opaque = opaque;
bh->next = first_bh;
first_bh = bh;
bh->next = async_context->first_bh;
async_context->first_bh = bh;
return bh;
}
@@ -56,12 +139,9 @@ int qemu_bh_poll(void)
{
QEMUBH *bh, **bhp, *next;
int ret;
static int nesting = 0;
nesting++;
ret = 0;
for (bh = first_bh; bh; bh = next) {
for (bh = async_context->first_bh; bh; bh = next) {
next = bh->next;
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
@@ -72,20 +152,15 @@ int qemu_bh_poll(void)
}
}
nesting--;
/* remove deleted bhs */
if (!nesting) {
bhp = &first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
g_free(bh);
} else {
bhp = &bh->next;
}
}
bhp = &async_context->first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
qemu_free(bh);
} else
bhp = &bh->next;
}
return ret;
@@ -120,11 +195,11 @@ void qemu_bh_delete(QEMUBH *bh)
bh->deleted = 1;
}
void qemu_bh_update_timeout(uint32_t *timeout)
void qemu_bh_update_timeout(int *timeout)
{
QEMUBH *bh;
for (bh = first_bh; bh; bh = bh->next) {
for (bh = async_context->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least
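
The AsyncContext comment block in the async.c hunk above describes a stack of contexts: bottom halves and AIO completions only run while the context they were created in is active, which is what lets a caller emulate a synchronous read on top of AIO without foreign callbacks firing underneath it. A minimal usage sketch under those assumptions follows; the AIO submission and its callback are hypothetical placeholders, and only async_context_push/pop and qemu_aio_wait() correspond to the API shown above.

/* Sketch of the usage pattern the comment describes (qcow2-style
 * synchronous metadata access emulated with AIO). start_my_aio_request()
 * and my_cb() are hypothetical; only the context push/pop and
 * qemu_aio_wait() come from the code above. */
static int my_request_done;

static void my_cb(void *opaque, int ret)
{
    my_request_done = 1;                 /* completion of *our* request only */
}

static void sync_read_example(void)
{
    async_context_push();                /* new context: foreign BHs/AIO callbacks are queued */

    my_request_done = 0;
    start_my_aio_request(my_cb, NULL);   /* hypothetical AIO submission */
    while (!my_request_done) {
        qemu_aio_wait();                 /* runs only this context's callbacks */
    }

    async_context_pop();                 /* back to the parent; queued work gets a BH */
}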


@@ -136,7 +136,7 @@ static void alsa_fini_poll (struct pollhlp *hlp)
for (i = 0; i < hlp->count; ++i) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
qemu_free (pfds);
}
hlp->pfds = NULL;
hlp->count = 0;
@@ -260,7 +260,7 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
if (err < 0) {
alsa_logerr (err, "Could not initialize poll mode\n"
"Could not obtain poll descriptors\n");
g_free (pfds);
qemu_free (pfds);
return -1;
}
@@ -288,7 +288,7 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
qemu_free (pfds);
return -1;
}
}
@@ -816,7 +816,7 @@ static void alsa_fini_out (HWVoiceOut *hw)
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
if (alsa->pcm_buf) {
g_free (alsa->pcm_buf);
qemu_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}
@@ -979,7 +979,7 @@ static void alsa_fini_in (HWVoiceIn *hw)
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
if (alsa->pcm_buf) {
g_free (alsa->pcm_buf);
qemu_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}


@@ -196,7 +196,7 @@ void *audio_calloc (const char *funcname, int nmemb, size_t size)
return NULL;
}
return g_malloc0 (len);
return qemu_mallocz (len);
}
static char *audio_alloc_prefix (const char *s)
@@ -210,7 +210,7 @@ static char *audio_alloc_prefix (const char *s)
}
len = strlen (s);
r = g_malloc (len + sizeof (qemu_prefix));
r = qemu_malloc (len + sizeof (qemu_prefix));
u = r + sizeof (qemu_prefix) - 1;
@@ -425,7 +425,7 @@ static void audio_print_options (const char *prefix,
printf (" %s\n", opt->descr);
}
g_free (uprefix);
qemu_free (uprefix);
}
static void audio_process_options (const char *prefix,
@@ -462,7 +462,7 @@ static void audio_process_options (const char *prefix,
* (includes trailing zero) + zero + underscore (on behalf of
* sizeof) */
optlen = len + preflen + sizeof (qemu_prefix) + 1;
optname = g_malloc (optlen);
optname = qemu_malloc (optlen);
pstrcpy (optname, optlen, qemu_prefix);
@@ -507,7 +507,7 @@ static void audio_process_options (const char *prefix,
opt->overriddenp = &opt->overridden;
}
*opt->overriddenp = !def;
g_free (optname);
qemu_free (optname);
}
}
@@ -585,20 +585,17 @@ static int audio_pcm_info_eq (struct audio_pcm_info *info, struct audsettings *a
switch (as->fmt) {
case AUD_FMT_S8:
sign = 1;
/* fall through */
case AUD_FMT_U8:
break;
case AUD_FMT_S16:
sign = 1;
/* fall through */
case AUD_FMT_U16:
bits = 16;
break;
case AUD_FMT_S32:
sign = 1;
/* fall through */
case AUD_FMT_U32:
bits = 32;
break;
@@ -781,7 +778,7 @@ static void audio_detach_capture (HWVoiceOut *hw)
QLIST_REMOVE (sw, entries);
QLIST_REMOVE (sc, entries);
g_free (sc);
qemu_free (sc);
if (was_active) {
/* We have removed soft voice from the capture:
this might have changed the overall status of the capture
@@ -821,7 +818,7 @@ static int audio_attach_capture (HWVoiceOut *hw)
sw->rate = st_rate_start (sw->info.freq, hw_cap->info.freq);
if (!sw->rate) {
dolog ("Could not start rate conversion for `%s'\n", SW_NAME (sw));
g_free (sw);
qemu_free (sw);
return -1;
}
QLIST_INSERT_HEAD (&hw_cap->sw_head, sw, entries);
@@ -957,9 +954,7 @@ int audio_pcm_sw_read (SWVoiceIn *sw, void *buf, int size)
total += isamp;
}
if (!(hw->ctl_caps & VOICE_VOLUME_CAP)) {
mixeng_volume (sw->buf, ret, &sw->vol);
}
mixeng_volume (sw->buf, ret, &sw->vol);
sw->clip (buf, sw->buf, ret);
sw->total_hw_samples_acquired += total;
@@ -1043,10 +1038,7 @@ int audio_pcm_sw_write (SWVoiceOut *sw, void *buf, int size)
swlim = audio_MIN (swlim, samples);
if (swlim) {
sw->conv (sw->buf, buf, swlim);
if (!(sw->hw->ctl_caps & VOICE_VOLUME_CAP)) {
mixeng_volume (sw->buf, swlim, &sw->vol);
}
mixeng_volume (sw->buf, swlim, &sw->vol);
}
while (swlim) {
@@ -1673,7 +1665,7 @@ static void audio_pp_nb_voices (const char *typ, int nb)
printf ("Theoretically supports many %s voices\n", typ);
break;
default:
printf ("Theoretically supports up to %d %s voices\n", nb, typ);
printf ("Theoretically supports upto %d %s voices\n", nb, typ);
break;
}
@@ -1751,7 +1743,7 @@ static int audio_driver_init (AudioState *s, struct audio_driver *drv)
}
static void audio_vm_change_state_handler (void *opaque, int running,
RunState state)
int reason)
{
AudioState *s = opaque;
HWVoiceOut *hwo = NULL;
@@ -1775,12 +1767,10 @@ static void audio_atexit (void)
HWVoiceOut *hwo = NULL;
HWVoiceIn *hwi = NULL;
while ((hwo = audio_pcm_hw_find_any_out (hwo))) {
while ((hwo = audio_pcm_hw_find_any_enabled_out (hwo))) {
SWVoiceCap *sc;
if (hwo->enabled) {
hwo->pcm_ops->ctl_out (hwo, VOICE_DISABLE);
}
hwo->pcm_ops->ctl_out (hwo, VOICE_DISABLE);
hwo->pcm_ops->fini_out (hwo);
for (sc = hwo->cap_head.lh_first; sc; sc = sc->entries.le_next) {
@@ -1793,10 +1783,8 @@ static void audio_atexit (void)
}
}
while ((hwi = audio_pcm_hw_find_any_in (hwi))) {
if (hwi->enabled) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
}
while ((hwi = audio_pcm_hw_find_any_enabled_in (hwi))) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
hwi->pcm_ops->fini_in (hwi);
}
@@ -1919,7 +1907,7 @@ static void audio_init (void)
void AUD_register_card (const char *name, QEMUSoundCard *card)
{
audio_init ();
card->name = g_strdup (name);
card->name = qemu_strdup (name);
memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD (&glob_audio_state.card_head, card, entries);
}
@@ -1927,7 +1915,7 @@ void AUD_register_card (const char *name, QEMUSoundCard *card)
void AUD_remove_card (QEMUSoundCard *card)
{
QLIST_REMOVE (card, entries);
g_free (card->name);
qemu_free (card->name);
}
@@ -2012,11 +2000,11 @@ CaptureVoiceOut *AUD_add_capture (
return cap;
err3:
g_free (cap->hw.mix_buf);
qemu_free (cap->hw.mix_buf);
err2:
g_free (cap);
qemu_free (cap);
err1:
g_free (cb);
qemu_free (cb);
err0:
return NULL;
}
@@ -2030,7 +2018,7 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
if (cb->opaque == cb_opaque) {
cb->ops.destroy (cb_opaque);
QLIST_REMOVE (cb, entries);
g_free (cb);
qemu_free (cb);
if (!cap->cb_head.lh_first) {
SWVoiceOut *sw = cap->hw.sw_head.lh_first, *sw1;
@@ -2048,11 +2036,11 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
}
QLIST_REMOVE (sw, entries);
QLIST_REMOVE (sc, entries);
g_free (sc);
qemu_free (sc);
sw = sw1;
}
QLIST_REMOVE (cap, entries);
g_free (cap);
qemu_free (cap);
}
return;
}
@@ -2062,29 +2050,17 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
void AUD_set_volume_out (SWVoiceOut *sw, int mute, uint8_t lvol, uint8_t rvol)
{
if (sw) {
HWVoiceOut *hw = sw->hw;
sw->vol.mute = mute;
sw->vol.l = nominal_volume.l * lvol / 255;
sw->vol.r = nominal_volume.r * rvol / 255;
if (hw->pcm_ops->ctl_out) {
hw->pcm_ops->ctl_out (hw, VOICE_VOLUME, sw);
}
}
}
void AUD_set_volume_in (SWVoiceIn *sw, int mute, uint8_t lvol, uint8_t rvol)
{
if (sw) {
HWVoiceIn *hw = sw->hw;
sw->vol.mute = mute;
sw->vol.l = nominal_volume.l * lvol / 255;
sw->vol.r = nominal_volume.r * rvol / 255;
if (hw->pcm_ops->ctl_in) {
hw->pcm_ops->ctl_in (hw, VOICE_VOLUME, sw);
}
}
}
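
The AUD_set_volume_out/in hunks scale an 8-bit control value (0..255) into the mixer's nominal volume, and audio_pcm_sw_read/write then have mixeng_volume() multiply every sample by it (one side of the delta does this unconditionally, the other gates it on VOICE_VOLUME_CAP). A small fixed-point sketch of that split follows; NOMINAL here is an illustrative stand-in for qemu's nominal_volume, not its real value.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: NOMINAL stands in for qemu's nominal_volume. */
#define NOMINAL (1 << 16)

int main(void)
{
    uint8_t lvol = 128;                              /* "half volume" from the UI */
    int64_t vol  = (int64_t)NOMINAL * lvol / 255;    /* AUD_set_volume_*() step */

    int16_t sample = 20000;
    int16_t scaled = (int16_t)((int64_t)sample * vol / NOMINAL);  /* mixeng_volume() step */

    printf("vol=%lld scaled=%d\n", (long long)vol, scaled);       /* scaled is about 10039 */
    return 0;
}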


@@ -82,7 +82,6 @@ typedef struct HWVoiceOut {
int samples;
QLIST_HEAD (sw_out_listhead, SWVoiceOut) sw_head;
QLIST_HEAD (sw_cap_listhead, SWVoiceCap) cap_head;
int ctl_caps;
struct audio_pcm_ops *pcm_ops;
QLIST_ENTRY (HWVoiceOut) entries;
} HWVoiceOut;
@@ -102,7 +101,6 @@ typedef struct HWVoiceIn {
int samples;
QLIST_HEAD (sw_in_listhead, SWVoiceIn) sw_head;
int ctl_caps;
struct audio_pcm_ops *pcm_ops;
QLIST_ENTRY (HWVoiceIn) entries;
} HWVoiceIn;
@@ -152,7 +150,6 @@ struct audio_driver {
int max_voices_in;
int voice_size_out;
int voice_size_in;
int ctl_caps;
};
struct audio_pcm_ops {
@@ -234,9 +231,6 @@ void audio_run (const char *msg);
#define VOICE_ENABLE 1
#define VOICE_DISABLE 2
#define VOICE_VOLUME 3
#define VOICE_VOLUME_CAP (1 << VOICE_VOLUME)
static inline int audio_ring_dist (int dst, int src, int len)
{


@@ -72,7 +72,7 @@ static void glue (audio_init_nb_voices_, TYPE) (struct audio_driver *drv)
static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
{
if (HWBUF) {
g_free (HWBUF);
qemu_free (HWBUF);
}
HWBUF = NULL;
@@ -93,7 +93,7 @@ static int glue (audio_pcm_hw_alloc_resources_, TYPE) (HW *hw)
static void glue (audio_pcm_sw_free_resources_, TYPE) (SW *sw)
{
if (sw->buf) {
g_free (sw->buf);
qemu_free (sw->buf);
}
if (sw->rate) {
@@ -123,7 +123,7 @@ static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
sw->rate = st_rate_start (sw->hw->info.freq, sw->info.freq);
#endif
if (!sw->rate) {
g_free (sw->buf);
qemu_free (sw->buf);
sw->buf = NULL;
return -1;
}
@@ -160,10 +160,10 @@ static int glue (audio_pcm_sw_init_, TYPE) (
[sw->info.swap_endianness]
[audio_bits_to_index (sw->info.bits)];
sw->name = g_strdup (name);
sw->name = qemu_strdup (name);
err = glue (audio_pcm_sw_alloc_resources_, TYPE) (sw);
if (err) {
g_free (sw->name);
qemu_free (sw->name);
sw->name = NULL;
}
return err;
@@ -173,7 +173,7 @@ static void glue (audio_pcm_sw_fini_, TYPE) (SW *sw)
{
glue (audio_pcm_sw_free_resources_, TYPE) (sw);
if (sw->name) {
g_free (sw->name);
qemu_free (sw->name);
sw->name = NULL;
}
}
@@ -201,7 +201,7 @@ static void glue (audio_pcm_hw_gc_, TYPE) (HW **hwp)
glue (s->nb_hw_voices_, TYPE) += 1;
glue (audio_pcm_hw_free_resources_ ,TYPE) (hw);
glue (hw->pcm_ops->fini_, TYPE) (hw);
g_free (hw);
qemu_free (hw);
*hwp = NULL;
}
}
@@ -263,8 +263,6 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
}
hw->pcm_ops = drv->pcm_ops;
hw->ctl_caps = drv->ctl_caps;
QLIST_INIT (&hw->sw_head);
#ifdef DAC
QLIST_INIT (&hw->cap_head);
@@ -302,7 +300,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
err1:
glue (hw->pcm_ops->fini_, TYPE) (hw);
err0:
g_free (hw);
qemu_free (hw);
return NULL;
}
@@ -370,7 +368,7 @@ err3:
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&hw);
err2:
g_free (sw);
qemu_free (sw);
err1:
return NULL;
}
@@ -380,7 +378,7 @@ static void glue (audio_close_, TYPE) (SW *sw)
glue (audio_pcm_sw_fini_, TYPE) (sw);
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&sw->hw);
g_free (sw);
qemu_free (sw);
}
void glue (AUD_close_, TYPE) (QEMUSoundCard *card, SW *sw)


@@ -201,7 +201,7 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
@@ -246,7 +246,7 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
@@ -270,7 +270,7 @@ static void qesd_fini_out (HWVoiceOut *hw)
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
@@ -453,7 +453,7 @@ static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
@@ -477,7 +477,7 @@ static void qesd_fini_in (HWVoiceIn *hw)
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}


@@ -343,7 +343,7 @@ static void fmod_fini_out (HWVoiceOut *hw)
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
int bits16, mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
@@ -374,6 +374,7 @@ static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
bits16 = (mode & FSOUND_16BITS) != 0;
hw->samples = conf.nb_samples;
return 0;
}
@@ -404,7 +405,7 @@ static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
int bits16, mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
@@ -431,6 +432,7 @@ static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
bits16 = (mode & FSOUND_16BITS) != 0;
hw->samples = conf.nb_samples;
return 0;
}


@@ -33,8 +33,7 @@
#define ENDIAN_CONVERT(v) (v)
/* Signed 8 bit */
#define BSIZE 8
#define ITYPE int
#define IN_T int8_t
#define IN_MIN SCHAR_MIN
#define IN_MAX SCHAR_MAX
#define SIGNED
@@ -43,29 +42,25 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 8 bit */
#define BSIZE 8
#define ITYPE uint
#define IN_T uint8_t
#define IN_MIN 0
#define IN_MAX UCHAR_MAX
#define SHIFT 8
#include "mixeng_template.h"
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
#undef ENDIAN_CONVERT
#undef ENDIAN_CONVERSION
/* Signed 16 bit */
#define BSIZE 16
#define ITYPE int
#define IN_T int16_t
#define IN_MIN SHRT_MIN
#define IN_MAX SHRT_MAX
#define SIGNED
@@ -83,13 +78,11 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 16 bit */
#define BSIZE 16
#define ITYPE uint
#define IN_T uint16_t
#define IN_MIN 0
#define IN_MAX USHRT_MAX
#define SHIFT 16
@@ -105,13 +98,11 @@
#undef ENDIAN_CONVERSION
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Signed 32 bit */
#define BSIZE 32
#define ITYPE int
#define IN_T int32_t
#define IN_MIN INT32_MIN
#define IN_MAX INT32_MAX
#define SIGNED
@@ -129,13 +120,11 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 32 bit */
#define BSIZE 32
#define ITYPE uint
#define IN_T uint32_t
#define IN_MIN 0
#define IN_MAX UINT32_MAX
#define SHIFT 32
@@ -151,8 +140,7 @@
#undef ENDIAN_CONVERSION
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
t_sample *mixeng_conv[2][2][2][3] = {
@@ -338,7 +326,7 @@ void *st_rate_start (int inrate, int outrate)
void st_rate_stop (void *opaque)
{
g_free (opaque);
qemu_free (opaque);
}
void mixeng_clear (struct st_sample *buf, int len)


@@ -31,8 +31,7 @@
#define HALF (IN_MAX >> 1)
#endif
#define ET glue (ENDIAN_CONVERSION, glue (glue (glue (_, ITYPE), BSIZE), _t))
#define IN_T glue (glue (ITYPE, BSIZE), _t)
#define ET glue (ENDIAN_CONVERSION, glue (_, IN_T))
#ifdef FLOAT_MIXENG
static mixeng_real inline glue (conv_, ET) (IN_T v)
@@ -151,4 +150,3 @@ static void glue (glue (clip_, ET), _from_mono)
#undef ET
#undef HALF
#undef IN_T
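
Both mixeng hunks revolve around how the per-format conversion routines get their names: one scheme pastes a complete type name (IN_T, e.g. int8_t) onto an endianness prefix, the other assembles the same identifier from ITYPE and BSIZE fragments. A stand-alone sketch of the fragment-based token-pasting chain follows, with glue()/xglue() written out as qemu defines them; the "natural" value of ENDIAN_CONVERSION is assumed from mixeng.c and is not visible in the hunks above.

#include <stdio.h>

/* Two-level token pasting, as qemu's glue()/xglue() helpers define it. */
#define xglue(x, y) x ## y
#define glue(x, y) xglue(x, y)

/* Fragment scheme: build the identifier from ITYPE + BSIZE pieces.
 * ENDIAN_CONVERSION "natural" is assumed from mixeng.c (not shown above). */
#define ITYPE int
#define BSIZE 8
#define ENDIAN_CONVERSION natural
#define ET glue(ENDIAN_CONVERSION, glue(glue(glue(_, ITYPE), BSIZE), _t))

/* ET expands to natural_int8_t, so this defines conv_natural_int8_t(). */
static void glue(conv_, ET)(void)
{
    printf("%s\n", __func__);            /* prints "conv_natural_int8_t" */
}

int main(void)
{
    glue(conv_, ET)();
    return 0;
}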


@@ -508,7 +508,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
else {
g_free (oss->pcm_buf);
qemu_free (oss->pcm_buf);
}
oss->pcm_buf = NULL;
}
@@ -741,7 +741,7 @@ static void oss_fini_in (HWVoiceIn *hw)
oss_anal_close (&oss->fd);
if (oss->pcm_buf) {
g_free (oss->pcm_buf);
qemu_free (oss->pcm_buf);
oss->pcm_buf = NULL;
}
}


@@ -2,7 +2,8 @@
#include "qemu-common.h"
#include "audio.h"
#include <pulse/pulseaudio.h>
#include <pulse/simple.h>
#include <pulse/error.h>
#define AUDIO_CAP "pulseaudio"
#include "audio_int.h"
@@ -14,7 +15,7 @@ typedef struct {
int live;
int decr;
int rpos;
pa_stream *stream;
pa_simple *s;
void *pcm_buf;
struct audio_pt pt;
} PAVoiceOut;
@@ -25,23 +26,17 @@ typedef struct {
int dead;
int incr;
int wpos;
pa_stream *stream;
pa_simple *s;
void *pcm_buf;
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
} PAVoiceIn;
typedef struct {
static struct {
int samples;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
} conf = {
.samples = 4096,
};
@@ -56,146 +51,6 @@ static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
AUD_log (AUDIO_CAP, "Reason: %s\n", pa_strerror (err));
}
#ifndef PA_CONTEXT_IS_GOOD
static inline int PA_CONTEXT_IS_GOOD(pa_context_state_t x)
{
return
x == PA_CONTEXT_CONNECTING ||
x == PA_CONTEXT_AUTHORIZING ||
x == PA_CONTEXT_SETTING_NAME ||
x == PA_CONTEXT_READY;
}
#endif
#ifndef PA_STREAM_IS_GOOD
static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
{
return
x == PA_STREAM_CREATING ||
x == PA_STREAM_READY;
}
#endif
#define CHECK_SUCCESS_GOTO(c, rerror, expression, label) \
do { \
if (!(expression)) { \
if (rerror) { \
*(rerror) = pa_context_errno ((c)->context); \
} \
goto label; \
} \
} while (0);
#define CHECK_DEAD_GOTO(c, stream, rerror, label) \
do { \
if (!(c)->context || !PA_CONTEXT_IS_GOOD (pa_context_get_state((c)->context)) || \
!(stream) || !PA_STREAM_IS_GOOD (pa_stream_get_state ((stream)))) { \
if (((c)->context && pa_context_get_state ((c)->context) == PA_CONTEXT_FAILED) || \
((stream) && pa_stream_get_state ((stream)) == PA_STREAM_FAILED)) { \
if (rerror) { \
*(rerror) = pa_context_errno ((c)->context); \
} \
} else { \
if (rerror) { \
*(rerror) = PA_ERR_BADSTATE; \
} \
} \
goto label; \
} \
} while (0);
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
while (length > 0) {
size_t l;
while (!p->read_data) {
int r;
r = pa_stream_peek (p->stream, &p->read_data, &p->read_length);
CHECK_SUCCESS_GOTO (g, rerror, r == 0, unlock_and_fail);
if (!p->read_data) {
pa_threaded_mainloop_wait (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
} else {
p->read_index = 0;
}
}
l = p->read_length < length ? p->read_length : length;
memcpy (data, (const uint8_t *) p->read_data+p->read_index, l);
data = (uint8_t *) data + l;
length -= l;
p->read_index += l;
p->read_length -= l;
if (!p->read_length) {
int r;
r = pa_stream_drop (p->stream);
p->read_data = NULL;
p->read_length = 0;
p->read_index = 0;
CHECK_SUCCESS_GOTO (g, rerror, r == 0, unlock_and_fail);
}
}
pa_threaded_mainloop_unlock (g->mainloop);
return 0;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
return -1;
}
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
while (length > 0) {
size_t l;
int r;
while (!(l = pa_stream_writable_size (p->stream))) {
pa_threaded_mainloop_wait (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
}
CHECK_SUCCESS_GOTO (g, rerror, l != (size_t) -1, unlock_and_fail);
if (l > length) {
l = length;
}
r = pa_stream_write (p->stream, data, l, NULL, 0LL, PA_SEEK_RELATIVE);
CHECK_SUCCESS_GOTO (g, rerror, r >= 0, unlock_and_fail);
data = (const uint8_t *) data + l;
length -= l;
}
pa_threaded_mainloop_unlock (g->mainloop);
return 0;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
return -1;
}
static void *qpa_thread_out (void *arg)
{
PAVoiceOut *pa = arg;
@@ -222,7 +77,7 @@ static void *qpa_thread_out (void *arg)
}
}
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
decr = to_mix = audio_MIN (pa->live, conf.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -236,8 +91,8 @@ static void *qpa_thread_out (void *arg)
hw->clip (pa->pcm_buf, src, chunk);
if (qpa_simple_write (pa, pa->pcm_buf,
chunk << hw->info.shift, &error) < 0) {
if (pa_simple_write (pa->s, pa->pcm_buf,
chunk << hw->info.shift, &error) < 0) {
qpa_logerr (error, "pa_simple_write failed\n");
return NULL;
}
@@ -314,7 +169,7 @@ static void *qpa_thread_in (void *arg)
}
}
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
incr = to_grab = audio_MIN (pa->dead, conf.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -326,8 +181,8 @@ static void *qpa_thread_in (void *arg)
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (pa->pcm_buf, wpos);
if (qpa_simple_read (pa, buf,
chunk << hw->info.shift, &error) < 0) {
if (pa_simple_read (pa->s, buf,
chunk << hw->info.shift, &error) < 0) {
qpa_logerr (error, "pa_simple_read failed\n");
return NULL;
}
@@ -428,112 +283,6 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
}
}
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
case PA_CONTEXT_TERMINATED:
case PA_CONTEXT_FAILED:
pa_threaded_mainloop_signal (g->mainloop, 0);
break;
case PA_CONTEXT_UNCONNECTED:
case PA_CONTEXT_CONNECTING:
case PA_CONTEXT_AUTHORIZING:
case PA_CONTEXT_SETTING_NAME:
break;
}
}
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
case PA_STREAM_READY:
case PA_STREAM_FAILED:
case PA_STREAM_TERMINATED:
pa_threaded_mainloop_signal (g->mainloop, 0);
break;
case PA_STREAM_UNCONNECTED:
case PA_STREAM_CREATING:
break;
}
}
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
pa_threaded_mainloop_lock (g->mainloop);
stream = pa_stream_new (g->context, name, ss, map);
if (!stream) {
goto fail;
}
pa_stream_set_state_callback (stream, stream_state_cb, g);
pa_stream_set_read_callback (stream, stream_request_cb, g);
pa_stream_set_write_callback (stream, stream_request_cb, g);
if (dir == PA_STREAM_PLAYBACK) {
r = pa_stream_connect_playback (stream, dev, attr,
PA_STREAM_INTERPOLATE_TIMING
#ifdef PA_STREAM_ADJUST_LATENCY
|PA_STREAM_ADJUST_LATENCY
#endif
|PA_STREAM_AUTO_TIMING_UPDATE, NULL, NULL);
} else {
r = pa_stream_connect_record (stream, dev, attr,
PA_STREAM_INTERPOLATE_TIMING
#ifdef PA_STREAM_ADJUST_LATENCY
|PA_STREAM_ADJUST_LATENCY
#endif
|PA_STREAM_AUTO_TIMING_UPDATE);
}
if (r < 0) {
goto fail;
}
pa_threaded_mainloop_unlock (g->mainloop);
return stream;
fail:
pa_threaded_mainloop_unlock (g->mainloop);
if (stream) {
pa_stream_unref (stream);
}
*rerror = pa_context_errno (g->context);
return NULL;
}
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
@@ -557,24 +306,24 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
glob_paaudio.server,
pa->s = pa_simple_new (
conf.server,
"qemu",
PA_STREAM_PLAYBACK,
glob_paaudio.sink,
conf.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
&error
);
if (!pa->stream) {
if (!pa->s) {
qpa_logerr (error, "pa_simple_new for playback failed\n");
goto fail1;
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = glob_paaudio.samples;
hw->samples = conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
@@ -590,13 +339,11 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
fail3:
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
fail2:
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
pa_simple_free (pa->s);
pa->s = NULL;
fail1:
return -1;
}
@@ -614,24 +361,24 @@ static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
glob_paaudio.server,
pa->s = pa_simple_new (
conf.server,
"qemu",
PA_STREAM_RECORD,
glob_paaudio.source,
conf.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
&error
);
if (!pa->stream) {
if (!pa->s) {
qpa_logerr (error, "pa_simple_new for capture failed\n");
goto fail1;
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = glob_paaudio.samples;
hw->samples = conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
@@ -647,13 +394,11 @@ static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
return 0;
fail3:
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
fail2:
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
pa_simple_free (pa->s);
pa->s = NULL;
fail1:
return -1;
}
@@ -668,13 +413,13 @@ static void qpa_fini_out (HWVoiceOut *hw)
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
if (pa->s) {
pa_simple_free (pa->s);
pa->s = NULL;
}
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -688,225 +433,64 @@ static void qpa_fini_in (HWVoiceIn *hw)
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
if (pa->s) {
pa_simple_free (pa->s);
pa->s = NULL;
}
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
#endif
switch (cmd) {
case VOICE_VOLUME:
{
SWVoiceOut *sw;
va_list ap;
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceOut *);
va_end (ap);
v.channels = 2;
v.values[0] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.l) / UINT32_MAX;
v.values[1] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.r) / UINT32_MAX;
pa_threaded_mainloop_lock (g->mainloop);
op = pa_context_set_sink_input_volume (g->context,
pa_stream_get_index (pa->stream),
&v, NULL, NULL);
if (!op)
qpa_logerr (pa_context_errno (g->context),
"set_sink_input_volume() failed\n");
else
pa_operation_unref (op);
op = pa_context_set_sink_input_mute (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_sink_input_mute() failed\n");
} else {
pa_operation_unref (op);
}
pa_threaded_mainloop_unlock (g->mainloop);
}
}
(void) hw;
(void) cmd;
return 0;
}
static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
#endif
switch (cmd) {
case VOICE_VOLUME:
{
SWVoiceIn *sw;
va_list ap;
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceIn *);
va_end (ap);
v.channels = 2;
v.values[0] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.l) / UINT32_MAX;
v.values[1] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.r) / UINT32_MAX;
pa_threaded_mainloop_lock (g->mainloop);
/* FIXME: use the upcoming "set_source_output_{volume,mute}" */
op = pa_context_set_source_volume_by_index (g->context,
pa_stream_get_device_index (pa->stream),
&v, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_volume() failed\n");
} else {
pa_operation_unref(op);
}
op = pa_context_set_source_mute_by_index (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_mute() failed\n");
} else {
pa_operation_unref (op);
}
pa_threaded_mainloop_unlock (g->mainloop);
}
}
(void) hw;
(void) cmd;
return 0;
}
/* common */
static void *qpa_audio_init (void)
{
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
}
pa_threaded_mainloop_lock (g->mainloop);
if (pa_threaded_mainloop_start (g->mainloop) < 0) {
goto unlock_and_fail;
}
for (;;) {
pa_context_state_t state;
state = pa_context_get_state (g->context);
if (state == PA_CONTEXT_READY) {
break;
}
if (!PA_CONTEXT_IS_GOOD (state)) {
qpa_logerr (pa_context_errno (g->context),
"Wrong context state\n");
goto unlock_and_fail;
}
/* Wait until the context is ready */
pa_threaded_mainloop_wait (g->mainloop);
}
pa_threaded_mainloop_unlock (g->mainloop);
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
return NULL;
return &conf;
}
static void qpa_audio_fini (void *opaque)
{
paaudio *g = opaque;
if (g->mainloop) {
pa_threaded_mainloop_stop (g->mainloop);
}
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g->mainloop = NULL;
(void) opaque;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_paaudio.samples,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.server,
.valp = &conf.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.sink,
.valp = &conf.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.source,
.valp = &conf.source,
.descr = "source device name"
},
{ /* End of list */ }
@@ -937,6 +521,5 @@ struct audio_driver pa_audio_driver = {
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (PAVoiceOut),
.voice_size_in = sizeof (PAVoiceIn),
.ctl_caps = VOICE_VOLUME_CAP
.voice_size_in = sizeof (PAVoiceIn)
};
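
One side of this paaudio.c delta uses libpulse's blocking "simple" API, where each voice owns a pa_simple handle and its worker thread just calls pa_simple_write()/pa_simple_read(); the other side builds the same thing by hand on pa_threaded_mainloop and pa_stream. A stand-alone sketch of the simple-API pattern follows; the sample spec, names, and build command are illustrative assumptions, not qemu's actual settings.

/* Minimal libpulse "simple API" playback sketch, in the style one side of
 * the diff above uses. Parameters are illustrative, not qemu's.
 * Build (assumed): gcc demo.c $(pkg-config --cflags --libs libpulse-simple) */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    pa_sample_spec ss = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 44100,
        .channels = 2,
    };
    int error;

    pa_simple *s = pa_simple_new(NULL,            /* default server */
                                 "example",       /* client name */
                                 PA_STREAM_PLAYBACK,
                                 NULL,            /* default sink */
                                 "playback",      /* stream name */
                                 &ss, NULL, NULL, /* default map and buffering */
                                 &error);
    if (!s) {
        fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(error));
        return 1;
    }

    static int16_t silence[44100 * 2];            /* one second of stereo silence */
    if (pa_simple_write(s, silence, sizeof(silence), &error) < 0) {
        fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(error));
    }
    pa_simple_drain(s, &error);                   /* block until played out */
    pa_simple_free(s);
    return 0;
}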


@@ -202,26 +202,7 @@ static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
}
spice_server_playback_stop (&out->sin);
break;
case VOICE_VOLUME:
{
#if ((SPICE_INTERFACE_PLAYBACK_MAJOR >= 1) && (SPICE_INTERFACE_PLAYBACK_MINOR >= 2))
SWVoiceOut *sw;
va_list ap;
uint16_t vol[2];
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceOut *);
va_end (ap);
vol[0] = sw->vol.l / ((1ULL << 16) + 1);
vol[1] = sw->vol.r / ((1ULL << 16) + 1);
spice_server_playback_set_volume (&out->sin, 2, vol);
spice_server_playback_set_mute (&out->sin, sw->vol.mute);
#endif
break;
}
}
return 0;
}
@@ -323,26 +304,7 @@ static int line_in_ctl (HWVoiceIn *hw, int cmd, ...)
in->active = 0;
spice_server_record_stop (&in->sin);
break;
case VOICE_VOLUME:
{
#if ((SPICE_INTERFACE_RECORD_MAJOR >= 2) && (SPICE_INTERFACE_RECORD_MINOR >= 2))
SWVoiceIn *sw;
va_list ap;
uint16_t vol[2];
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceIn *);
va_end (ap);
vol[0] = sw->vol.l / ((1ULL << 16) + 1);
vol[1] = sw->vol.r / ((1ULL << 16) + 1);
spice_server_record_set_volume (&in->sin, 2, vol);
spice_server_record_set_mute (&in->sin, sw->vol.mute);
#endif
break;
}
}
return 0;
}
@@ -375,9 +337,6 @@ struct audio_driver spice_audio_driver = {
.max_voices_in = 1,
.voice_size_out = sizeof (SpiceVoiceOut),
.voice_size_in = sizeof (SpiceVoiceIn),
#if ((SPICE_INTERFACE_PLAYBACK_MAJOR >= 1) && (SPICE_INTERFACE_PLAYBACK_MINOR >= 2))
.ctl_caps = VOICE_VOLUME_CAP
#endif
};
void qemu_spice_audio_init (void)


@@ -30,7 +30,7 @@
typedef struct WAVVoiceOut {
HWVoiceOut hw;
FILE *f;
QEMUFile *f;
int64_t old_ticks;
void *pcm_buf;
int total_samples;
@@ -76,10 +76,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
dst = advance (wav->pcm_buf, rpos << hw->info.shift);
hw->clip (dst, src, convert_samples);
if (fwrite (dst, convert_samples << hw->info.shift, 1, wav->f) != 1) {
dolog ("wav_run_out: fwrite of %d bytes failed\nReaons: %s\n",
convert_samples << hw->info.shift, strerror (errno));
}
qemu_put_buffer (wav->f, dst, convert_samples << hw->info.shift);
rpos = (rpos + convert_samples) % hw->samples;
samples -= convert_samples;
@@ -155,20 +152,16 @@ static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf.wav_path, "wb");
wav->f = qemu_fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
qemu_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
}
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
dolog ("wav_init_out: failed to write header\nReason: %s\n",
strerror(errno));
return -1;
}
qemu_put_buffer (wav->f, hdr, sizeof (hdr));
return 0;
}
@@ -187,35 +180,16 @@ static void wav_fini_out (HWVoiceOut *hw)
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
dolog ("wav_fini_out: fseek to rlen failed\nReason: %s\n",
strerror(errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
dolog ("wav_fini_out: failed to write rlen\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
dolog ("wav_fini_out: fseek to dlen failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 4, 1, wav->f) != 1) {
dolog ("wav_fini_out: failed to write dlen\nReaons: %s\n",
strerror (errno));
goto doclose;
}
qemu_fseek (wav->f, 4, SEEK_SET);
qemu_put_buffer (wav->f, rlen, 4);
doclose:
if (fclose (wav->f)) {
dolog ("wav_fini_out: fclose %p failed\nReason: %s\n",
wav->f, strerror (errno));
}
qemu_fseek (wav->f, 32, SEEK_CUR);
qemu_put_buffer (wav->f, dlen, 4);
qemu_fclose (wav->f);
wav->f = NULL;
g_free (wav->pcm_buf);
qemu_free (wav->pcm_buf);
wav->pcm_buf = NULL;
}


@@ -3,7 +3,7 @@
#include "audio.h"
typedef struct {
FILE *f;
QEMUFile *f;
int bytes;
char *path;
int freq;
@@ -35,50 +35,27 @@ static void wav_destroy (void *opaque)
uint8_t dlen[4];
uint32_t datalen = wav->bytes;
uint32_t rifflen = datalen + 36;
Monitor *mon = cur_mon;
if (wav->f) {
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
monitor_printf (mon, "wav_destroy: rlen fseek failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
monitor_printf (mon, "wav_destroy: rlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
monitor_printf (mon, "wav_destroy: dlen fseek failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 1, 4, wav->f) != 4) {
monitor_printf (mon, "wav_destroy: dlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
doclose:
if (fclose (wav->f)) {
fprintf (stderr, "wav_destroy: fclose failed: %s",
strerror (errno));
}
qemu_fseek (wav->f, 4, SEEK_SET);
qemu_put_buffer (wav->f, rlen, 4);
qemu_fseek (wav->f, 32, SEEK_CUR);
qemu_put_buffer (wav->f, dlen, 4);
qemu_fclose (wav->f);
}
g_free (wav->path);
qemu_free (wav->path);
}
static void wav_capture (void *opaque, void *buf, int size)
{
WAVState *wav = opaque;
if (fwrite (buf, size, 1, wav->f) != 1) {
monitor_printf (cur_mon, "wav_capture: fwrite error\nReason: %s",
strerror (errno));
}
qemu_put_buffer (wav->f, buf, size);
wav->bytes += size;
}
@@ -94,9 +71,9 @@ static void wav_capture_info (void *opaque)
WAVState *wav = opaque;
char *path = wav->path;
monitor_printf (cur_mon, "Capturing audio(%d,%d,%d) to %s: %d bytes\n",
wav->freq, wav->bits, wav->nchannels,
path ? path : "<not available>", wav->bytes);
monitor_printf(cur_mon, "Capturing audio(%d,%d,%d) to %s: %d bytes\n",
wav->freq, wav->bits, wav->nchannels,
path ? path : "<not available>", wav->bytes);
}
static struct capture_ops wav_capture_ops = {
@@ -121,13 +98,13 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
CaptureVoiceOut *cap;
if (bits != 8 && bits != 16) {
monitor_printf (mon, "incorrect bit count %d, must be 8 or 16\n", bits);
monitor_printf(mon, "incorrect bit count %d, must be 8 or 16\n", bits);
return -1;
}
if (nchannels != 1 && nchannels != 2) {
monitor_printf (mon, "incorrect channel count %d, must be 1 or 2\n",
nchannels);
monitor_printf(mon, "incorrect channel count %d, must be 1 or 2\n",
nchannels);
return -1;
}
@@ -143,7 +120,7 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
ops.capture = wav_capture;
ops.destroy = wav_destroy;
wav = g_malloc0 (sizeof (*wav));
wav = qemu_mallocz (sizeof (*wav));
shift = bits16 + stereo;
hdr[34] = bits16 ? 0x10 : 0x08;
@@ -153,42 +130,32 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
le_store (hdr + 28, freq << shift, 4);
le_store (hdr + 32, 1 << shift, 2);
wav->f = fopen (path, "wb");
wav->f = qemu_fopen (path, "wb");
if (!wav->f) {
monitor_printf (mon, "Failed to open wave file `%s'\nReason: %s\n",
path, strerror (errno));
g_free (wav);
monitor_printf(mon, "Failed to open wave file `%s'\nReason: %s\n",
path, strerror (errno));
qemu_free (wav);
return -1;
}
wav->path = g_strdup (path);
wav->path = qemu_strdup (path);
wav->bits = bits;
wav->nchannels = nchannels;
wav->freq = freq;
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
monitor_printf (mon, "Failed to write header\nReason: %s\n",
strerror (errno));
goto error_free;
}
qemu_put_buffer (wav->f, hdr, sizeof (hdr));
cap = AUD_add_capture (&as, &ops, wav);
if (!cap) {
monitor_printf (mon, "Failed to add audio capture\n");
goto error_free;
monitor_printf(mon, "Failed to add audio capture\n");
qemu_free (wav->path);
qemu_fclose (wav->f);
qemu_free (wav);
return -1;
}
wav->cap = cap;
s->opaque = wav;
s->ops = wav_capture_ops;
return 0;
error_free:
g_free (wav->path);
if (fclose (wav->f)) {
monitor_printf (mon, "Failed to close wave file\nReason: %s\n",
strerror (errno));
}
g_free (wav);
return -1;
}
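
Both WAV writers above finish by seeking back into the canonical 44-byte RIFF header and patching two little-endian length fields once the payload size is known: the RIFF chunk size at byte offset 4 (payload + 36) and the data chunk size at offset 40, which the code reaches by writing 4 bytes at offset 4 and then seeking 32 bytes further. A stand-alone sketch of that fix-up follows, with le_store() written out explicitly; it uses the stdio variant that appears on one side of the diff.

#include <stdio.h>
#include <stdint.h>

/* Store a 32-bit value little-endian, as qemu's le_store() does. */
static void le_store(uint8_t *buf, uint32_t val, int len)
{
    for (int i = 0; i < len; i++) {
        buf[i] = (uint8_t)(val >> (8 * i));
    }
}

/* Patch the two length fields of a canonical 44-byte WAV header once the
 * number of payload bytes is known: RIFF size at offset 4 (= data + 36),
 * data chunk size at offset 40. */
int wav_fixup_lengths(FILE *f, uint32_t data_bytes)
{
    uint8_t rlen[4], dlen[4];

    le_store(rlen, data_bytes + 36, 4);
    le_store(dlen, data_bytes, 4);

    if (fseek(f, 4, SEEK_SET) || fwrite(rlen, 4, 1, f) != 1) {
        return -1;
    }
    /* after the write we are at offset 8; 8 + 32 = 40 */
    if (fseek(f, 32, SEEK_CUR) || fwrite(dlen, 4, 1, f) != 1) {
        return -1;
    }
    return 0;
}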


@@ -222,9 +222,9 @@ static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
err4:
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
qemu_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
@@ -310,10 +310,10 @@ static void winwave_fini_out (HWVoiceOut *hw)
wave->event = NULL;
}
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
qemu_free (wave->hdrs);
wave->hdrs = NULL;
}
@@ -349,15 +349,21 @@ static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
else {
hw->poll_mode = 0;
}
wave->paused = 0;
if (wave->paused) {
mr = waveOutRestart (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutRestart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
mr = waveOutPause (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
winwave_logerr (mr, "waveOutPause");
}
else {
wave->paused = 1;
@@ -505,9 +511,9 @@ static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
return 0;
err4:
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
qemu_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
@@ -544,10 +550,10 @@ static void winwave_fini_in (HWVoiceIn *hw)
wave->event = NULL;
}
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
qemu_free (wave->hdrs);
wave->hdrs = NULL;
}

balloon.c

@@ -25,11 +25,12 @@
*/
#include "monitor.h"
#include "qjson.h"
#include "qint.h"
#include "cpu-common.h"
#include "kvm.h"
#include "balloon.h"
#include "trace.h"
#include "qmp-commands.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
@@ -51,16 +52,6 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
return 0;
}
void qemu_remove_balloon_handler(void *opaque)
{
if (balloon_opaque != opaque) {
return;
}
balloon_event_fn = NULL;
balloon_stat_fn = NULL;
balloon_opaque = NULL;
}
static int qemu_balloon(ram_addr_t target)
{
if (!balloon_event_fn) {
@@ -71,48 +62,103 @@ static int qemu_balloon(ram_addr_t target)
return 1;
}
static int qemu_balloon_status(BalloonInfo *info)
static int qemu_balloon_status(MonitorCompletion cb, void *opaque)
{
if (!balloon_stat_fn) {
return 0;
}
balloon_stat_fn(balloon_opaque, info);
balloon_stat_fn(balloon_opaque, cb, opaque);
return 1;
}
BalloonInfo *qmp_query_balloon(Error **errp)
static void print_balloon_stat(const char *key, QObject *obj, void *opaque)
{
BalloonInfo *info;
Monitor *mon = opaque;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return NULL;
if (strcmp(key, "actual")) {
monitor_printf(mon, ",%s=%" PRId64, key,
qint_get_int(qobject_to_qint(obj)));
}
info = g_malloc0(sizeof(*info));
if (qemu_balloon_status(info) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
qapi_free_BalloonInfo(info);
return NULL;
}
return info;
}
void qmp_balloon(int64_t value, Error **errp)
void monitor_print_balloon(Monitor *mon, const QObject *data)
{
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
QDict *qdict;
qdict = qobject_to_qdict(data);
if (!qdict_haskey(qdict, "actual")) {
return;
}
monitor_printf(mon, "balloon: actual=%" PRId64,
qdict_get_int(qdict, "actual") >> 20);
qdict_iter(qdict, print_balloon_stat, mon);
monitor_printf(mon, "\n");
}
/**
* do_info_balloon(): Balloon information
*
* Make an asynchronous request for balloon info. When the request completes
* a QDict will be returned according to the following specification:
*
* - "actual": current balloon value in bytes
* The following fields may or may not be present:
* - "mem_swapped_in": Amount of memory swapped in (bytes)
* - "mem_swapped_out": Amount of memory swapped out (bytes)
* - "major_page_faults": Number of major faults
* - "minor_page_faults": Number of minor faults
* - "free_mem": Total amount of free and unused memory (bytes)
* - "total_mem": Total amount of available memory (bytes)
*
* Example:
*
* { "actual": 1073741824, "mem_swapped_in": 0, "mem_swapped_out": 0,
* "major_page_faults": 142, "minor_page_faults": 239245,
* "free_mem": 1014185984, "total_mem": 1044668416 }
*/
int do_info_balloon(Monitor *mon, MonitorCompletion cb, void *opaque)
{
int ret;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
qerror_report(QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return -1;
}
if (value <= 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}
if (qemu_balloon(value) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
ret = qemu_balloon_status(cb, opaque);
if (!ret) {
qerror_report(QERR_DEVICE_NOT_ACTIVE, "balloon");
return -1;
}
return 0;
}
/**
* do_balloon(): Request VM to change its memory allocation
*/
int do_balloon(Monitor *mon, const QDict *params,
MonitorCompletion cb, void *opaque)
{
int64_t target;
int ret;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
qerror_report(QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return -1;
}
target = qdict_get_int(params, "value");
if (target <= 0) {
qerror_report(QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return -1;
}
ret = qemu_balloon(target);
if (ret == 0) {
qerror_report(QERR_DEVICE_NOT_ACTIVE, "balloon");
return -1;
}
cb(opaque, NULL);
return 0;
}
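
On the MonitorCompletion-based side of this delta, do_info_balloon() hands a callback down to the device's QEMUBalloonStatus hook, and the device answers by building the QDict documented in the comment above and completing the request. The sketch below shows what such a hook could look like; the function, the sample value, and the reference-counting detail are illustrative and this only compiles inside a qemu tree of this era, it is not the actual virtio-balloon code.

/* Illustrative device-side QEMUBalloonStatus hook: build the QDict that
 * do_info_balloon() documents (at least "actual", in bytes) and hand it
 * to the MonitorCompletion callback. Not the real virtio-balloon code. */
#include <stdint.h>
#include "qdict.h"
#include "qint.h"
#include "monitor.h"

static void my_balloon_stat(void *opaque, MonitorCompletion cb, void *cb_data)
{
    uint64_t actual = 1024ULL * 1024 * 1024;   /* pretend 1 GB is currently mapped */
    QDict *response = qdict_new();

    qdict_put_obj(response, "actual", QOBJECT(qint_from_int(actual)));
    /* optional keys such as "free_mem" or "total_mem" would be added the same way */

    cb(cb_data, QOBJECT(response));            /* completes the monitor request */
    qobject_decref(QOBJECT(response));
}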


@@ -15,13 +15,17 @@
#define _QEMU_BALLOON_H
#include "monitor.h"
#include "qapi-types.h"
typedef void (QEMUBalloonEvent)(void *opaque, ram_addr_t target);
typedef void (QEMUBalloonStatus)(void *opaque, BalloonInfo *info);
typedef void (QEMUBalloonStatus)(void *opaque, MonitorCompletion cb,
void *cb_data);
int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque);
void qemu_remove_balloon_handler(void *opaque);
void monitor_print_balloon(Monitor *mon, const QObject *data);
int do_info_balloon(Monitor *mon, MonitorCompletion cb, void *opaque);
int do_balloon(Monitor *mon, const QDict *params,
MonitorCompletion cb, void *opaque);
#endif


@@ -91,7 +91,7 @@ int slow_bitmap_intersects(const unsigned long *bitmap1,
static inline unsigned long *bitmap_new(int nbits)
{
int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
return g_malloc0(len);
return qemu_mallocz(len);
}
static inline void bitmap_zero(unsigned long *dst, int nbits)
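
bitmap_new() above sizes the allocation by rounding the bit count up to whole unsigned longs (BITS_TO_LONGS) and multiplying by sizeof(unsigned long). A small worked example of that arithmetic follows; the BITS_TO_LONGS definition is spelled out here as an assumption (qemu expresses it via a DIV_ROUND_UP helper).

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Round a bit count up to whole unsigned longs, as BITS_TO_LONGS does. */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(nr) (((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
    int nbits = 100;
    size_t len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);

    /* On a 64-bit host: 100 bits -> 2 longs -> 16 bytes, zero-filled. */
    unsigned long *bitmap = calloc(1, len);
    printf("nbits=%d -> %zu bytes\n", nbits, len);
    free(bitmap);
    return 0;
}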


@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
@@ -18,6 +16,7 @@
#include "hw/hw.h"
#include "qemu-queue.h"
#include "qemu-timer.h"
#include "monitor.h"
#include "block-migration.h"
#include "migration.h"
#include "blockdev.h"
@@ -181,7 +180,7 @@ static void alloc_aio_bitmap(BlkMigDevState *bmds)
BDRV_SECTORS_PER_DIRTY_CHUNK * 8 - 1;
bitmap_size /= BDRV_SECTORS_PER_DIRTY_CHUNK * 8;
bmds->aio_bitmap = g_malloc0(bitmap_size);
bmds->aio_bitmap = qemu_mallocz(bitmap_size);
}
static void blk_mig_read_cb(void *opaque, int ret)
@@ -203,7 +202,8 @@ static void blk_mig_read_cb(void *opaque, int ret)
assert(block_mig_state.submitted >= 0);
}
static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
static int mig_save_device_bulk(Monitor *mon, QEMUFile *f,
BlkMigDevState *bmds)
{
int64_t total_sectors = bmds->total_sectors;
int64_t cur_sector = bmds->cur_sector;
@@ -235,8 +235,8 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
nr_sectors = total_sectors - cur_sector;
}
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk = qemu_malloc(sizeof(BlkMigBlock));
blk->buf = qemu_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = cur_sector;
blk->nr_sectors = nr_sectors;
@@ -251,12 +251,22 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
if (!blk->aiocb) {
goto error;
}
block_mig_state.submitted++;
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
bmds->cur_sector = cur_sector + nr_sectors;
return (bmds->cur_sector >= total_sectors);
error:
monitor_printf(mon, "Error reading sector %" PRId64 "\n", cur_sector);
qemu_file_set_error(f);
qemu_free(blk->buf);
qemu_free(blk);
return 0;
}
static void set_dirty_tracking(int enable)
@@ -270,6 +280,7 @@ static void set_dirty_tracking(int enable)
static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
{
Monitor *mon = opaque;
BlkMigDevState *bmds;
int64_t sectors;
@@ -279,7 +290,7 @@ static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
return;
}
bmds = g_malloc0(sizeof(BlkMigDevState));
bmds = qemu_mallocz(sizeof(BlkMigDevState));
bmds->bs = bs;
bmds->bulk_completed = 0;
bmds->total_sectors = sectors;
@@ -292,17 +303,19 @@ static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
block_mig_state.total_sector_sum += sectors;
if (bmds->shared_base) {
DPRINTF("Start migration for %s with shared base image\n",
bs->device_name);
monitor_printf(mon, "Start migration for %s with shared base "
"image\n",
bs->device_name);
} else {
DPRINTF("Start full migration for %s\n", bs->device_name);
monitor_printf(mon, "Start full migration for %s\n",
bs->device_name);
}
QSIMPLEQ_INSERT_TAIL(&block_mig_state.bmds_list, bmds, entry);
}
}
static void init_blk_migration(QEMUFile *f)
static void init_blk_migration(Monitor *mon, QEMUFile *f)
{
block_mig_state.submitted = 0;
block_mig_state.read_done = 0;
@@ -313,10 +326,10 @@ static void init_blk_migration(QEMUFile *f)
block_mig_state.total_time = 0;
block_mig_state.reads = 0;
bdrv_iterate(init_blk_migration_it, NULL);
bdrv_iterate(init_blk_migration_it, mon);
}
static int blk_mig_save_bulked_block(QEMUFile *f)
static int blk_mig_save_bulked_block(Monitor *mon, QEMUFile *f)
{
int64_t completed_sector_sum = 0;
BlkMigDevState *bmds;
@@ -325,7 +338,7 @@ static int blk_mig_save_bulked_block(QEMUFile *f)
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->bulk_completed == 0) {
if (mig_save_device_bulk(f, bmds) == 1) {
if (mig_save_device_bulk(mon, f, bmds) == 1) {
/* completed bulk section for this device */
bmds->bulk_completed = 1;
}
@@ -347,7 +360,8 @@ static int blk_mig_save_bulked_block(QEMUFile *f)
block_mig_state.prev_progress = progress;
qemu_put_be64(f, (progress << BDRV_SECTOR_BITS)
| BLK_MIG_FLAG_PROGRESS);
DPRINTF("Completed %d %%\r", progress);
monitor_printf(mon, "Completed %d %%\r", progress);
monitor_flush(mon);
}
return ret;
@@ -362,18 +376,17 @@ static void blk_mig_reset_dirty_cursor(void)
}
}
static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int is_async)
static int mig_save_device_dirty(Monitor *mon, QEMUFile *f,
BlkMigDevState *bmds, int is_async)
{
BlkMigBlock *blk;
int64_t total_sectors = bmds->total_sectors;
int64_t sector;
int nr_sectors;
int ret = -EIO;
for (sector = bmds->cur_dirty; sector < bmds->total_sectors;) {
if (bmds_aio_inflight(bmds, sector)) {
bdrv_drain_all();
qemu_aio_flush();
}
if (bdrv_get_dirty(bmds->bs, sector)) {
@@ -382,8 +395,8 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
} else {
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk = qemu_malloc(sizeof(BlkMigBlock));
blk->buf = qemu_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = sector;
blk->nr_sectors = nr_sectors;
@@ -399,17 +412,20 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
blk->aiocb = bdrv_aio_readv(bmds->bs, sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
if (!blk->aiocb) {
goto error;
}
block_mig_state.submitted++;
bmds_set_aio_inflight(bmds, sector, nr_sectors, 1);
} else {
ret = bdrv_read(bmds->bs, sector, blk->buf, nr_sectors);
if (ret < 0) {
if (bdrv_read(bmds->bs, sector, blk->buf,
nr_sectors) < 0) {
goto error;
}
blk_send(f, blk);
g_free(blk->buf);
g_free(blk);
qemu_free(blk->buf);
qemu_free(blk);
}
bdrv_reset_dirty(bmds->bs, sector, nr_sectors);
@@ -422,20 +438,20 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
return (bmds->cur_dirty >= bmds->total_sectors);
error:
DPRINTF("Error reading sector %" PRId64 "\n", sector);
qemu_file_set_error(f, ret);
g_free(blk->buf);
g_free(blk);
monitor_printf(mon, "Error reading sector %" PRId64 "\n", sector);
qemu_file_set_error(f);
qemu_free(blk->buf);
qemu_free(blk);
return 0;
}
static int blk_mig_save_dirty_block(QEMUFile *f, int is_async)
static int blk_mig_save_dirty_block(Monitor *mon, QEMUFile *f, int is_async)
{
BlkMigDevState *bmds;
int ret = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (mig_save_device_dirty(f, bmds, is_async) == 0) {
if (mig_save_device_dirty(mon, f, bmds, is_async) == 0) {
ret = 1;
break;
}
@@ -457,14 +473,14 @@ static void flush_blks(QEMUFile* f)
break;
}
if (blk->ret < 0) {
qemu_file_set_error(f, blk->ret);
qemu_file_set_error(f);
break;
}
blk_send(f, blk);
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
g_free(blk);
qemu_free(blk->buf);
qemu_free(blk);
block_mig_state.read_done--;
block_mig_state.transferred++;
@@ -504,7 +520,7 @@ static int is_stage2_completed(void)
if ((remaining_dirty / bwidth) <=
migrate_max_downtime()) {
/* finish stage2 because we think that we can finish remaining work
/* finish stage2 because we think that we can finish remaing work
below max_downtime */
return 1;
@@ -514,7 +530,7 @@ static int is_stage2_completed(void)
return 0;
}
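/* Illustrative figures only: if roughly 200 MB of dirty blocks remain and the
 * measured rate works out to about 1 GB/s, the estimated drain time is about
 * 0.2 s, so stage 2 is declared complete only if migrate_max_downtime()
 * allows at least that much. */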
static void blk_mig_cleanup(void)
static void blk_mig_cleanup(Monitor *mon)
{
BlkMigDevState *bmds;
BlkMigBlock *blk;
@@ -525,26 +541,26 @@ static void blk_mig_cleanup(void)
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_set_in_use(bmds->bs, 0);
drive_put_ref(drive_get_by_blockdev(bmds->bs));
g_free(bmds->aio_bitmap);
g_free(bmds);
qemu_free(bmds->aio_bitmap);
qemu_free(bmds);
}
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
g_free(blk);
qemu_free(blk->buf);
qemu_free(blk);
}
monitor_printf(mon, "\n");
}
static int block_save_live(QEMUFile *f, int stage, void *opaque)
static int block_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
{
int ret;
DPRINTF("Enter save live stage %d submitted %d transferred %d\n",
stage, block_mig_state.submitted, block_mig_state.transferred);
if (stage < 0) {
blk_mig_cleanup();
blk_mig_cleanup(mon);
return 0;
}
@@ -555,7 +571,7 @@ static int block_save_live(QEMUFile *f, int stage, void *opaque)
}
if (stage == 1) {
init_blk_migration(f);
init_blk_migration(mon, f);
/* start track dirty blocks */
set_dirty_tracking(1);
@@ -563,10 +579,9 @@ static int block_save_live(QEMUFile *f, int stage, void *opaque)
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
if (qemu_file_has_error(f)) {
blk_mig_cleanup(mon);
return 0;
}
blk_mig_reset_dirty_cursor();
@@ -578,12 +593,12 @@ static int block_save_live(QEMUFile *f, int stage, void *opaque)
qemu_file_get_rate_limit(f)) {
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(f) == 0) {
if (blk_mig_save_bulked_block(mon, f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
} else {
if (blk_mig_save_dirty_block(f, 1) == 0) {
if (blk_mig_save_dirty_block(mon, f, 1) == 0) {
/* no more dirty blocks */
break;
}
@@ -592,10 +607,9 @@ static int block_save_live(QEMUFile *f, int stage, void *opaque)
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
if (qemu_file_has_error(f)) {
blk_mig_cleanup(mon);
return 0;
}
}
@@ -604,18 +618,17 @@ static int block_save_live(QEMUFile *f, int stage, void *opaque)
all async read completed */
assert(block_mig_state.submitted == 0);
while (blk_mig_save_dirty_block(f, 0) != 0);
blk_mig_cleanup();
while (blk_mig_save_dirty_block(mon, f, 0) != 0);
blk_mig_cleanup(mon);
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
ret = qemu_file_get_error(f);
if (ret) {
return ret;
if (qemu_file_has_error(f)) {
return 0;
}
DPRINTF("Block migration completed\n");
monitor_printf(mon, "Block migration completed\n");
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
@@ -633,7 +646,6 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
uint8_t *buf;
int64_t total_sectors = 0;
int nr_sectors;
int ret;
do {
addr = qemu_get_be64(f);
@@ -642,6 +654,7 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
addr >>= BDRV_SECTOR_BITS;
if (flags & BLK_MIG_FLAG_DEVICE_BLOCK) {
int ret;
/* get device name */
len = qemu_get_byte(f);
qemu_get_buffer(f, (uint8_t *)device_name, len);
@@ -670,12 +683,12 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
buf = g_malloc(BLOCK_SIZE);
buf = qemu_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
qemu_free(buf);
if (ret < 0) {
return ret;
}
@@ -691,9 +704,8 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
fprintf(stderr, "Unknown flags\n");
return -EINVAL;
}
ret = qemu_file_get_error(f);
if (ret != 0) {
return ret;
if (qemu_file_has_error(f)) {
return -EIO;
}
} while (!(flags & BLK_MIG_FLAG_EOS));

block.c

File diff suppressed because it is too large.

block.h

@@ -4,7 +4,6 @@
#include "qemu-aio.h"
#include "qemu-common.h"
#include "qemu-option.h"
#include "qemu-coroutine.h"
#include "qobject.h"
/* block.c */
@@ -15,61 +14,19 @@ typedef struct BlockDriverInfo {
int cluster_size;
/* offset at which the VM state can be saved (0 if not possible) */
int64_t vm_state_offset;
bool is_dirty;
} BlockDriverInfo;
typedef struct BlockFragInfo {
uint64_t allocated_clusters;
uint64_t total_clusters;
uint64_t fragmented_clusters;
} BlockFragInfo;
typedef struct QEMUSnapshotInfo {
char id_str[128]; /* unique snapshot id */
/* the following fields are informative. They are not needed for
the consistency of the snapshot */
char name[256]; /* user chosen name */
uint64_t vm_state_size; /* VM state info size */
char name[256]; /* user choosen name */
uint32_t vm_state_size; /* VM state info size */
uint32_t date_sec; /* UTC date of the snapshot */
uint32_t date_nsec;
uint64_t vm_clock_nsec; /* VM clock relative to boot */
} QEMUSnapshotInfo;
/* Callbacks for block device models */
typedef struct BlockDevOps {
/*
* Runs when virtual media changed (monitor commands eject, change)
* Argument load is true on load and false on eject.
* Beware: doesn't run when a host device's physical media
* changes. Sure would be useful if it did.
* Device models with removable media must implement this callback.
*/
void (*change_media_cb)(void *opaque, bool load);
/*
* Runs when an eject request is issued from the monitor, the tray
* is closed, and the medium is locked.
* Device models that do not implement is_medium_locked will not need
* this callback. Device models that can lock the medium or tray might
* want to implement the callback and unlock the tray when "force" is
* true, even if they do not support eject requests.
*/
void (*eject_request_cb)(void *opaque, bool force);
/*
* Is the virtual tray open?
* Device models implement this only when the device has a tray.
*/
bool (*is_tray_open)(void *opaque);
/*
* Is the virtual medium locked into the device?
* Device models implement this only when device has such a lock.
*/
bool (*is_medium_locked)(void *opaque);
/*
* Runs when the size changed (e.g. monitor command block_resize)
*/
void (*resize_cb)(void *opaque);
} BlockDevOps;
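/* Hedged sketch, not QEMU code: a device model with a tray might wire these
 * callbacks up roughly as follows (the my_* names are hypothetical), and
 * register the table with bdrv_set_dev_ops() at device init time. */
static void my_change_media_cb(void *opaque, bool load)
{
    /* refresh device state when a medium is loaded or ejected */
}
static bool my_is_tray_open(void *opaque)
{
    return false;   /* tray modelled as always closed */
}
static const BlockDevOps my_dev_ops = {
    .change_media_cb = my_change_media_cb,
    .is_tray_open    = my_is_tray_open,
};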
#define BDRV_O_RDWR 0x0002
#define BDRV_O_SNAPSHOT 0x0008 /* open the file read only and save writes in a snapshot */
#define BDRV_O_NOCACHE 0x0020 /* do not use the host page cache */
@@ -77,8 +34,6 @@ typedef struct BlockDevOps {
#define BDRV_O_NATIVE_AIO 0x0080 /* use native AIO instead of the thread pool */
#define BDRV_O_NO_BACKING 0x0100 /* don't open the backing file */
#define BDRV_O_NO_FLUSH 0x0200 /* disable flushing on this disk */
#define BDRV_O_COPY_ON_READ 0x0400 /* copy read backing sectors into image */
#define BDRV_O_INCOMING 0x0800 /* consistency hint for incoming migration */
#define BDRV_O_CACHE_MASK (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)
@@ -93,25 +48,15 @@ typedef enum {
typedef enum {
BDRV_ACTION_REPORT, BDRV_ACTION_IGNORE, BDRV_ACTION_STOP
} BlockQMPEventAction;
} BlockMonEventAction;
void bdrv_iostatus_enable(BlockDriverState *bs);
void bdrv_iostatus_reset(BlockDriverState *bs);
void bdrv_iostatus_disable(BlockDriverState *bs);
bool bdrv_iostatus_is_enabled(const BlockDriverState *bs);
void bdrv_iostatus_set_err(BlockDriverState *bs, int error);
void bdrv_emit_qmp_error_event(const BlockDriverState *bdrv,
BlockQMPEventAction action, int is_read);
void bdrv_mon_event(const BlockDriverState *bdrv,
BlockMonEventAction action, int is_read);
void bdrv_info_print(Monitor *mon, const QObject *data);
void bdrv_info(Monitor *mon, QObject **ret_data);
void bdrv_stats_print(Monitor *mon, const QObject *data);
void bdrv_info_stats(Monitor *mon, QObject **ret_data);
/* disk I/O throttling */
void bdrv_io_limits_enable(BlockDriverState *bs);
void bdrv_io_limits_disable(BlockDriverState *bs);
bool bdrv_io_limits_enabled(BlockDriverState *bs);
void bdrv_init(void);
void bdrv_init_with_whitelist(void);
BlockDriver *bdrv_find_protocol(const char *filename);
@@ -122,23 +67,14 @@ int bdrv_create(BlockDriver *drv, const char* filename,
int bdrv_create_file(const char* filename, QEMUOptionParameter *options);
BlockDriverState *bdrv_new(const char *device_name);
void bdrv_make_anon(BlockDriverState *bs);
void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top);
void bdrv_delete(BlockDriverState *bs);
int bdrv_parse_cache_flags(const char *mode, int *flags);
int bdrv_file_open(BlockDriverState **pbs, const char *filename, int flags);
int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
BlockDriver *drv);
void bdrv_close(BlockDriverState *bs);
int bdrv_attach_dev(BlockDriverState *bs, void *dev);
void bdrv_attach_dev_nofail(BlockDriverState *bs, void *dev);
void bdrv_detach_dev(BlockDriverState *bs, void *dev);
void *bdrv_get_attached_dev(BlockDriverState *bs);
void bdrv_set_dev_ops(BlockDriverState *bs, const BlockDevOps *ops,
void *opaque);
void bdrv_dev_eject_request(BlockDriverState *bs, bool force);
bool bdrv_dev_has_removable_media(BlockDriverState *bs);
bool bdrv_dev_is_tray_open(BlockDriverState *bs);
bool bdrv_dev_is_medium_locked(BlockDriverState *bs);
int bdrv_attach(BlockDriverState *bs, DeviceState *qdev);
void bdrv_detach(BlockDriverState *bs, DeviceState *qdev);
DeviceState *bdrv_get_attached(BlockDriverState *bs);
int bdrv_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_write(BlockDriverState *bs, int64_t sector_num,
@@ -149,31 +85,15 @@ int bdrv_pwrite(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int bdrv_pwrite_sync(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int coroutine_fn bdrv_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int coroutine_fn bdrv_co_copy_on_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
int coroutine_fn bdrv_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
/*
* Efficiently zero a region of the disk image. Note that this is a regular
* I/O request like read or write and should have a reasonable size. This
* function is not suitable for zeroing the entire image in a single request
* because it may allocate memory for the entire region.
*/
int coroutine_fn bdrv_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
int nb_sectors);
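/* Hedged illustration (hypothetical helper, not part of this header): as the
 * note above says, a large region is zeroed in bounded chunks rather than in
 * one request; callers run this from coroutine context. */
static int coroutine_fn example_zero_all(BlockDriverState *bs,
                                         int64_t total_sectors)
{
    int64_t sector = 0;
    while (sector < total_sectors) {
        int n = MIN(4096, total_sectors - sector);  /* arbitrary chunk size */
        int ret = bdrv_co_write_zeroes(bs, sector, n);
        if (ret < 0) {
            return ret;
        }
        sector += n;
    }
    return 0;
}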
int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
const char *backing_file);
int bdrv_write_sync(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_truncate(BlockDriverState *bs, int64_t offset);
int64_t bdrv_getlength(BlockDriverState *bs);
int64_t bdrv_get_allocated_file_size(BlockDriverState *bs);
void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
void bdrv_guess_geometry(BlockDriverState *bs, int *pcyls, int *pheads, int *psecs);
int bdrv_commit(BlockDriverState *bs);
int bdrv_commit_all(void);
void bdrv_commit_all(void);
int bdrv_change_backing_file(BlockDriverState *bs,
const char *backing_file, const char *backing_fmt);
void bdrv_register(BlockDriver *bdrv);
@@ -183,12 +103,13 @@ typedef struct BdrvCheckResult {
int corruptions;
int leaks;
int check_errors;
BlockFragInfo bfi;
} BdrvCheckResult;
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res);
/* async block I/O */
typedef struct BlockDriverAIOCB BlockDriverAIOCB;
typedef void BlockDriverCompletionFunc(void *opaque, int ret);
typedef void BlockDriverDirtyHandler(BlockDriverState *bs, int64_t sector,
int sector_num);
BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
@@ -199,9 +120,6 @@ BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *bdrv_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *bdrv_aio_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque);
void bdrv_aio_cancel(BlockDriverAIOCB *acb);
typedef struct BlockRequest {
@@ -225,21 +143,12 @@ BlockDriverAIOCB *bdrv_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb, void *opaque);
/* Invalidate any cached metadata used by image formats */
void bdrv_invalidate_cache(BlockDriverState *bs);
void bdrv_invalidate_cache_all(void);
void bdrv_clear_incoming_migration_all(void);
/* Ensure contents are flushed to disk. */
int bdrv_flush(BlockDriverState *bs);
int coroutine_fn bdrv_co_flush(BlockDriverState *bs);
void bdrv_flush_all(void);
void bdrv_close_all(void);
void bdrv_drain_all(void);
int bdrv_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
int bdrv_co_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
int bdrv_has_zero_init(BlockDriverState *bs);
int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
int *pnum);
@@ -262,28 +171,26 @@ typedef enum FDriveType {
FDRIVE_DRV_NONE = 0x03, /* No drive connected */
} FDriveType;
typedef enum FDriveRate {
FDRIVE_RATE_500K = 0x00, /* 500 Kbps */
FDRIVE_RATE_300K = 0x01, /* 300 Kbps */
FDRIVE_RATE_250K = 0x02, /* 250 Kbps */
FDRIVE_RATE_1M = 0x03, /* 1 Mbps */
} FDriveRate;
void bdrv_get_floppy_geometry_hint(BlockDriverState *bs, int *nb_heads,
int *max_track, int *last_sect,
FDriveType drive_in, FDriveType *drive,
FDriveRate *rate);
FDriveType drive_in, FDriveType *drive);
int bdrv_get_translation_hint(BlockDriverState *bs);
void bdrv_set_on_error(BlockDriverState *bs, BlockErrorAction on_read_error,
BlockErrorAction on_write_error);
BlockErrorAction bdrv_get_on_error(BlockDriverState *bs, int is_read);
void bdrv_set_removable(BlockDriverState *bs, int removable);
int bdrv_is_removable(BlockDriverState *bs);
int bdrv_is_read_only(BlockDriverState *bs);
int bdrv_is_sg(BlockDriverState *bs);
int bdrv_enable_write_cache(BlockDriverState *bs);
int bdrv_is_inserted(BlockDriverState *bs);
int bdrv_media_changed(BlockDriverState *bs);
void bdrv_lock_medium(BlockDriverState *bs, bool locked);
void bdrv_eject(BlockDriverState *bs, bool eject_flag);
int bdrv_is_locked(BlockDriverState *bs);
void bdrv_set_locked(BlockDriverState *bs, int locked);
int bdrv_eject(BlockDriverState *bs, int eject_flag);
void bdrv_set_change_cb(BlockDriverState *bs,
void (*change_cb)(void *opaque, int reason),
void *opaque);
void bdrv_get_format(BlockDriverState *bs, char *buf, int buf_size);
BlockDriverState *bdrv_find(const char *name);
BlockDriverState *bdrv_next(BlockDriverState *bs);
@@ -303,8 +210,6 @@ int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
const char *bdrv_get_encrypted_filename(BlockDriverState *bs);
void bdrv_get_backing_filename(BlockDriverState *bs,
char *filename, int filename_size);
void bdrv_get_full_backing_filename(BlockDriverState *bs,
char *dest, size_t sz);
int bdrv_can_snapshot(BlockDriverState *bs);
int bdrv_is_snapshot(BlockDriverState *bs);
BlockDriverState *bdrv_snapshots(void);
@@ -335,9 +240,6 @@ int bdrv_img_create(const char *filename, const char *fmt,
const char *base_filename, const char *base_fmt,
char *options, uint64_t img_size, int flags);
void bdrv_set_buffer_alignment(BlockDriverState *bs, int align);
void *qemu_blockalign(BlockDriverState *bs, size_t size);
#define BDRV_SECTORS_PER_DIRTY_CHUNK 2048
void bdrv_set_dirty_tracking(BlockDriverState *bs, int enable);
@@ -346,29 +248,9 @@ void bdrv_reset_dirty(BlockDriverState *bs, int64_t cur_sector,
int nr_sectors);
int64_t bdrv_get_dirty_count(BlockDriverState *bs);
void bdrv_enable_copy_on_read(BlockDriverState *bs);
void bdrv_disable_copy_on_read(BlockDriverState *bs);
void bdrv_set_in_use(BlockDriverState *bs, int in_use);
int bdrv_in_use(BlockDriverState *bs);
enum BlockAcctType {
BDRV_ACCT_READ,
BDRV_ACCT_WRITE,
BDRV_ACCT_FLUSH,
BDRV_MAX_IOTYPE,
};
typedef struct BlockAcctCookie {
int64_t bytes;
int64_t start_time_ns;
enum BlockAcctType type;
} BlockAcctCookie;
void bdrv_acct_start(BlockDriverState *bs, BlockAcctCookie *cookie,
int64_t bytes, enum BlockAcctType type);
void bdrv_acct_done(BlockDriverState *bs, BlockAcctCookie *cookie);
typedef enum {
BLKDBG_L1_UPDATE,
@@ -420,43 +302,4 @@ typedef enum {
#define BLKDBG_EVENT(bs, evt) bdrv_debug_event(bs, evt)
void bdrv_debug_event(BlockDriverState *bs, BlkDebugEvent event);
/* Convenience for block device models */
typedef struct BlockConf {
BlockDriverState *bs;
uint16_t physical_block_size;
uint16_t logical_block_size;
uint16_t min_io_size;
uint32_t opt_io_size;
int32_t bootindex;
uint32_t discard_granularity;
} BlockConf;
static inline unsigned int get_physical_block_exp(BlockConf *conf)
{
unsigned int exp = 0, size;
for (size = conf->physical_block_size;
size > conf->logical_block_size;
size >>= 1) {
exp++;
}
return exp;
}
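/* For example, with a 4096-byte physical and 512-byte logical block size the
 * loop halves 4096 three times before reaching 512, so this returns 3
 * (physical = logical << 3). */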
#define DEFINE_BLOCK_PROPERTIES(_state, _conf) \
DEFINE_PROP_DRIVE("drive", _state, _conf.bs), \
DEFINE_PROP_BLOCKSIZE("logical_block_size", _state, \
_conf.logical_block_size, 512), \
DEFINE_PROP_BLOCKSIZE("physical_block_size", _state, \
_conf.physical_block_size, 512), \
DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0), \
DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0), \
DEFINE_PROP_INT32("bootindex", _state, _conf.bootindex, -1), \
DEFINE_PROP_UINT32("discard_granularity", _state, \
_conf.discard_granularity, 0)
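/* A device model would typically pull these in with something like
 * DEFINE_BLOCK_PROPERTIES(MyDiskState, conf) in its property list, where
 * MyDiskState is a hypothetical device state struct embedding a BlockConf
 * named conf. */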
#endif


@@ -214,7 +214,7 @@ static int add_rule(QemuOpts *opts, void *opaque)
}
/* Set attributes common for all actions */
rule = g_malloc0(sizeof(*rule));
rule = qemu_mallocz(sizeof(*rule));
*rule = (struct BlkdebugRule) {
.event = event,
.action = d->action,
@@ -292,10 +292,10 @@ static int blkdebug_open(BlockDriverState *bs, const char *filename, int flags)
return -EINVAL;
}
config = g_strdup(filename);
config = strdup(filename);
config[c - filename] = '\0';
ret = read_config(s, config);
g_free(config);
free(config);
if (ret < 0) {
return ret;
}
@@ -392,11 +392,22 @@ static void blkdebug_close(BlockDriverState *bs)
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
QLIST_REMOVE(rule, next);
g_free(rule);
qemu_free(rule);
}
}
}
static int blkdebug_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static BlockDriverAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static void process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
BlkdebugVars *old_vars)
{
@@ -443,9 +454,11 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_flush = blkdebug_flush,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_aio_flush = blkdebug_aio_flush,
.bdrv_debug_event = blkdebug_debug_event,
};


@@ -87,10 +87,10 @@ static int blkverify_open(BlockDriverState *bs, const char *filename, int flags)
return -EINVAL;
}
raw = g_strdup(filename);
raw = strdup(filename);
raw[c - filename] = '\0';
ret = bdrv_file_open(&bs->file, raw, flags);
g_free(raw);
free(raw);
if (ret < 0) {
return ret;
}
@@ -116,6 +116,14 @@ static void blkverify_close(BlockDriverState *bs)
s->test_file = NULL;
}
static int blkverify_flush(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
/* Only flush test file, the raw file is not important */
return bdrv_flush(s->test_file);
}
static int64_t blkverify_getlength(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
@@ -310,10 +318,14 @@ static BlockDriverAIOCB *blkverify_aio_readv(BlockDriverState *bs,
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
blkverify_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_readv(bs->file, sector_num, &acb->raw_qiov, nb_sectors,
blkverify_aio_cb, acb);
if (!bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
if (!bdrv_aio_readv(bs->file, sector_num, &acb->raw_qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
return &acb->common;
}
@@ -325,10 +337,14 @@ static BlockDriverAIOCB *blkverify_aio_writev(BlockDriverState *bs,
BlkverifyAIOCB *acb = blkverify_aio_get(bs, true, sector_num, qiov,
nb_sectors, cb, opaque);
bdrv_aio_writev(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
if (!bdrv_aio_writev(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
if (!bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
return &acb->common;
}
@@ -352,6 +368,7 @@ static BlockDriver bdrv_blkverify = {
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_flush = blkverify_flush,
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,


@@ -80,7 +80,6 @@ struct bochs_header {
};
typedef struct BDRVBochsState {
CoMutex lock;
uint32_t *catalog_bitmap;
int catalog_size;
@@ -137,7 +136,7 @@ static int bochs_open(BlockDriverState *bs, int flags)
}
s->catalog_size = le32_to_cpu(bochs.extra.redolog.catalog);
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
s->catalog_bitmap = qemu_malloc(s->catalog_size * 4);
if (bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4) != s->catalog_size * 4)
goto fail;
@@ -151,7 +150,6 @@ static int bochs_open(BlockDriverState *bs, int flags)
s->extent_size = le32_to_cpu(bochs.extra.redolog.extent);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return -1;
@@ -209,21 +207,10 @@ static int bochs_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int bochs_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVBochsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = bochs_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void bochs_close(BlockDriverState *bs)
{
BDRVBochsState *s = bs->opaque;
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
}
static BlockDriver bdrv_bochs = {
@@ -231,7 +218,7 @@ static BlockDriver bdrv_bochs = {
.instance_size = sizeof(BDRVBochsState),
.bdrv_probe = bochs_probe,
.bdrv_open = bochs_open,
.bdrv_read = bochs_co_read,
.bdrv_read = bochs_read,
.bdrv_close = bochs_close,
};


@@ -27,10 +27,9 @@
#include <zlib.h>
typedef struct BDRVCloopState {
CoMutex lock;
uint32_t block_size;
uint32_t n_blocks;
uint64_t *offsets;
uint64_t* offsets;
uint32_t sectors_per_block;
uint32_t current_block;
uint8_t *compressed_block;
@@ -40,23 +39,21 @@ typedef struct BDRVCloopState {
static int cloop_probe(const uint8_t *buf, int buf_size, const char *filename)
{
const char *magic_version_2_0 = "#!/bin/sh\n"
"#V2.0 Format\n"
"modprobe cloop file=$0 && mount -r -t iso9660 /dev/cloop $1\n";
int length = strlen(magic_version_2_0);
if (length > buf_size) {
length = buf_size;
}
if (!memcmp(magic_version_2_0, buf, length)) {
return 2;
}
const char* magic_version_2_0="#!/bin/sh\n"
"#V2.0 Format\n"
"modprobe cloop file=$0 && mount -r -t iso9660 /dev/cloop $1\n";
int length=strlen(magic_version_2_0);
if(length>buf_size)
length=buf_size;
if(!memcmp(magic_version_2_0,buf,length))
return 2;
return 0;
}
static int cloop_open(BlockDriverState *bs, int flags)
{
BDRVCloopState *s = bs->opaque;
uint32_t offsets_size, max_compressed_block_size = 1, i;
uint32_t offsets_size,max_compressed_block_size=1,i;
bs->read_only = 1;
@@ -73,32 +70,29 @@ static int cloop_open(BlockDriverState *bs, int flags)
/* read offsets */
offsets_size = s->n_blocks * sizeof(uint64_t);
s->offsets = g_malloc(offsets_size);
s->offsets = qemu_malloc(offsets_size);
if (bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size) <
offsets_size) {
goto cloop_close;
goto cloop_close;
}
for(i=0;i<s->n_blocks;i++) {
s->offsets[i] = be64_to_cpu(s->offsets[i]);
if (i > 0) {
uint32_t size = s->offsets[i] - s->offsets[i - 1];
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
}
}
s->offsets[i]=be64_to_cpu(s->offsets[i]);
if(i>0) {
uint32_t size=s->offsets[i]-s->offsets[i-1];
if(size>max_compressed_block_size)
max_compressed_block_size=size;
}
}
/* initialize zlib engine */
s->compressed_block = g_malloc(max_compressed_block_size + 1);
s->uncompressed_block = g_malloc(s->block_size);
if (inflateInit(&s->zstream) != Z_OK) {
goto cloop_close;
}
s->current_block = s->n_blocks;
s->compressed_block = qemu_malloc(max_compressed_block_size+1);
s->uncompressed_block = qemu_malloc(s->block_size);
if(inflateInit(&s->zstream) != Z_OK)
goto cloop_close;
s->current_block=s->n_blocks;
s->sectors_per_block = s->block_size/512;
bs->total_sectors = s->n_blocks * s->sectors_per_block;
qemu_co_mutex_init(&s->lock);
bs->total_sectors = s->n_blocks*s->sectors_per_block;
return 0;
cloop_close:
@@ -109,30 +103,27 @@ static inline int cloop_read_block(BlockDriverState *bs, int block_num)
{
BDRVCloopState *s = bs->opaque;
if (s->current_block != block_num) {
int ret;
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
if(s->current_block != block_num) {
int ret;
uint32_t bytes = s->offsets[block_num+1]-s->offsets[block_num];
ret = bdrv_pread(bs->file, s->offsets[block_num], s->compressed_block,
bytes);
if (ret != bytes) {
if (ret != bytes)
return -1;
}
s->zstream.next_in = s->compressed_block;
s->zstream.avail_in = bytes;
s->zstream.next_out = s->uncompressed_block;
s->zstream.avail_out = s->block_size;
ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
return -1;
}
ret = inflate(&s->zstream, Z_FINISH);
if (ret != Z_STREAM_END || s->zstream.total_out != s->block_size) {
return -1;
}
s->zstream.next_in = s->compressed_block;
s->zstream.avail_in = bytes;
s->zstream.next_out = s->uncompressed_block;
s->zstream.avail_out = s->block_size;
ret = inflateReset(&s->zstream);
if(ret != Z_OK)
return -1;
ret = inflate(&s->zstream, Z_FINISH);
if(ret != Z_STREAM_END || s->zstream.total_out != s->block_size)
return -1;
s->current_block = block_num;
s->current_block = block_num;
}
return 0;
}
@@ -143,48 +134,33 @@ static int cloop_read(BlockDriverState *bs, int64_t sector_num,
BDRVCloopState *s = bs->opaque;
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_block =
((sector_num + i) % s->sectors_per_block),
block_num = (sector_num + i) / s->sectors_per_block;
if (cloop_read_block(bs, block_num) != 0) {
return -1;
}
memcpy(buf + i * 512,
s->uncompressed_block + sector_offset_in_block * 512, 512);
for(i=0;i<nb_sectors;i++) {
uint32_t sector_offset_in_block=((sector_num+i)%s->sectors_per_block),
block_num=(sector_num+i)/s->sectors_per_block;
if(cloop_read_block(bs, block_num) != 0)
return -1;
memcpy(buf+i*512,s->uncompressed_block+sector_offset_in_block*512,512);
}
return 0;
}
static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCloopState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cloop_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void cloop_close(BlockDriverState *bs)
{
BDRVCloopState *s = bs->opaque;
if (s->n_blocks > 0) {
g_free(s->offsets);
}
g_free(s->compressed_block);
g_free(s->uncompressed_block);
if(s->n_blocks>0)
free(s->offsets);
free(s->compressed_block);
free(s->uncompressed_block);
inflateEnd(&s->zstream);
}
static BlockDriver bdrv_cloop = {
.format_name = "cloop",
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_read = cloop_co_read,
.bdrv_close = cloop_close,
.format_name = "cloop",
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_read = cloop_read,
.bdrv_close = cloop_close,
};
static void bdrv_cloop_init(void)


@@ -42,7 +42,6 @@ struct cow_header_v2 {
};
typedef struct BDRVCowState {
CoMutex lock;
int64_t cow_sectors_offset;
} BDRVCowState;
@@ -64,26 +63,15 @@ static int cow_open(BlockDriverState *bs, int flags)
struct cow_header_v2 cow_header;
int bitmap_size;
int64_t size;
int ret;
/* see if it is a cow image */
ret = bdrv_pread(bs->file, 0, &cow_header, sizeof(cow_header));
if (ret < 0) {
if (bdrv_pread(bs->file, 0, &cow_header, sizeof(cow_header)) !=
sizeof(cow_header)) {
goto fail;
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC) {
ret = -EINVAL;
goto fail;
}
if (be32_to_cpu(cow_header.version) != COW_VERSION) {
char version[64];
snprintf(version, sizeof(version),
"COW version %d", cow_header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "cow", version);
ret = -ENOTSUP;
if (be32_to_cpu(cow_header.magic) != COW_MAGIC ||
be32_to_cpu(cow_header.version) != COW_VERSION) {
goto fail;
}
@@ -96,14 +84,13 @@ static int cow_open(BlockDriverState *bs, int flags)
bitmap_size = ((bs->total_sectors + 7) >> 3) + sizeof(cow_header);
s->cow_sectors_offset = (bitmap_size + 511) & ~511;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return ret;
return -1;
}
/*
* XXX(hch): right now these functions are extremely inefficient.
* XXX(hch): right now these functions are extremly ineffcient.
* We should just read the whole bitmap we'll need in one go instead.
*/
static inline int cow_set_bit(BlockDriverState *bs, int64_t bitnum)
@@ -143,8 +130,8 @@ static inline int is_bit_set(BlockDriverState *bs, int64_t bitnum)
/* Return true if first block has been changed (ie. current version is
* in COW file). Set the number of continuous blocks for which that
* is true. */
static int coroutine_fn cow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
static int cow_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *num_same)
{
int changed;
@@ -182,30 +169,28 @@ static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
return error;
}
static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
static int cow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVCowState *s = bs->opaque;
int ret, n;
while (nb_sectors > 0) {
if (bdrv_co_is_allocated(bs, sector_num, nb_sectors, &n)) {
if (cow_is_allocated(bs, sector_num, nb_sectors, &n)) {
ret = bdrv_pread(bs->file,
s->cow_sectors_offset + sector_num * 512,
buf, n * 512);
if (ret < 0) {
return ret;
}
if (ret != n * 512)
return -1;
} else {
if (bs->backing_hd) {
/* read from the base image */
ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
if (ret < 0) {
return ret;
}
if (ret < 0)
return -1;
} else {
memset(buf, 0, n * 512);
}
memset(buf, 0, n * 512);
}
}
nb_sectors -= n;
sector_num += n;
@@ -214,17 +199,6 @@ static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int cow_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCowState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cow_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int cow_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
@@ -233,36 +207,24 @@ static int cow_write(BlockDriverState *bs, int64_t sector_num,
ret = bdrv_pwrite(bs->file, s->cow_sectors_offset + sector_num * 512,
buf, nb_sectors * 512);
if (ret < 0) {
return ret;
}
if (ret != nb_sectors * 512)
return -1;
return cow_update_bitmap(bs, sector_num, nb_sectors);
}
static coroutine_fn int cow_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCowState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cow_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void cow_close(BlockDriverState *bs)
{
}
static int cow_create(const char *filename, QEMUOptionParameter *options)
{
int fd, cow_fd;
struct cow_header_v2 cow_header;
struct stat st;
int64_t image_sectors = 0;
const char *image_filename = NULL;
int ret;
BlockDriverState *cow_bs;
/* Read out options */
while (options && options->name) {
@@ -274,16 +236,10 @@ static int cow_create(const char *filename, QEMUOptionParameter *options)
options++;
}
ret = bdrv_create_file(filename, options);
if (ret < 0) {
return ret;
}
ret = bdrv_file_open(&cow_bs, filename, BDRV_O_RDWR);
if (ret < 0) {
return ret;
}
cow_fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (cow_fd < 0)
return -errno;
memset(&cow_header, 0, sizeof(cow_header));
cow_header.magic = cpu_to_be32(COW_MAGIC);
cow_header.version = cpu_to_be32(COW_VERSION);
@@ -291,9 +247,16 @@ static int cow_create(const char *filename, QEMUOptionParameter *options)
/* Note: if no file, we put a dummy mtime */
cow_header.mtime = cpu_to_be32(0);
if (stat(image_filename, &st) != 0) {
fd = open(image_filename, O_RDONLY | O_BINARY);
if (fd < 0) {
close(cow_fd);
goto mtime_fail;
}
if (fstat(fd, &st) != 0) {
close(fd);
goto mtime_fail;
}
close(fd);
cow_header.mtime = cpu_to_be32(st.st_mtime);
mtime_fail:
pstrcpy(cow_header.backing_file, sizeof(cow_header.backing_file),
@@ -301,23 +264,29 @@ static int cow_create(const char *filename, QEMUOptionParameter *options)
}
cow_header.sectorsize = cpu_to_be32(512);
cow_header.size = cpu_to_be64(image_sectors * 512);
ret = bdrv_pwrite(cow_bs, 0, &cow_header, sizeof(cow_header));
if (ret < 0) {
ret = qemu_write_full(cow_fd, &cow_header, sizeof(cow_header));
if (ret != sizeof(cow_header)) {
ret = -errno;
goto exit;
}
/* resize to include at least all the bitmap */
ret = bdrv_truncate(cow_bs,
sizeof(cow_header) + ((image_sectors + 7) >> 3));
if (ret < 0) {
ret = ftruncate(cow_fd, sizeof(cow_header) + ((image_sectors + 7) >> 3));
if (ret) {
ret = -errno;
goto exit;
}
exit:
bdrv_delete(cow_bs);
close(cow_fd);
return ret;
}
static int cow_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static QEMUOptionParameter cow_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -333,17 +302,16 @@ static QEMUOptionParameter cow_create_options[] = {
};
static BlockDriver bdrv_cow = {
.format_name = "cow",
.instance_size = sizeof(BDRVCowState),
.bdrv_probe = cow_probe,
.bdrv_open = cow_open,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_read = cow_co_read,
.bdrv_write = cow_co_write,
.bdrv_co_is_allocated = cow_co_is_allocated,
.format_name = "cow",
.instance_size = sizeof(BDRVCowState),
.bdrv_probe = cow_probe,
.bdrv_open = cow_open,
.bdrv_read = cow_read,
.bdrv_write = cow_write,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_flush = cow_flush,
.bdrv_is_allocated = cow_is_allocated,
.create_options = cow_create_options,
};


@@ -47,12 +47,7 @@ struct BDRVCURLState;
typedef struct CURLAIOCB {
BlockDriverAIOCB common;
QEMUBH *bh;
QEMUIOVector *qiov;
int64_t sector_num;
int nb_sectors;
size_t start;
size_t end;
} CURLAIOCB;
@@ -81,7 +76,6 @@ typedef struct BDRVCURLState {
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static int curl_aio_flush(void *opaque);
static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
void *s, void *sp)
@@ -89,17 +83,17 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, NULL, s);
break;
case CURL_POLL_OUT:
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, NULL, s);
break;
case CURL_POLL_INOUT:
qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
curl_aio_flush, s);
qemu_aio_set_fd_handler(fd, curl_multi_do,
curl_multi_do, NULL, NULL, s);
break;
case CURL_POLL_REMOVE:
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL, NULL);
break;
}
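/* In effect: CURL_POLL_IN registers only a read handler for the socket,
 * CURL_POLL_OUT only a write handler, CURL_POLL_INOUT both, and
 * CURL_POLL_REMOVE deregisters the descriptor entirely. */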
@@ -235,23 +229,6 @@ static void curl_multi_do(void *arg)
{
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char**)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
break;
}
@@ -280,7 +257,7 @@ static CURLState *curl_init_state(BDRVCURLState *s)
break;
}
if (!state) {
g_usleep(100);
usleep(100);
curl_multi_do(s);
}
} while(!state);
@@ -300,8 +277,7 @@ static CURLState *curl_init_state(BDRVCURLState *s)
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
#ifdef DEBUG_VERBOSE
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
#endif
@@ -334,7 +310,7 @@ static int curl_open(BlockDriverState *bs, const char *filename, int flags)
static int inited = 0;
file = g_strdup(filename);
file = qemu_strdup(filename);
s->readahead_size = READ_AHEAD_SIZE;
/* Parse a trailing ":readahead=#:" param, if present. */
@@ -414,25 +390,10 @@ out:
curl_easy_cleanup(state->curl);
state->curl = NULL;
out_noclean:
g_free(file);
qemu_free(file);
return -EINVAL;
}
static int curl_aio_flush(void *opaque)
{
BDRVCURLState *s = opaque;
int i, j;
for (i=0; i < CURL_NUM_STATES; i++) {
for(j=0; j < CURL_NUM_ACB; j++) {
if (s->states[i].acb[j]) {
return 1;
}
}
}
return 0;
}
static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
{
// Do we have to implement canceling? Seems to work without...
@@ -443,82 +404,61 @@ static AIOPool curl_aio_pool = {
.cancel = curl_aio_cancel,
};
static void curl_readv_bh_cb(void *p)
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVCURLState *s = bs->opaque;
CURLAIOCB *acb;
size_t start = sector_num * SECTOR_SIZE;
size_t end;
CURLState *state;
CURLAIOCB *acb = p;
BDRVCURLState *s = acb->common.bs->opaque;
acb = qemu_aio_get(&curl_aio_pool, bs, cb, opaque);
if (!acb)
return NULL;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
size_t start = acb->sector_num * SECTOR_SIZE;
size_t end;
acb->qiov = qiov;
// In case we have the requested data already (e.g. read-ahead),
// we can just call the callback and be done.
switch (curl_find_buf(s, start, acb->nb_sectors * SECTOR_SIZE, acb)) {
switch (curl_find_buf(s, start, nb_sectors * SECTOR_SIZE, acb)) {
case FIND_RET_OK:
qemu_aio_release(acb);
// fall through
case FIND_RET_WAIT:
return;
return &acb->common;
default:
break;
}
// No cache found, so let's start a new request
state = curl_init_state(s);
if (!state) {
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
return;
}
if (!state)
return NULL;
acb->start = 0;
acb->end = (acb->nb_sectors * SECTOR_SIZE);
acb->end = (nb_sectors * SECTOR_SIZE);
state->buf_off = 0;
if (state->orig_buf)
g_free(state->orig_buf);
qemu_free(state->orig_buf);
state->buf_start = start;
state->buf_len = acb->end + s->readahead_size;
end = MIN(start + state->buf_len, s->len) - 1;
state->orig_buf = g_malloc(state->buf_len);
state->orig_buf = qemu_malloc(state->buf_len);
state->acb[0] = acb;
snprintf(state->range, 127, "%zd-%zd", start, end);
DPRINTF("CURL (AIO): Reading %d at %zd (%s)\n",
(acb->nb_sectors * SECTOR_SIZE), start, state->range);
(nb_sectors * SECTOR_SIZE), start, state->range);
curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
curl_multi_add_handle(s->multi, state->curl);
curl_multi_do(s);
}
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
CURLAIOCB *acb;
acb = qemu_aio_get(&curl_aio_pool, bs, cb, opaque);
acb->qiov = qiov;
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->bh = qemu_bh_new(curl_readv_bh_cb, acb);
if (!acb->bh) {
DPRINTF("CURL: qemu_bh_new failed\n");
return NULL;
}
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -536,7 +476,7 @@ static void curl_close(BlockDriverState *bs)
s->states[i].curl = NULL;
}
if (s->states[i].orig_buf) {
g_free(s->states[i].orig_buf);
qemu_free(s->states[i].orig_buf);
s->states[i].orig_buf = NULL;
}
}


@@ -28,7 +28,6 @@
#include <zlib.h>
typedef struct BDRVDMGState {
CoMutex lock;
/* each chunk contains a certain number of sectors,
* offsets[i] is the offset in the .dmg file,
* lengths[i] is the length of the compressed chunk,
@@ -128,11 +127,11 @@ static int dmg_open(BlockDriverState *bs, int flags)
chunk_count = (count-204)/40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size/2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
s->types = qemu_realloc(s->types, new_size/2);
s->offsets = qemu_realloc(s->offsets, new_size);
s->lengths = qemu_realloc(s->lengths, new_size);
s->sectors = qemu_realloc(s->sectors, new_size);
s->sectorcounts = qemu_realloc(s->sectorcounts, new_size);
for(i=s->n_chunks;i<s->n_chunks+chunk_count;i++) {
s->types[i] = read_uint32(bs, offset);
@@ -171,14 +170,13 @@ static int dmg_open(BlockDriverState *bs, int flags)
}
/* initialize zlib engine */
s->compressed_chunk = g_malloc(max_compressed_size+1);
s->uncompressed_chunk = g_malloc(512*max_sectors_per_chunk);
s->compressed_chunk = qemu_malloc(max_compressed_size+1);
s->uncompressed_chunk = qemu_malloc(512*max_sectors_per_chunk);
if(inflateInit(&s->zstream) != Z_OK)
goto fail;
s->current_chunk = s->n_chunks;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return -1;
@@ -282,17 +280,6 @@ static int dmg_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int dmg_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVDMGState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = dmg_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void dmg_close(BlockDriverState *bs)
{
BDRVDMGState *s = bs->opaque;
@@ -313,7 +300,7 @@ static BlockDriver bdrv_dmg = {
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_read = dmg_read,
.bdrv_close = dmg_close,
};


@@ -1,934 +0,0 @@
/*
* QEMU Block driver for iSCSI images
*
* Copyright (c) 2010-2011 Ronnie Sahlberg <ronniesahlberg@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "config-host.h"
#include <poll.h>
#include <arpa/inet.h>
#include "qemu-common.h"
#include "qemu-error.h"
#include "block_int.h"
#include "trace.h"
#include "hw/scsi-defs.h"
#include <iscsi/iscsi.h>
#include <iscsi/scsi-lowlevel.h>
typedef struct IscsiLun {
struct iscsi_context *iscsi;
int lun;
enum scsi_inquiry_peripheral_device_type type;
int block_size;
uint64_t num_blocks;
int events;
} IscsiLun;
typedef struct IscsiAIOCB {
BlockDriverAIOCB common;
QEMUIOVector *qiov;
QEMUBH *bh;
IscsiLun *iscsilun;
struct scsi_task *task;
uint8_t *buf;
int status;
int canceled;
size_t read_size;
size_t read_offset;
} IscsiAIOCB;
struct IscsiTask {
IscsiLun *iscsilun;
BlockDriverState *bs;
int status;
int complete;
};
static void
iscsi_bh_cb(void *p)
{
IscsiAIOCB *acb = p;
qemu_bh_delete(acb->bh);
if (acb->canceled == 0) {
acb->common.cb(acb->common.opaque, acb->status);
}
if (acb->task != NULL) {
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
qemu_aio_release(acb);
}
static void
iscsi_schedule_bh(IscsiAIOCB *acb)
{
if (acb->bh) {
return;
}
acb->bh = qemu_bh_new(iscsi_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
static void
iscsi_abort_task_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *private_data)
{
IscsiAIOCB *acb = private_data;
acb->status = -ECANCELED;
iscsi_schedule_bh(acb);
}
static void
iscsi_aio_cancel(BlockDriverAIOCB *blockacb)
{
IscsiAIOCB *acb = (IscsiAIOCB *)blockacb;
IscsiLun *iscsilun = acb->iscsilun;
if (acb->status != -EINPROGRESS) {
return;
}
acb->canceled = 1;
/* send a task mgmt call to the target to cancel the task on the target */
iscsi_task_mgmt_abort_task_async(iscsilun->iscsi, acb->task,
iscsi_abort_task_cb, acb);
while (acb->status == -EINPROGRESS) {
qemu_aio_wait();
}
}
static AIOPool iscsi_aio_pool = {
.aiocb_size = sizeof(IscsiAIOCB),
.cancel = iscsi_aio_cancel,
};
static void iscsi_process_read(void *arg);
static void iscsi_process_write(void *arg);
static int iscsi_process_flush(void *arg)
{
IscsiLun *iscsilun = arg;
return iscsi_queue_length(iscsilun->iscsi) > 0;
}
static void
iscsi_set_events(IscsiLun *iscsilun)
{
struct iscsi_context *iscsi = iscsilun->iscsi;
int ev;
/* We always register a read handler. */
ev = POLLIN;
ev |= iscsi_which_events(iscsi);
if (ev != iscsilun->events) {
qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
iscsi_process_read,
(ev & POLLOUT) ? iscsi_process_write : NULL,
iscsi_process_flush,
iscsilun);
}
/* If we just added an event, the callback might be delayed
* unless we call qemu_notify_event().
*/
if (ev & ~iscsilun->events) {
qemu_notify_event();
}
iscsilun->events = ev;
}
static void
iscsi_process_read(void *arg)
{
IscsiLun *iscsilun = arg;
struct iscsi_context *iscsi = iscsilun->iscsi;
iscsi_service(iscsi, POLLIN);
iscsi_set_events(iscsilun);
}
static void
iscsi_process_write(void *arg)
{
IscsiLun *iscsilun = arg;
struct iscsi_context *iscsi = iscsilun->iscsi;
iscsi_service(iscsi, POLLOUT);
iscsi_set_events(iscsilun);
}
static void
iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
trace_iscsi_aio_write16_cb(iscsi, status, acb, acb->canceled);
g_free(acb->buf);
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to write16 data to iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static int64_t sector_qemu2lun(int64_t sector, IscsiLun *iscsilun)
{
return sector * BDRV_SECTOR_SIZE / iscsilun->block_size;
}
static BlockDriverAIOCB *
iscsi_aio_writev(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
size_t size;
uint32_t num_sectors;
uint64_t lba;
struct iscsi_data data;
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
trace_iscsi_aio_writev(iscsi, sector_num, nb_sectors, opaque, acb);
acb->iscsilun = iscsilun;
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
/* XXX we should pass the iovec to write16 to avoid the extra copy */
/* this will allow us to get rid of 'buf' completely */
size = nb_sectors * BDRV_SECTOR_SIZE;
acb->buf = g_malloc(size);
qemu_iovec_to_buffer(acb->qiov, acb->buf);
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi WRITE16 "
"command. %s", iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
memset(acb->task, 0, sizeof(struct scsi_task));
acb->task->xfer_dir = SCSI_XFER_WRITE;
acb->task->cdb_size = 16;
acb->task->cdb[0] = 0x8a;
if (!(bs->open_flags & BDRV_O_CACHE_WB)) {
/* set FUA on writes when cache mode is write through */
acb->task->cdb[1] |= 0x04;
}
lba = sector_qemu2lun(sector_num, iscsilun);
*(uint32_t *)&acb->task->cdb[2] = htonl(lba >> 32);
*(uint32_t *)&acb->task->cdb[6] = htonl(lba & 0xffffffff);
num_sectors = size / iscsilun->block_size;
*(uint32_t *)&acb->task->cdb[10] = htonl(num_sectors);
acb->task->expxferlen = size;
data.data = acb->buf;
data.size = size;
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_write16_cb,
&data,
acb) != 0) {
scsi_free_scsi_task(acb->task);
g_free(acb->buf);
qemu_aio_release(acb);
return NULL;
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void
iscsi_aio_read16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
trace_iscsi_aio_read16_cb(iscsi, status, acb, acb->canceled);
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status != 0) {
error_report("Failed to read16 data from iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
iscsi_aio_readv(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
size_t qemu_read_size;
int i;
uint64_t lba;
uint32_t num_sectors;
qemu_read_size = BDRV_SECTOR_SIZE * (size_t)nb_sectors;
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
trace_iscsi_aio_readv(iscsi, sector_num, nb_sectors, opaque, acb);
acb->iscsilun = iscsilun;
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->read_size = qemu_read_size;
acb->buf = NULL;
/* If the LUN blocksize is bigger than BDRV_SECTOR_SIZE, a read from QEMU
* may be misaligned to the LUN, so we may need to read some extra
* data.
*/
acb->read_offset = 0;
if (iscsilun->block_size > BDRV_SECTOR_SIZE) {
uint64_t bdrv_offset = BDRV_SECTOR_SIZE * sector_num;
acb->read_offset = bdrv_offset % iscsilun->block_size;
}
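/* For example (illustrative figures): with a 4096-byte LUN block size, a
 * request starting at QEMU sector 1 (byte offset 512) begins 512 bytes into
 * a LUN block, so read_offset becomes 512 and the READ starts at the
 * beginning of that LUN block. */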
num_sectors = (qemu_read_size + iscsilun->block_size
+ acb->read_offset - 1)
/ iscsilun->block_size;
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi READ16 "
"command. %s", iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
memset(acb->task, 0, sizeof(struct scsi_task));
acb->task->xfer_dir = SCSI_XFER_READ;
lba = sector_qemu2lun(sector_num, iscsilun);
acb->task->expxferlen = qemu_read_size;
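/* READ(16) (opcode 0x88) carries a 64-bit LBA and is used for disks;
* other device types (e.g. CD-ROM) fall back to READ(10) (opcode 0x28),
* which is limited to a 32-bit LBA and a 16-bit transfer length.
*/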
switch (iscsilun->type) {
case TYPE_DISK:
acb->task->cdb_size = 16;
acb->task->cdb[0] = 0x88;
*(uint32_t *)&acb->task->cdb[2] = htonl(lba >> 32);
*(uint32_t *)&acb->task->cdb[6] = htonl(lba & 0xffffffff);
*(uint32_t *)&acb->task->cdb[10] = htonl(num_sectors);
break;
default:
acb->task->cdb_size = 10;
acb->task->cdb[0] = 0x28;
*(uint32_t *)&acb->task->cdb[2] = htonl(lba);
*(uint16_t *)&acb->task->cdb[7] = htons(num_sectors);
break;
}
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_read16_cb,
NULL,
acb) != 0) {
scsi_free_scsi_task(acb->task);
qemu_aio_release(acb);
return NULL;
}
for (i = 0; i < acb->qiov->niov; i++) {
scsi_task_add_data_in_buffer(acb->task,
acb->qiov->iov[i].iov_len,
acb->qiov->iov[i].iov_base);
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void
iscsi_synccache10_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to sync10 data on iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
iscsi_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->task = iscsi_synchronizecache10_task(iscsi, iscsilun->lun,
0, 0, 0, 0,
iscsi_synccache10_cb,
acb);
if (acb->task == NULL) {
error_report("iSCSI: Failed to send synchronizecache10 command. %s",
iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void
iscsi_unmap_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to unmap data on iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
iscsi_aio_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
struct unmap_list list[1];
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
list[0].lba = sector_qemu2lun(sector_num, iscsilun);
list[0].num = nb_sectors * BDRV_SECTOR_SIZE / iscsilun->block_size;
acb->task = iscsi_unmap_task(iscsi, iscsilun->lun,
0, 0, &list[0], 1,
iscsi_unmap_cb,
acb);
if (acb->task == NULL) {
error_report("iSCSI: Failed to send unmap command. %s",
iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static int64_t
iscsi_getlength(BlockDriverState *bs)
{
IscsiLun *iscsilun = bs->opaque;
int64_t len;
len = iscsilun->num_blocks;
len *= iscsilun->block_size;
return len;
}
static void
iscsi_readcapacity16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_readcapacity16 *rc16;
struct scsi_task *task = command_data;
if (status != 0) {
error_report("iSCSI: Failed to read capacity of iSCSI lun. %s",
iscsi_get_error(iscsi));
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
rc16 = scsi_datain_unmarshall(task);
if (rc16 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity16 data.");
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
itask->iscsilun->block_size = rc16->block_length;
itask->iscsilun->num_blocks = rc16->returned_lba + 1;
itask->bs->total_sectors = itask->iscsilun->num_blocks *
itask->iscsilun->block_size / BDRV_SECTOR_SIZE;
itask->status = 0;
itask->complete = 1;
scsi_free_scsi_task(task);
}
static void
iscsi_readcapacity10_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_readcapacity10 *rc10;
struct scsi_task *task = command_data;
if (status != 0) {
error_report("iSCSI: Failed to read capacity of iSCSI lun. %s",
iscsi_get_error(iscsi));
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
rc10 = scsi_datain_unmarshall(task);
if (rc10 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity10 data.");
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
itask->iscsilun->block_size = rc10->block_size;
itask->iscsilun->num_blocks = rc10->lba + 1;
itask->bs->total_sectors = itask->iscsilun->num_blocks *
itask->iscsilun->block_size / BDRV_SECTOR_SIZE;
itask->status = 0;
itask->complete = 1;
scsi_free_scsi_task(task);
}
static void
iscsi_inquiry_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_task *task = command_data;
struct scsi_inquiry_standard *inq;
if (status != 0) {
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
inq = scsi_datain_unmarshall(task);
if (inq == NULL) {
error_report("iSCSI: Failed to unmarshall inquiry data.");
itask->status = 1;
itask->complete = 1;
scsi_free_scsi_task(task);
return;
}
itask->iscsilun->type = inq->periperal_device_type;
scsi_free_scsi_task(task);
switch (itask->iscsilun->type) {
case TYPE_DISK:
task = iscsi_readcapacity16_task(iscsi, itask->iscsilun->lun,
iscsi_readcapacity16_cb, opaque);
if (task == NULL) {
error_report("iSCSI: failed to send readcapacity16 command.");
itask->status = 1;
itask->complete = 1;
return;
}
break;
case TYPE_ROM:
task = iscsi_readcapacity10_task(iscsi, itask->iscsilun->lun,
0, 0,
iscsi_readcapacity10_cb, opaque);
if (task == NULL) {
error_report("iSCSI: failed to send readcapacity16 command.");
itask->status = 1;
itask->complete = 1;
return;
}
break;
default:
itask->status = 0;
itask->complete = 1;
}
}
static void
iscsi_connect_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *opaque)
{
struct IscsiTask *itask = opaque;
struct scsi_task *task;
if (status != 0) {
itask->status = 1;
itask->complete = 1;
return;
}
task = iscsi_inquiry_task(iscsi, itask->iscsilun->lun,
0, 0, 36,
iscsi_inquiry_cb, opaque);
if (task == NULL) {
error_report("iSCSI: failed to send inquiry command.");
itask->status = 1;
itask->complete = 1;
return;
}
}
static int parse_chap(struct iscsi_context *iscsi, const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *user = NULL;
const char *password = NULL;
list = qemu_find_opts("iscsi");
if (!list) {
return 0;
}
opts = qemu_opts_find(list, target);
if (opts == NULL) {
opts = QTAILQ_FIRST(&list->head);
if (!opts) {
return 0;
}
}
user = qemu_opt_get(opts, "user");
if (!user) {
return 0;
}
password = qemu_opt_get(opts, "password");
if (!password) {
error_report("CHAP username specified but no password was given");
return -1;
}
if (iscsi_set_initiator_username_pwd(iscsi, user, password)) {
error_report("Failed to set initiator username and password");
return -1;
}
return 0;
}
static void parse_header_digest(struct iscsi_context *iscsi, const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *digest = NULL;
list = qemu_find_opts("iscsi");
if (!list) {
return;
}
opts = qemu_opts_find(list, target);
if (opts == NULL) {
opts = QTAILQ_FIRST(&list->head);
if (!opts) {
return;
}
}
digest = qemu_opt_get(opts, "header-digest");
if (!digest) {
return;
}
if (!strcmp(digest, "CRC32C")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_CRC32C);
} else if (!strcmp(digest, "NONE")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE);
} else if (!strcmp(digest, "CRC32C-NONE")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_CRC32C_NONE);
} else if (!strcmp(digest, "NONE-CRC32C")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE_CRC32C);
} else {
error_report("Invalid header-digest setting : %s", digest);
}
}
static char *parse_initiator_name(const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *name = NULL;
list = qemu_find_opts("iscsi");
if (!list) {
return g_strdup("iqn.2008-11.org.linux-kvm");
}
opts = qemu_opts_find(list, target);
if (opts == NULL) {
opts = QTAILQ_FIRST(&list->head);
if (!opts) {
return g_strdup("iqn.2008-11.org.linux-kvm");
}
}
name = qemu_opt_get(opts, "initiator-name");
if (!name) {
return g_strdup("iqn.2008-11.org.linux-kvm");
}
return g_strdup(name);
}
/*
* We support iSCSI URLs of the form
* iscsi://[<username>%<password>@]<host>[:<port>]/<targetname>/<lun>
*/
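/* e.g. (hypothetical target, for illustration only):
* iscsi://user%secret@192.0.2.1:3260/iqn.2001-04.com.example:disk0/1
*/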
static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = NULL;
struct iscsi_url *iscsi_url = NULL;
struct IscsiTask task;
char *initiator_name = NULL;
int ret;
if ((BDRV_SECTOR_SIZE % 512) != 0) {
error_report("iSCSI: Invalid BDRV_SECTOR_SIZE. "
"BDRV_SECTOR_SIZE(%lld) is not a multiple "
"of 512", BDRV_SECTOR_SIZE);
return -EINVAL;
}
iscsi_url = iscsi_parse_full_url(iscsi, filename);
if (iscsi_url == NULL) {
error_report("Failed to parse URL : %s %s", filename,
iscsi_get_error(iscsi));
ret = -EINVAL;
goto failed;
}
memset(iscsilun, 0, sizeof(IscsiLun));
initiator_name = parse_initiator_name(iscsi_url->target);
iscsi = iscsi_create_context(initiator_name);
if (iscsi == NULL) {
error_report("iSCSI: Failed to create iSCSI context.");
ret = -ENOMEM;
goto failed;
}
if (iscsi_set_targetname(iscsi, iscsi_url->target)) {
error_report("iSCSI: Failed to set target name.");
ret = -EINVAL;
goto failed;
}
if (iscsi_url->user != NULL) {
ret = iscsi_set_initiator_username_pwd(iscsi, iscsi_url->user,
iscsi_url->passwd);
if (ret != 0) {
error_report("Failed to set initiator username and password");
ret = -EINVAL;
goto failed;
}
}
/* check if we got CHAP username/password via the options */
if (parse_chap(iscsi, iscsi_url->target) != 0) {
error_report("iSCSI: Failed to set CHAP user/password");
ret = -EINVAL;
goto failed;
}
if (iscsi_set_session_type(iscsi, ISCSI_SESSION_NORMAL) != 0) {
error_report("iSCSI: Failed to set session type to normal.");
ret = -EINVAL;
goto failed;
}
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE_CRC32C);
/* check if we got HEADER_DIGEST via the options */
parse_header_digest(iscsi, iscsi_url->target);
task.iscsilun = iscsilun;
task.status = 0;
task.complete = 0;
task.bs = bs;
iscsilun->iscsi = iscsi;
iscsilun->lun = iscsi_url->lun;
if (iscsi_full_connect_async(iscsi, iscsi_url->portal, iscsi_url->lun,
iscsi_connect_cb, &task)
!= 0) {
error_report("iSCSI: Failed to start async connect.");
ret = -EINVAL;
goto failed;
}
while (!task.complete) {
iscsi_set_events(iscsilun);
qemu_aio_wait();
}
if (task.status != 0) {
error_report("iSCSI: Failed to connect to LUN : %s",
iscsi_get_error(iscsi));
ret = -EINVAL;
goto failed;
}
if (iscsi_url != NULL) {
iscsi_destroy_url(iscsi_url);
}
return 0;
failed:
if (initiator_name != NULL) {
g_free(initiator_name);
}
if (iscsi_url != NULL) {
iscsi_destroy_url(iscsi_url);
}
if (iscsi != NULL) {
iscsi_destroy_context(iscsi);
}
memset(iscsilun, 0, sizeof(IscsiLun));
return ret;
}
static void iscsi_close(BlockDriverState *bs)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL, NULL);
iscsi_destroy_context(iscsi);
memset(iscsilun, 0, sizeof(IscsiLun));
}
static BlockDriver bdrv_iscsi = {
.format_name = "iscsi",
.protocol_name = "iscsi",
.instance_size = sizeof(IscsiLun),
.bdrv_file_open = iscsi_open,
.bdrv_close = iscsi_close,
.bdrv_getlength = iscsi_getlength,
.bdrv_aio_readv = iscsi_aio_readv,
.bdrv_aio_writev = iscsi_aio_writev,
.bdrv_aio_flush = iscsi_aio_flush,
.bdrv_aio_discard = iscsi_aio_discard,
};
static void iscsi_block_init(void)
{
bdrv_register(&bdrv_iscsi);
}
block_init(iscsi_block_init);

View File

@@ -28,7 +28,6 @@
#include "qemu-common.h"
#include "nbd.h"
#include "block_int.h"
#include "module.h"
#include "qemu_socket.h"
@@ -46,25 +45,12 @@
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
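/* XOR-ing the slot index with the BlockDriverState pointer keeps request
* handles unique per device while staying trivially reversible
* (x ^ k ^ k == x), so HANDLE_TO_INDEX() recovers the index again.
*/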
typedef struct BDRVNBDState {
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
char *export_name; /* An NBD server may export several devices */
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
/* If it begins with '/', this is a UNIX domain socket. Otherwise,
* it's a string of the form <hostname|ip4|\[ip6\]>:port
*/
@@ -79,7 +65,7 @@ static int nbd_config(BDRVNBDState *s, const char *filename, int flags)
const char *unixpath;
int err = -EINVAL;
file = g_strdup(filename);
file = qemu_strdup(filename);
export_name = strstr(file, EN_OPTSTR);
if (export_name) {
@@ -88,7 +74,7 @@ static int nbd_config(BDRVNBDState *s, const char *filename, int flags)
}
export_name[0] = 0; /* truncate 'file' */
export_name += strlen(EN_OPTSTR);
s->export_name = g_strdup(export_name);
s->export_name = qemu_strdup(export_name);
}
/* extract the host_spec - fail if it's not nbd:... */
@@ -101,157 +87,22 @@ static int nbd_config(BDRVNBDState *s, const char *filename, int flags)
if (unixpath[0] != '/') { /* We demand an absolute path */
goto out;
}
s->host_spec = g_strdup(unixpath);
s->host_spec = qemu_strdup(unixpath);
} else {
s->host_spec = g_strdup(host_spec);
s->host_spec = qemu_strdup(host_spec);
}
err = 0;
out:
g_free(file);
qemu_free(file);
if (err != 0) {
g_free(s->export_name);
g_free(s->host_spec);
qemu_free(s->export_name);
qemu_free(s->host_spec);
}
return err;
}
static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
{
int i;
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static int nbd_have_request(void *opaque)
{
BDRVNBDState *s = opaque;
return s->in_flight > 0;
}
static void nbd_reply_ready(void *opaque)
{
BDRVNBDState *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_restart_write(void *opaque)
{
BDRVNBDState *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
struct iovec *iov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
nbd_have_request, s);
rc = nbd_send_request(s->sock, request);
if (rc >= 0 && iov) {
ret = qemu_co_sendv(s->sock, iov, request->len, offset);
if (ret != request->len) {
return -EIO;
}
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
struct nbd_reply *reply,
struct iovec *iov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (iov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, iov, request->len, offset);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_end(BDRVNBDState *s, struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_establish_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
@@ -259,6 +110,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
int ret;
off_t size;
size_t blocksize;
uint32_t nbdflags;
if (s->host_spec[0] == '/') {
sock = unix_socket_outgoing(s->host_spec);
@@ -267,25 +119,22 @@ static int nbd_establish_connection(BlockDriverState *bs)
}
/* Failed to establish connection */
if (sock < 0) {
if (sock == -1) {
logout("Failed to establish connection to NBD server\n");
return -errno;
}
/* NBD handshake */
ret = nbd_receive_negotiate(sock, s->export_name, &s->nbdflags, &size,
ret = nbd_receive_negotiate(sock, s->export_name, &nbdflags, &size,
&blocksize);
if (ret < 0) {
if (ret == -1) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
return -errno;
}
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
/* Now that we're connected, set the socket to be non-blocking */
socket_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->sock = sock;
s->size = size;
@@ -301,11 +150,11 @@ static void nbd_teardown_connection(BlockDriverState *bs)
struct nbd_request request;
request.type = NBD_CMD_DISC;
request.handle = (uint64_t)(intptr_t)bs;
request.from = 0;
request.len = 0;
nbd_send_request(s->sock, &request);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
closesocket(s->sock);
}
@@ -314,9 +163,6 @@ static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
BDRVNBDState *s = bs->opaque;
int result;
qemu_co_mutex_init(&s->send_mutex);
qemu_co_mutex_init(&s->free_sema);
/* Pop the config into our state object. Exit if invalid. */
result = nbd_config(s, filename, flags);
if (result != 0) {
@@ -331,158 +177,71 @@ static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
return result;
}
static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
static int nbd_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_READ;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, qiov->iov, offset);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_WRITE;
if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, qiov->iov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
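/* 2040 sectors * 512 bytes = 1,044,480 bytes (1020 KiB): below the ~1M
* per-request limit and still a multiple of 4 KiB (255 * 4096).
*/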
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_flush(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
if (!(s->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
request.type = NBD_CMD_FLUSH;
if (s->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
if (!(s->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.type = NBD_CMD_TRIM;
request.handle = (uint64_t)(intptr_t)bs;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
if (nbd_send_request(s->sock, &request) == -1)
return -errno;
if (nbd_receive_reply(s->sock, &reply) == -1)
return -errno;
if (reply.error != 0)
return -reply.error;
if (reply.handle != request.handle)
return -EIO;
if (nbd_wr_sync(s->sock, buf, request.len, 1) != request.len)
return -EIO;
return 0;
}
static int nbd_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
request.type = NBD_CMD_WRITE;
request.handle = (uint64_t)(intptr_t)bs;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
if (nbd_send_request(s->sock, &request) == -1)
return -errno;
if (nbd_wr_sync(s->sock, (uint8_t*)buf, request.len, 0) != request.len)
return -EIO;
if (nbd_receive_reply(s->sock, &reply) == -1)
return -errno;
if (reply.error != 0)
return -reply.error;
if (reply.handle != request.handle)
return -EIO;
return 0;
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
g_free(s->export_name);
g_free(s->host_spec);
qemu_free(s->export_name);
qemu_free(s->host_spec);
nbd_teardown_connection(bs);
}
@@ -495,16 +254,14 @@ static int64_t nbd_getlength(BlockDriverState *bs)
}
static BlockDriver bdrv_nbd = {
.format_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
.protocol_name = "nbd",
.format_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_file_open = nbd_open,
.bdrv_read = nbd_read,
.bdrv_write = nbd_write,
.bdrv_close = nbd_close,
.bdrv_getlength = nbd_getlength,
.protocol_name = "nbd",
};
static void bdrv_nbd_init(void)

View File

@@ -43,10 +43,9 @@ struct parallels_header {
uint32_t catalog_entries;
uint32_t nb_sectors;
char padding[24];
} QEMU_PACKED;
} __attribute__((packed));
typedef struct BDRVParallelsState {
CoMutex lock;
uint32_t *catalog_bitmap;
int catalog_size;
@@ -89,18 +88,17 @@ static int parallels_open(BlockDriverState *bs, int flags)
s->tracks = le32_to_cpu(ph.tracks);
s->catalog_size = le32_to_cpu(ph.catalog_entries);
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
s->catalog_bitmap = qemu_malloc(s->catalog_size * 4);
if (bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4) !=
s->catalog_size * 4)
goto fail;
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
if (s->catalog_bitmap)
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
return -1;
}
@@ -136,21 +134,10 @@ static int parallels_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int parallels_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVParallelsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = parallels_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void parallels_close(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
}
static BlockDriver bdrv_parallels = {
@@ -158,7 +145,7 @@ static BlockDriver bdrv_parallels = {
.instance_size = sizeof(BDRVParallelsState),
.bdrv_probe = parallels_probe,
.bdrv_open = parallels_open,
.bdrv_read = parallels_co_read,
.bdrv_read = parallels_read,
.bdrv_close = parallels_close,
};

View File

@@ -26,7 +26,6 @@
#include "module.h"
#include <zlib.h>
#include "aes.h"
#include "migration.h"
/**************************************************************/
/* QEMU COW block driver with compression and encryption support */
@@ -74,8 +73,6 @@ typedef struct BDRVQcowState {
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
CoMutex lock;
Error *migration_blocker;
} BDRVQcowState;
static int decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
@@ -95,13 +92,11 @@ static int qcow_probe(const uint8_t *buf, int buf_size, const char *filename)
static int qcow_open(BlockDriverState *bs, int flags)
{
BDRVQcowState *s = bs->opaque;
int len, i, shift, ret;
int len, i, shift;
QCowHeader header;
ret = bdrv_pread(bs->file, 0, &header, sizeof(header));
if (ret < 0) {
if (bdrv_pread(bs->file, 0, &header, sizeof(header)) != sizeof(header))
goto fail;
}
be32_to_cpus(&header.magic);
be32_to_cpus(&header.version);
be64_to_cpus(&header.backing_file_offset);
@@ -111,31 +106,15 @@ static int qcow_open(BlockDriverState *bs, int flags)
be32_to_cpus(&header.crypt_method);
be64_to_cpus(&header.l1_table_offset);
if (header.magic != QCOW_MAGIC) {
ret = -EINVAL;
if (header.magic != QCOW_MAGIC || header.version != QCOW_VERSION)
goto fail;
}
if (header.version != QCOW_VERSION) {
char version[64];
snprintf(version, sizeof(version), "QCOW version %d", header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
ret = -ENOTSUP;
if (header.size <= 1 || header.cluster_bits < 9)
goto fail;
}
if (header.size <= 1 || header.cluster_bits < 9) {
ret = -EINVAL;
if (header.crypt_method > QCOW_CRYPT_AES)
goto fail;
}
if (header.crypt_method > QCOW_CRYPT_AES) {
ret = -EINVAL;
goto fail;
}
s->crypt_method_header = header.crypt_method;
if (s->crypt_method_header) {
if (s->crypt_method_header)
bs->encrypted = 1;
}
s->cluster_bits = header.cluster_bits;
s->cluster_size = 1 << s->cluster_bits;
s->cluster_sectors = 1 << (s->cluster_bits - 9);
@@ -149,52 +128,44 @@ static int qcow_open(BlockDriverState *bs, int flags)
s->l1_size = (header.size + (1LL << shift) - 1) >> shift;
s->l1_table_offset = header.l1_table_offset;
s->l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
ret = bdrv_pread(bs->file, s->l1_table_offset, s->l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
s->l1_table = qemu_malloc(s->l1_size * sizeof(uint64_t));
if (!s->l1_table)
goto fail;
if (bdrv_pread(bs->file, s->l1_table_offset, s->l1_table, s->l1_size * sizeof(uint64_t)) !=
s->l1_size * sizeof(uint64_t))
goto fail;
}
for(i = 0;i < s->l1_size; i++) {
be64_to_cpus(&s->l1_table[i]);
}
/* alloc L2 cache */
s->l2_cache = g_malloc(s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
s->cluster_cache = g_malloc(s->cluster_size);
s->cluster_data = g_malloc(s->cluster_size);
s->l2_cache = qemu_malloc(s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
if (!s->l2_cache)
goto fail;
s->cluster_cache = qemu_malloc(s->cluster_size);
if (!s->cluster_cache)
goto fail;
s->cluster_data = qemu_malloc(s->cluster_size);
if (!s->cluster_data)
goto fail;
s->cluster_cache_offset = -1;
/* read the backing file name */
if (header.backing_file_offset != 0) {
len = header.backing_file_size;
if (len > 1023) {
if (len > 1023)
len = 1023;
}
ret = bdrv_pread(bs->file, header.backing_file_offset,
bs->backing_file, len);
if (ret < 0) {
if (bdrv_pread(bs->file, header.backing_file_offset, bs->backing_file, len) != len)
goto fail;
}
bs->backing_file[len] = '\0';
}
/* Disable migration when qcow images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"qcow", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->l1_table);
g_free(s->l2_cache);
g_free(s->cluster_cache);
g_free(s->cluster_data);
return ret;
qemu_free(s->l1_table);
qemu_free(s->l2_cache);
qemu_free(s->cluster_cache);
qemu_free(s->cluster_data);
return -1;
}
static int qcow_set_key(BlockDriverState *bs, const char *key)
@@ -218,6 +189,24 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
return -1;
if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
return -1;
#if 0
/* test */
{
uint8_t in[16];
uint8_t out[16];
uint8_t tmp[16];
for(i=0;i<16;i++)
in[i] = i;
AES_encrypt(in, tmp, &s->aes_encrypt_key);
AES_decrypt(tmp, out, &s->aes_decrypt_key);
for(i = 0; i < 16; i++)
printf(" %02x", tmp[i]);
printf("\n");
for(i = 0; i < 16; i++)
printf(" %02x", out[i]);
printf("\n");
}
#endif
return 0;
}
@@ -386,16 +375,14 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
return cluster_offset;
}
static int coroutine_fn qcow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
static int qcow_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster, n;
uint64_t cluster_offset;
qemu_co_mutex_lock(&s->lock);
cluster_offset = get_cluster_offset(bs, sector_num << 9, 0, 0, 0, 0);
qemu_co_mutex_unlock(&s->lock);
index_in_cluster = sector_num & (s->cluster_sectors - 1);
n = s->cluster_sectors - index_in_cluster;
if (n > nb_sectors)
@@ -453,205 +440,375 @@ static int decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
return 0;
}
static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
#if 0
static int qcow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
int ret = 0, n;
int ret, index_in_cluster, n;
uint64_t cluster_offset;
struct iovec hd_iov;
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
if (qiov->niov > 1) {
buf = orig_buf = qemu_blockalign(bs, qiov->size);
} else {
orig_buf = NULL;
buf = (uint8_t *)qiov->iov->iov_base;
}
qemu_co_mutex_lock(&s->lock);
while (nb_sectors != 0) {
/* prepare next request */
cluster_offset = get_cluster_offset(bs, sector_num << 9,
0, 0, 0, 0);
while (nb_sectors > 0) {
cluster_offset = get_cluster_offset(bs, sector_num << 9, 0, 0, 0, 0);
index_in_cluster = sector_num & (s->cluster_sectors - 1);
n = s->cluster_sectors - index_in_cluster;
if (n > nb_sectors) {
if (n > nb_sectors)
n = nb_sectors;
}
if (!cluster_offset) {
if (bs->backing_hd) {
/* read from the base image */
hd_iov.iov_base = (void *)buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->backing_hd, sector_num,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
goto fail;
}
ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
if (ret < 0)
return -1;
} else {
/* Note: in this case, no need to wait */
memset(buf, 0, 512 * n);
}
} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
/* add AIO support for compressed blocks ? */
if (decompress_cluster(bs, cluster_offset) < 0) {
goto fail;
}
memcpy(buf,
s->cluster_cache + index_in_cluster * 512, 512 * n);
if (decompress_cluster(bs, cluster_offset) < 0)
return -1;
memcpy(buf, s->cluster_cache + index_in_cluster * 512, 512 * n);
} else {
if ((cluster_offset & 511) != 0) {
goto fail;
}
hd_iov.iov_base = (void *)buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->file,
(cluster_offset >> 9) + index_in_cluster,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
break;
}
ret = bdrv_pread(bs->file, cluster_offset + index_in_cluster * 512, buf, n * 512);
if (ret != n * 512)
return -1;
if (s->crypt_method) {
encrypt_sectors(s, sector_num, buf, buf,
n, 0,
encrypt_sectors(s, sector_num, buf, buf, n, 0,
&s->aes_decrypt_key);
}
}
ret = 0;
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
return 0;
}
#endif
done:
qemu_co_mutex_unlock(&s->lock);
typedef struct QCowAIOCB {
BlockDriverAIOCB common;
int64_t sector_num;
QEMUIOVector *qiov;
uint8_t *buf;
void *orig_buf;
int nb_sectors;
int n;
uint64_t cluster_offset;
uint8_t *cluster_data;
struct iovec hd_iov;
bool is_write;
QEMUBH *bh;
QEMUIOVector hd_qiov;
BlockDriverAIOCB *hd_aiocb;
} QCowAIOCB;
if (qiov->niov > 1) {
qemu_iovec_from_buffer(qiov, orig_buf, qiov->size);
qemu_vfree(orig_buf);
}
return ret;
fail:
ret = -EIO;
goto done;
static void qcow_aio_cancel(BlockDriverAIOCB *blockacb)
{
QCowAIOCB *acb = container_of(blockacb, QCowAIOCB, common);
if (acb->hd_aiocb)
bdrv_aio_cancel(acb->hd_aiocb);
qemu_aio_release(acb);
}
static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static AIOPool qcow_aio_pool = {
.aiocb_size = sizeof(QCowAIOCB),
.cancel = qcow_aio_cancel,
};
static QCowAIOCB *qcow_aio_setup(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int is_write)
{
QCowAIOCB *acb;
acb = qemu_aio_get(&qcow_aio_pool, bs, cb, opaque);
if (!acb)
return NULL;
acb->hd_aiocb = NULL;
acb->sector_num = sector_num;
acb->qiov = qiov;
acb->is_write = is_write;
if (qiov->niov > 1) {
acb->buf = acb->orig_buf = qemu_blockalign(bs, qiov->size);
if (is_write)
qemu_iovec_to_buffer(qiov, acb->buf);
} else {
acb->buf = (uint8_t *)qiov->iov->iov_base;
}
acb->nb_sectors = nb_sectors;
acb->n = 0;
acb->cluster_offset = 0;
return acb;
}
static void qcow_aio_read_cb(void *opaque, int ret);
static void qcow_aio_write_cb(void *opaque, int ret);
static void qcow_aio_rw_bh(void *opaque)
{
QCowAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
if (acb->is_write) {
qcow_aio_write_cb(opaque, 0);
} else {
qcow_aio_read_cb(opaque, 0);
}
}
static int qcow_schedule_bh(QEMUBHFunc *cb, QCowAIOCB *acb)
{
if (acb->bh) {
return -EIO;
}
acb->bh = qemu_bh_new(cb, acb);
if (!acb->bh) {
return -EIO;
}
qemu_bh_schedule(acb->bh);
return 0;
}
static void qcow_aio_read_cb(void *opaque, int ret)
{
QCowAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
acb->hd_aiocb = NULL;
if (ret < 0)
goto done;
redo:
/* post process the read buffer */
if (!acb->cluster_offset) {
/* nothing to do */
} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
/* nothing to do */
} else {
if (s->crypt_method) {
encrypt_sectors(s, acb->sector_num, acb->buf, acb->buf,
acb->n, 0,
&s->aes_decrypt_key);
}
}
acb->nb_sectors -= acb->n;
acb->sector_num += acb->n;
acb->buf += acb->n * 512;
if (acb->nb_sectors == 0) {
/* request completed */
ret = 0;
goto done;
}
/* prepare next AIO request */
acb->cluster_offset = get_cluster_offset(bs, acb->sector_num << 9,
0, 0, 0, 0);
index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
acb->n = s->cluster_sectors - index_in_cluster;
if (acb->n > acb->nb_sectors)
acb->n = acb->nb_sectors;
if (!acb->cluster_offset) {
if (bs->backing_hd) {
/* read from the base image */
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = acb->n * 512;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_readv(bs->backing_hd, acb->sector_num,
&acb->hd_qiov, acb->n, qcow_aio_read_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
} else {
/* Note: in this case, no need to wait */
memset(acb->buf, 0, 512 * acb->n);
goto redo;
}
} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
/* add AIO support for compressed blocks ? */
if (decompress_cluster(bs, acb->cluster_offset) < 0) {
ret = -EIO;
goto done;
}
memcpy(acb->buf,
s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
goto redo;
} else {
if ((acb->cluster_offset & 511) != 0) {
ret = -EIO;
goto done;
}
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = acb->n * 512;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_readv(bs->file,
(acb->cluster_offset >> 9) + index_in_cluster,
&acb->hd_qiov, acb->n, qcow_aio_read_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
}
return;
done:
if (acb->qiov->niov > 1) {
qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
qemu_vfree(acb->orig_buf);
}
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *qcow_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
QCowAIOCB *acb;
int ret;
acb = qcow_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
if (!acb)
return NULL;
ret = qcow_schedule_bh(qcow_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
return &acb->common;
}
static void qcow_aio_write_cb(void *opaque, int ret)
{
QCowAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
uint64_t cluster_offset;
const uint8_t *src_buf;
int ret = 0, n;
uint8_t *cluster_data = NULL;
struct iovec hd_iov;
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
acb->hd_aiocb = NULL;
if (ret < 0)
goto done;
acb->nb_sectors -= acb->n;
acb->sector_num += acb->n;
acb->buf += acb->n * 512;
if (acb->nb_sectors == 0) {
/* request completed */
ret = 0;
goto done;
}
index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
acb->n = s->cluster_sectors - index_in_cluster;
if (acb->n > acb->nb_sectors)
acb->n = acb->nb_sectors;
cluster_offset = get_cluster_offset(bs, acb->sector_num << 9, 1, 0,
index_in_cluster,
index_in_cluster + acb->n);
if (!cluster_offset || (cluster_offset & 511) != 0) {
ret = -EIO;
goto done;
}
if (s->crypt_method) {
if (!acb->cluster_data) {
acb->cluster_data = qemu_mallocz(s->cluster_size);
if (!acb->cluster_data) {
ret = -ENOMEM;
goto done;
}
}
encrypt_sectors(s, acb->sector_num, acb->cluster_data, acb->buf,
acb->n, 1, &s->aes_encrypt_key);
src_buf = acb->cluster_data;
} else {
src_buf = acb->buf;
}
acb->hd_iov.iov_base = (void *)src_buf;
acb->hd_iov.iov_len = acb->n * 512;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file,
(cluster_offset >> 9) + index_in_cluster,
&acb->hd_qiov, acb->n,
qcow_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
return;
done:
if (acb->qiov->niov > 1)
qemu_vfree(acb->orig_buf);
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *qcow_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVQcowState *s = bs->opaque;
QCowAIOCB *acb;
int ret;
s->cluster_cache_offset = -1; /* disable compressed cache */
if (qiov->niov > 1) {
buf = orig_buf = qemu_blockalign(bs, qiov->size);
qemu_iovec_to_buffer(qiov, buf);
} else {
orig_buf = NULL;
buf = (uint8_t *)qiov->iov->iov_base;
acb = qcow_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
if (!acb)
return NULL;
ret = qcow_schedule_bh(qcow_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
qemu_co_mutex_lock(&s->lock);
while (nb_sectors != 0) {
index_in_cluster = sector_num & (s->cluster_sectors - 1);
n = s->cluster_sectors - index_in_cluster;
if (n > nb_sectors) {
n = nb_sectors;
}
cluster_offset = get_cluster_offset(bs, sector_num << 9, 1, 0,
index_in_cluster,
index_in_cluster + n);
if (!cluster_offset || (cluster_offset & 511) != 0) {
ret = -EIO;
break;
}
if (s->crypt_method) {
if (!cluster_data) {
cluster_data = g_malloc0(s->cluster_size);
}
encrypt_sectors(s, sector_num, cluster_data, buf,
n, 1, &s->aes_encrypt_key);
src_buf = cluster_data;
} else {
src_buf = buf;
}
hd_iov.iov_base = (void *)src_buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_writev(bs->file,
(cluster_offset >> 9) + index_in_cluster,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
break;
}
ret = 0;
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
qemu_co_mutex_unlock(&s->lock);
if (qiov->niov > 1) {
qemu_vfree(orig_buf);
}
g_free(cluster_data);
return ret;
return &acb->common;
}
static void qcow_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
g_free(s->l1_table);
g_free(s->l2_cache);
g_free(s->cluster_cache);
g_free(s->cluster_data);
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
qemu_free(s->l1_table);
qemu_free(s->l2_cache);
qemu_free(s->cluster_cache);
qemu_free(s->cluster_data);
}
static int qcow_create(const char *filename, QEMUOptionParameter *options)
{
int header_size, backing_filename_len, l1_size, shift, i;
int fd, header_size, backing_filename_len, l1_size, i, shift;
QCowHeader header;
uint8_t *tmp;
uint64_t tmp;
int64_t total_size = 0;
const char *backing_file = NULL;
int flags = 0;
int ret;
BlockDriverState *qcow_bs;
/* Read out options */
while (options && options->name) {
@@ -665,21 +822,9 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options)
options++;
}
ret = bdrv_create_file(filename, options);
if (ret < 0) {
return ret;
}
ret = bdrv_file_open(&qcow_bs, filename, BDRV_O_RDWR);
if (ret < 0) {
return ret;
}
ret = bdrv_truncate(qcow_bs, 0);
if (ret < 0) {
goto exit;
}
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0)
return -errno;
memset(&header, 0, sizeof(header));
header.magic = cpu_to_be32(QCOW_MAGIC);
header.version = cpu_to_be32(QCOW_VERSION);
@@ -715,34 +860,33 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options)
}
/* write all the data */
ret = bdrv_pwrite(qcow_bs, 0, &header, sizeof(header));
ret = qemu_write_full(fd, &header, sizeof(header));
if (ret != sizeof(header)) {
ret = -errno;
goto exit;
}
if (backing_file) {
ret = bdrv_pwrite(qcow_bs, sizeof(header),
backing_file, backing_filename_len);
ret = qemu_write_full(fd, backing_file, backing_filename_len);
if (ret != backing_filename_len) {
ret = -errno;
goto exit;
}
}
lseek(fd, header_size, SEEK_SET);
tmp = 0;
for(i = 0;i < l1_size; i++) {
ret = qemu_write_full(fd, &tmp, sizeof(tmp));
if (ret != sizeof(tmp)) {
ret = -errno;
goto exit;
}
}
tmp = g_malloc0(BDRV_SECTOR_SIZE);
for (i = 0; i < ((sizeof(uint64_t)*l1_size + BDRV_SECTOR_SIZE - 1)/
BDRV_SECTOR_SIZE); i++) {
ret = bdrv_pwrite(qcow_bs, header_size +
BDRV_SECTOR_SIZE*i, tmp, BDRV_SECTOR_SIZE);
if (ret != BDRV_SECTOR_SIZE) {
g_free(tmp);
goto exit;
}
}
g_free(tmp);
ret = 0;
exit:
bdrv_delete(qcow_bs);
close(fd);
return ret;
}
@@ -781,7 +925,9 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
if (nb_sectors != s->cluster_sectors)
return -EINVAL;
out_buf = g_malloc(s->cluster_size + (s->cluster_size / 1000) + 128);
out_buf = qemu_malloc(s->cluster_size + (s->cluster_size / 1000) + 128);
if (!out_buf)
return -1;
/* best compression, small window, no zlib header */
memset(&strm, 0, sizeof(strm));
@@ -789,8 +935,8 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
Z_DEFLATED, -12,
9, Z_DEFAULT_STRATEGY);
if (ret != 0) {
ret = -EINVAL;
goto fail;
qemu_free(out_buf);
return -1;
}
strm.avail_in = s->cluster_size;
@@ -800,9 +946,9 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
ret = deflate(&strm, Z_FINISH);
if (ret != Z_STREAM_END && ret != Z_OK) {
qemu_free(out_buf);
deflateEnd(&strm);
ret = -EINVAL;
goto fail;
return -1;
}
out_len = strm.next_out - out_buf;
@@ -810,29 +956,30 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
if (ret != Z_STREAM_END || out_len >= s->cluster_size) {
/* could not compress: write normal cluster */
ret = bdrv_write(bs, sector_num, buf, s->cluster_sectors);
if (ret < 0) {
goto fail;
}
bdrv_write(bs, sector_num, buf, s->cluster_sectors);
} else {
cluster_offset = get_cluster_offset(bs, sector_num << 9, 2,
out_len, 0, 0);
if (cluster_offset == 0) {
ret = -EIO;
goto fail;
}
cluster_offset &= s->cluster_offset_mask;
ret = bdrv_pwrite(bs->file, cluster_offset, out_buf, out_len);
if (ret < 0) {
goto fail;
if (bdrv_pwrite(bs->file, cluster_offset, out_buf, out_len) != out_len) {
qemu_free(out_buf);
return -1;
}
}
ret = 0;
fail:
g_free(out_buf);
return ret;
qemu_free(out_buf);
return 0;
}
static int qcow_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static BlockDriverAIOCB *qcow_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int qcow_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
@@ -869,15 +1016,15 @@ static BlockDriver bdrv_qcow = {
.bdrv_open = qcow_open,
.bdrv_close = qcow_close,
.bdrv_create = qcow_create,
.bdrv_co_readv = qcow_co_readv,
.bdrv_co_writev = qcow_co_writev,
.bdrv_co_is_allocated = qcow_co_is_allocated,
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,
.bdrv_write_compressed = qcow_write_compressed,
.bdrv_get_info = qcow_get_info,
.bdrv_flush = qcow_flush,
.bdrv_is_allocated = qcow_is_allocated,
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,
.bdrv_aio_readv = qcow_aio_readv,
.bdrv_aio_writev = qcow_aio_writev,
.bdrv_aio_flush = qcow_aio_flush,
.bdrv_write_compressed = qcow_write_compressed,
.bdrv_get_info = qcow_get_info,
.create_options = qcow_create_options,
};

View File

@@ -25,7 +25,6 @@
#include "block_int.h"
#include "qemu-common.h"
#include "qcow2.h"
#include "trace.h"
typedef struct Qcow2CachedTable {
void* table;
@@ -50,9 +49,9 @@ Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
Qcow2Cache *c;
int i;
c = g_malloc0(sizeof(*c));
c = qemu_mallocz(sizeof(*c));
c->size = num_tables;
c->entries = g_malloc0(sizeof(*c->entries) * num_tables);
c->entries = qemu_mallocz(sizeof(*c->entries) * num_tables);
c->writethrough = writethrough;
for (i = 0; i < c->size; i++) {
@@ -71,8 +70,8 @@ int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c)
qemu_vfree(c->entries[i].table);
}
g_free(c->entries);
g_free(c);
qemu_free(c->entries);
qemu_free(c);
return 0;
}
@@ -101,9 +100,6 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
return 0;
}
trace_qcow2_cache_entry_flush(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (c->depends) {
ret = qcow2_cache_flush_dependency(bs, c);
} else if (c->depends_on_flush) {
@@ -136,13 +132,10 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
{
BDRVQcowState *s = bs->opaque;
int result = 0;
int ret;
int i;
trace_qcow2_cache_flush(qemu_coroutine_self(), c == s->l2_table_cache);
for (i = 0; i < c->size; i++) {
ret = qcow2_cache_entry_flush(bs, c, i);
if (ret < 0 && result != -ENOSPC) {
@@ -225,9 +218,6 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
int i;
int ret;
trace_qcow2_cache_get(qemu_coroutine_self(), c == s->l2_table_cache,
offset, read_from_disk);
/* Check if the table is already cached */
for (i = 0; i < c->size; i++) {
if (c->entries[i].offset == offset) {
@@ -237,8 +227,6 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
/* If not, write a table back and replace it */
i = qcow2_cache_find_entry_to_replace(c);
trace_qcow2_cache_get_replace_entry(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (i < 0) {
return i;
}
@@ -248,8 +236,6 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
return ret;
}
trace_qcow2_cache_get_read(qemu_coroutine_self(),
c == s->l2_table_cache, i);
c->entries[i].offset = 0;
if (read_from_disk) {
if (c == s->l2_table_cache) {
@@ -272,10 +258,6 @@ found:
c->entries[i].cache_hits++;
c->entries[i].ref++;
*table = c->entries[i].table;
trace_qcow2_cache_get_done(qemu_coroutine_self(),
c == s->l2_table_cache, i);
return 0;
}

View File

@@ -27,7 +27,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "block/qcow2.h"
#include "trace.h"
int qcow2_grow_l1_table(BlockDriverState *bs, int min_size, bool exact_size)
{
@@ -54,18 +53,18 @@ int qcow2_grow_l1_table(BlockDriverState *bs, int min_size, bool exact_size)
}
#ifdef DEBUG_ALLOC2
fprintf(stderr, "grow l1_table from %d to %d\n", s->l1_size, new_l1_size);
printf("grow l1_table from %d to %d\n", s->l1_size, new_l1_size);
#endif
new_l1_size2 = sizeof(uint64_t) * new_l1_size;
new_l1_table = g_malloc0(align_offset(new_l1_size2, 512));
new_l1_table = qemu_mallocz(align_offset(new_l1_size2, 512));
memcpy(new_l1_table, s->l1_table, s->l1_size * sizeof(uint64_t));
/* write new table (align to cluster) */
BLKDBG_EVENT(bs->file, BLKDBG_L1_GROW_ALLOC_TABLE);
new_l1_table_offset = qcow2_alloc_clusters(bs, new_l1_size2);
if (new_l1_table_offset < 0) {
g_free(new_l1_table);
qemu_free(new_l1_table);
return new_l1_table_offset;
}
@@ -91,14 +90,14 @@ int qcow2_grow_l1_table(BlockDriverState *bs, int min_size, bool exact_size)
if (ret < 0) {
goto fail;
}
g_free(s->l1_table);
qemu_free(s->l1_table);
qcow2_free_clusters(bs, s->l1_table_offset, s->l1_size * sizeof(uint64_t));
s->l1_table_offset = new_l1_table_offset;
s->l1_table = new_l1_table;
s->l1_size = new_l1_size;
return 0;
fail:
g_free(new_l1_table);
qemu_free(new_l1_table);
qcow2_free_clusters(bs, new_l1_table_offset, new_l1_size2);
return ret;
}
@@ -171,8 +170,6 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
old_l2_offset = s->l1_table[l1_index];
trace_qcow2_l2_allocate(bs, l1_index);
/* allocate a new l2 entry */
l2_offset = qcow2_alloc_clusters(bs, s->l2_size * sizeof(uint64_t));
@@ -187,7 +184,6 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
/* allocate a new entry in the l2 cache */
trace_qcow2_l2_allocate_get_empty(bs, l1_index);
ret = qcow2_cache_get_empty(bs, s->l2_table_cache, l2_offset, (void**) table);
if (ret < 0) {
return ret;
@@ -195,7 +191,7 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
l2_table = *table;
if ((old_l2_offset & L1E_OFFSET_MASK) == 0) {
if (old_l2_offset == 0) {
/* if there was no old l2 table, clear the new table */
memset(l2_table, 0, s->l2_size * sizeof(uint64_t));
} else {
@@ -203,8 +199,7 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
/* if there was an old l2 table, read it from the disk */
BLKDBG_EVENT(bs->file, BLKDBG_L2_ALLOC_COW_READ);
ret = qcow2_cache_get(bs, s->l2_table_cache,
old_l2_offset & L1E_OFFSET_MASK,
ret = qcow2_cache_get(bs, s->l2_table_cache, old_l2_offset,
(void**) &old_table);
if (ret < 0) {
goto fail;
@@ -221,7 +216,6 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
/* write the l2 table to the file */
BLKDBG_EVENT(bs->file, BLKDBG_L2_ALLOC_WRITE);
trace_qcow2_l2_allocate_write_l2(bs, l1_index);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
ret = qcow2_cache_flush(bs, s->l2_table_cache);
if (ret < 0) {
@@ -229,7 +223,6 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
}
/* update the L1 entry */
trace_qcow2_l2_allocate_write_l1(bs, l1_index);
s->l1_table[l1_index] = l2_offset | QCOW_OFLAG_COPIED;
ret = write_l1_entry(bs, l1_index);
if (ret < 0) {
@@ -237,54 +230,36 @@ static int l2_allocate(BlockDriverState *bs, int l1_index, uint64_t **table)
}
*table = l2_table;
trace_qcow2_l2_allocate_done(bs, l1_index, 0);
return 0;
fail:
trace_qcow2_l2_allocate_done(bs, l1_index, ret);
qcow2_cache_put(bs, s->l2_table_cache, (void**) table);
s->l1_table[l1_index] = old_l2_offset;
return ret;
}
/*
* Checks how many clusters in a given L2 table are contiguous in the image
* file. As soon as one of the flags in the bitmask stop_flags changes compared
* to the first cluster, the search is stopped and the cluster is not counted
* as contiguous. (This allows it, for example, to stop at the first compressed
* cluster, which may require different handling.)
*/
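/* Example (flag bits ignored for illustration): starting at index 0 with
* 64 KiB clusters, entries mapping to 0x50000, 0x60000, 0x70000, 0x90000
* give a run of 3; the jump to 0x90000 breaks contiguity.
*/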
static int count_contiguous_clusters(uint64_t nb_clusters, int cluster_size,
uint64_t *l2_table, uint64_t start, uint64_t stop_flags)
uint64_t *l2_table, uint64_t start, uint64_t mask)
{
int i;
uint64_t mask = stop_flags | L2E_OFFSET_MASK;
uint64_t offset = be64_to_cpu(l2_table[0]) & mask;
uint64_t offset = be64_to_cpu(l2_table[0]) & ~mask;
if (!offset)
return 0;
for (i = start; i < start + nb_clusters; i++) {
uint64_t l2_entry = be64_to_cpu(l2_table[i]) & mask;
if (offset + (uint64_t) i * cluster_size != l2_entry) {
for (i = start; i < start + nb_clusters; i++)
if (offset + (uint64_t) i * cluster_size != (be64_to_cpu(l2_table[i]) & ~mask))
break;
}
}
return (i - start);
}
static int count_contiguous_free_clusters(uint64_t nb_clusters, uint64_t *l2_table)
{
int i;
int i = 0;
for (i = 0; i < nb_clusters; i++) {
int type = qcow2_get_cluster_type(be64_to_cpu(l2_table[i]));
if (type != QCOW2_CLUSTER_UNALLOCATED) {
break;
}
}
while(nb_clusters-- && l2_table[i] == 0)
i++;
return i;
}
@@ -314,62 +289,89 @@ void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
}
}
static int coroutine_fn copy_sectors(BlockDriverState *bs,
uint64_t start_sect,
uint64_t cluster_offset,
int n_start, int n_end)
static int qcow2_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVQcowState *s = bs->opaque;
QEMUIOVector qiov;
int ret, index_in_cluster, n, n1;
uint64_t cluster_offset;
struct iovec iov;
QEMUIOVector qiov;
while (nb_sectors > 0) {
n = nb_sectors;
ret = qcow2_get_cluster_offset(bs, sector_num << 9, &n,
&cluster_offset);
if (ret < 0) {
return ret;
}
index_in_cluster = sector_num & (s->cluster_sectors - 1);
if (!cluster_offset) {
if (bs->backing_hd) {
/* read from the base image */
iov.iov_base = buf;
iov.iov_len = n * 512;
qemu_iovec_init_external(&qiov, &iov, 1);
n1 = qcow2_backing_read1(bs->backing_hd, &qiov, sector_num, n);
if (n1 > 0) {
BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING);
ret = bdrv_read(bs->backing_hd, sector_num, buf, n1);
if (ret < 0)
return -1;
}
} else {
memset(buf, 0, 512 * n);
}
} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
if (qcow2_decompress_cluster(bs, cluster_offset) < 0)
return -1;
memcpy(buf, s->cluster_cache + index_in_cluster * 512, 512 * n);
} else {
BLKDBG_EVENT(bs->file, BLKDBG_READ);
ret = bdrv_pread(bs->file, cluster_offset + index_in_cluster * 512, buf, n * 512);
if (ret != n * 512)
return -1;
if (s->crypt_method) {
qcow2_encrypt_sectors(s, sector_num, buf, buf, n, 0,
&s->aes_decrypt_key);
}
}
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
return 0;
}
static int copy_sectors(BlockDriverState *bs, uint64_t start_sect,
uint64_t cluster_offset, int n_start, int n_end)
{
BDRVQcowState *s = bs->opaque;
int n, ret;
/*
* If this is the last cluster and it is only partially used, we must only
* copy until the end of the image, or bdrv_check_request will fail for the
* bdrv_read/write calls below.
*/
if (start_sect + n_end > bs->total_sectors) {
n_end = bs->total_sectors - start_sect;
}
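/* e.g. with 128-sector (64 KiB) clusters, if the image ends 32 sectors
* into this cluster, n_end is clamped from 128 down to 32 so the copy
* below stays within bs->total_sectors.
*/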
n = n_end - n_start;
if (n <= 0) {
if (n <= 0)
return 0;
}
iov.iov_len = n * BDRV_SECTOR_SIZE;
iov.iov_base = qemu_blockalign(bs, iov.iov_len);
qemu_iovec_init_external(&qiov, &iov, 1);
BLKDBG_EVENT(bs->file, BLKDBG_COW_READ);
/* Call .bdrv_co_readv() directly instead of using the public block-layer
* interface. This avoids double I/O throttling and request tracking,
* which can lead to deadlock when block layer copy-on-read is enabled.
*/
ret = bs->drv->bdrv_co_readv(bs, start_sect + n_start, n, &qiov);
if (ret < 0) {
goto out;
}
ret = qcow2_read(bs, start_sect + n_start, s->cluster_data, n);
if (ret < 0)
return ret;
if (s->crypt_method) {
qcow2_encrypt_sectors(s, start_sect + n_start,
iov.iov_base, iov.iov_base, n, 1,
s->cluster_data,
s->cluster_data, n, 1,
&s->aes_encrypt_key);
}
BLKDBG_EVENT(bs->file, BLKDBG_COW_WRITE);
ret = bdrv_co_writev(bs->file, (cluster_offset >> 9) + n_start, n, &qiov);
if (ret < 0) {
goto out;
}
ret = 0;
out:
qemu_vfree(iov.iov_base);
return ret;
ret = bdrv_write(bs->file, (cluster_offset >> 9) + n_start,
s->cluster_data, n);
if (ret < 0)
return ret;
return 0;
}
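The clamp at the top of copy_sectors is plain arithmetic on 512-byte sector counts; a tiny standalone check of the boundary case, with numbers chosen only for illustration:

/* Illustrative: the partial-last-cluster clamp performed by copy_sectors. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t total_sectors = 1000; /* image length in sectors */
    uint64_t start_sect    = 896;  /* first sector of the cluster being copied */
    int n_start = 0;
    int n_end   = 128;             /* one 64 KiB cluster = 128 sectors */

    if (start_sect + n_end > total_sectors) {
        n_end = total_sectors - start_sect; /* clamp to 104 */
    }
    printf("sectors to copy: %d\n", n_end - n_start); /* prints 104 */
    return 0;
}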
@@ -379,14 +381,16 @@ out:
* For a given offset of the disk image, find the cluster offset in
* qcow2 file. The offset is stored in *cluster_offset.
*
* on entry, *num is the number of contiguous sectors we'd like to
* on entry, *num is the number of contiguous clusters we'd like to
* access following offset.
*
* on exit, *num is the number of contiguous sectors we can read.
* on exit, *num is the number of contiguous clusters we can read.
*
* Return 0, if the offset is found
* Return -errno, otherwise.
*
* Returns the cluster type (QCOW2_CLUSTER_*) on success, -errno in error
* cases.
*/
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset)
{
@@ -422,19 +426,19 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
/* seek the l2 offset in the l1 table */
l1_index = offset >> l1_bits;
if (l1_index >= s->l1_size) {
ret = QCOW2_CLUSTER_UNALLOCATED;
if (l1_index >= s->l1_size)
goto out;
}
l2_offset = s->l1_table[l1_index] & L1E_OFFSET_MASK;
if (!l2_offset) {
ret = QCOW2_CLUSTER_UNALLOCATED;
l2_offset = s->l1_table[l1_index];
/* seek the l2 table of the given l2 offset */
if (!l2_offset)
goto out;
}
/* load the l2 table in memory */
l2_offset &= ~QCOW_OFLAG_COPIED;
ret = l2_load(bs, l2_offset, &l2_table);
if (ret < 0) {
return ret;
@@ -446,46 +450,26 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
*cluster_offset = be64_to_cpu(l2_table[l2_index]);
nb_clusters = size_to_clusters(s, nb_needed << 9);
ret = qcow2_get_cluster_type(*cluster_offset);
switch (ret) {
case QCOW2_CLUSTER_COMPRESSED:
/* Compressed clusters can only be processed one by one */
c = 1;
*cluster_offset &= L2E_COMPRESSED_OFFSET_SIZE_MASK;
break;
case QCOW2_CLUSTER_ZERO:
c = count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], 0,
QCOW_OFLAG_COMPRESSED | QCOW_OFLAG_ZERO);
*cluster_offset = 0;
break;
case QCOW2_CLUSTER_UNALLOCATED:
if (!*cluster_offset) {
/* how many empty clusters ? */
c = count_contiguous_free_clusters(nb_clusters, &l2_table[l2_index]);
*cluster_offset = 0;
break;
case QCOW2_CLUSTER_NORMAL:
} else {
/* how many allocated clusters ? */
c = count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], 0,
QCOW_OFLAG_COMPRESSED | QCOW_OFLAG_ZERO);
*cluster_offset &= L2E_OFFSET_MASK;
break;
default:
abort();
&l2_table[l2_index], 0, QCOW_OFLAG_COPIED);
}
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
nb_available = (c * s->cluster_sectors);
nb_available = (c * s->cluster_sectors);
out:
if (nb_available > nb_needed)
nb_available = nb_needed;
*num = nb_available - index_in_cluster;
return ret;
*cluster_offset &=~QCOW_OFLAG_COPIED;
return 0;
}
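For readers tracing the lookup above, the L1/L2 index arithmetic can be exercised on its own. This sketch assumes the usual qcow2 layout (64 KiB clusters, 8-byte L2 entries, so l2_bits = cluster_bits - 3 and l1_bits = cluster_bits + l2_bits); the real code takes these values from the image header.

/* Illustrative: split a guest offset into L1 index, L2 index and the
 * byte offset inside the cluster, as the lookup code does. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int cluster_bits = 16;            /* 64 KiB clusters */
    const int l2_bits = cluster_bits - 3;   /* 8192 entries per L2 table */
    const int l1_bits = cluster_bits + l2_bits;

    uint64_t offset = 5ULL * 1024 * 1024 * 1024; /* 5 GiB into the guest disk */

    uint64_t l1_index   = offset >> l1_bits;
    uint64_t l2_index   = (offset >> cluster_bits) & ((1ULL << l2_bits) - 1);
    uint64_t in_cluster = offset & ((1ULL << cluster_bits) - 1);

    /* prints: l1=10 l2=0 in_cluster=0 */
    printf("l1=%llu l2=%llu in_cluster=%llu\n",
           (unsigned long long)l1_index,
           (unsigned long long)l2_index,
           (unsigned long long)in_cluster);
    return 0;
}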
/*
@@ -501,6 +485,7 @@ out:
*/
static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
uint64_t **new_l2_table,
uint64_t *new_l2_offset,
int *new_l2_index)
{
BDRVQcowState *s = bs->opaque;
@@ -518,13 +503,13 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
return ret;
}
}
l2_offset = s->l1_table[l1_index] & L1E_OFFSET_MASK;
l2_offset = s->l1_table[l1_index];
/* seek the l2 table of the given l2 offset */
if (s->l1_table[l1_index] & QCOW_OFLAG_COPIED) {
if (l2_offset & QCOW_OFLAG_COPIED) {
/* load the l2 table in memory */
l2_offset &= ~QCOW_OFLAG_COPIED;
ret = l2_load(bs, l2_offset, &l2_table);
if (ret < 0) {
return ret;
@@ -540,7 +525,7 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
if (l2_offset) {
qcow2_free_clusters(bs, l2_offset, s->l2_size * sizeof(uint64_t));
}
l2_offset = s->l1_table[l1_index] & L1E_OFFSET_MASK;
l2_offset = s->l1_table[l1_index] & ~QCOW_OFLAG_COPIED;
}
/* find the cluster offset for the given disk offset */
@@ -548,6 +533,7 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
l2_index = (offset >> s->cluster_bits) & (s->l2_size - 1);
*new_l2_table = l2_table;
*new_l2_offset = l2_offset;
*new_l2_index = l2_index;
return 0;
@@ -572,22 +558,21 @@ uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
{
BDRVQcowState *s = bs->opaque;
int l2_index, ret;
uint64_t *l2_table;
uint64_t l2_offset, *l2_table;
int64_t cluster_offset;
int nb_csectors;
ret = get_cluster_table(bs, offset, &l2_table, &l2_index);
ret = get_cluster_table(bs, offset, &l2_table, &l2_offset, &l2_index);
if (ret < 0) {
return 0;
}
/* Compression can't overwrite anything. Fail if the cluster was already
* allocated. */
cluster_offset = be64_to_cpu(l2_table[l2_index]);
if (cluster_offset & L2E_OFFSET_MASK) {
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
return 0;
}
if (cluster_offset & QCOW_OFLAG_COPIED)
return cluster_offset & ~QCOW_OFLAG_COPIED;
if (cluster_offset)
qcow2_free_any_clusters(bs, cluster_offset, 1);
cluster_offset = qcow2_alloc_bytes(bs, compressed_size);
if (cluster_offset < 0) {
@@ -620,24 +605,20 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
{
BDRVQcowState *s = bs->opaque;
int i, j = 0, l2_index, ret;
uint64_t *old_cluster, start_sect, *l2_table;
uint64_t cluster_offset = m->alloc_offset;
uint64_t *old_cluster, start_sect, l2_offset, *l2_table;
uint64_t cluster_offset = m->cluster_offset;
bool cow = false;
trace_qcow2_cluster_link_l2(qemu_coroutine_self(), m->nb_clusters);
if (m->nb_clusters == 0)
return 0;
old_cluster = g_malloc(m->nb_clusters * sizeof(uint64_t));
old_cluster = qemu_malloc(m->nb_clusters * sizeof(uint64_t));
/* copy content of unmodified sectors */
start_sect = (m->offset & ~(s->cluster_size - 1)) >> 9;
if (m->n_start) {
cow = true;
qemu_co_mutex_unlock(&s->lock);
ret = copy_sectors(bs, start_sect, cluster_offset, 0, m->n_start);
qemu_co_mutex_lock(&s->lock);
if (ret < 0)
goto err;
}
@@ -645,10 +626,8 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
if (m->nb_available & (s->cluster_sectors - 1)) {
uint64_t end = m->nb_available & ~(uint64_t)(s->cluster_sectors - 1);
cow = true;
qemu_co_mutex_unlock(&s->lock);
ret = copy_sectors(bs, start_sect + end, cluster_offset + (end << 9),
m->nb_available - end, s->cluster_sectors);
qemu_co_mutex_lock(&s->lock);
if (ret < 0)
goto err;
}
@@ -665,7 +644,7 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
}
qcow2_cache_set_dependency(bs, s->l2_table_cache, s->refcount_block_cache);
ret = get_cluster_table(bs, m->offset, &l2_table, &l2_index);
ret = get_cluster_table(bs, m->offset, &l2_table, &l2_offset, &l2_index);
if (ret < 0) {
goto err;
}
@@ -697,77 +676,98 @@ int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m)
*/
if (j != 0) {
for (i = 0; i < j; i++) {
qcow2_free_any_clusters(bs, be64_to_cpu(old_cluster[i]), 1);
qcow2_free_any_clusters(bs,
be64_to_cpu(old_cluster[i]) & ~QCOW_OFLAG_COPIED, 1);
}
}
ret = 0;
err:
g_free(old_cluster);
qemu_free(old_cluster);
return ret;
}
/*
* Returns the number of contiguous clusters that can be used for an allocating
* write, but require COW to be performed (this includes yet unallocated space,
* which must be copied from the backing file)
* alloc_cluster_offset
*
* For a given offset of the disk image, return cluster offset in qcow2 file.
* If the offset is not found, allocate a new cluster.
*
* If the cluster was already allocated, m->nb_clusters is set to 0,
* m->depends_on is set to NULL and the other fields in m are meaningless.
*
* If the cluster is newly allocated, m->nb_clusters is set to the number of
* contiguous clusters that have been allocated. This may be 0 if the request
* conflicts with another write request in flight; in this case, m->depends_on
* is set and the remaining fields of m are meaningless.
*
* If m->nb_clusters is non-zero, the other fields of m are valid and contain
* information about the first allocated cluster.
*
* Return 0 on success and -errno in error cases
*/
static int count_cow_clusters(BDRVQcowState *s, int nb_clusters,
uint64_t *l2_table, int l2_index)
{
int i;
for (i = 0; i < nb_clusters; i++) {
uint64_t l2_entry = be64_to_cpu(l2_table[l2_index + i]);
int cluster_type = qcow2_get_cluster_type(l2_entry);
switch(cluster_type) {
case QCOW2_CLUSTER_NORMAL:
if (l2_entry & QCOW_OFLAG_COPIED) {
goto out;
}
break;
case QCOW2_CLUSTER_UNALLOCATED:
case QCOW2_CLUSTER_COMPRESSED:
case QCOW2_CLUSTER_ZERO:
break;
default:
abort();
}
}
out:
assert(i <= nb_clusters);
return i;
}
/*
* Allocates new clusters for the given guest_offset.
*
* At most *nb_clusters are allocated, and on return *nb_clusters is updated to
* contain the number of clusters that have been allocated and are contiguous
* in the image file.
*
* If *host_offset is non-zero, it specifies the offset in the image file at
* which the new clusters must start. *nb_clusters can be 0 on return in this
* case if the cluster at host_offset is already in use. If *host_offset is
* zero, the clusters can be allocated anywhere in the image file.
*
* *host_offset is updated to contain the offset into the image file at which
* the first allocated cluster starts.
*
* Return 0 on success and -errno in error cases. -EAGAIN means that the
* function has been waiting for another request and the allocation must be
* restarted, but the whole request should not be failed.
*/
static int do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
uint64_t *host_offset, unsigned int *nb_clusters)
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int n_start, int n_end, int *num, QCowL2Meta *m)
{
BDRVQcowState *s = bs->opaque;
int l2_index, ret;
uint64_t l2_offset, *l2_table;
int64_t cluster_offset;
unsigned int nb_clusters, i = 0;
QCowL2Meta *old_alloc;
trace_qcow2_do_alloc_clusters_offset(qemu_coroutine_self(), guest_offset,
*host_offset, *nb_clusters);
ret = get_cluster_table(bs, offset, &l2_table, &l2_offset, &l2_index);
if (ret < 0) {
return ret;
}
nb_clusters = size_to_clusters(s, n_end << 9);
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
cluster_offset = be64_to_cpu(l2_table[l2_index]);
/* We keep all QCOW_OFLAG_COPIED clusters */
if (cluster_offset & QCOW_OFLAG_COPIED) {
nb_clusters = count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], 0, 0);
cluster_offset &= ~QCOW_OFLAG_COPIED;
m->nb_clusters = 0;
m->depends_on = NULL;
goto out;
}
/* for the moment, multiple compressed clusters are not managed */
if (cluster_offset & QCOW_OFLAG_COMPRESSED)
nb_clusters = 1;
/* how many available clusters ? */
while (i < nb_clusters) {
i += count_contiguous_clusters(nb_clusters - i, s->cluster_size,
&l2_table[l2_index], i, 0);
if ((i >= nb_clusters) || be64_to_cpu(l2_table[l2_index + i])) {
break;
}
i += count_contiguous_free_clusters(nb_clusters - i,
&l2_table[l2_index + i]);
if (i >= nb_clusters) {
break;
}
cluster_offset = be64_to_cpu(l2_table[l2_index + i]);
if ((cluster_offset & QCOW_OFLAG_COPIED) ||
(cluster_offset & QCOW_OFLAG_COMPRESSED))
break;
}
assert(i <= nb_clusters);
nb_clusters = i;
/*
* Check if there already is an AIO write request in flight which allocates
@@ -776,212 +776,71 @@ static int do_alloc_cluster_offset(BlockDriverState *bs, uint64_t guest_offset,
*/
QLIST_FOREACH(old_alloc, &s->cluster_allocs, next_in_flight) {
uint64_t start = guest_offset >> s->cluster_bits;
uint64_t end = start + *nb_clusters;
uint64_t old_start = old_alloc->offset >> s->cluster_bits;
uint64_t old_end = old_start + old_alloc->nb_clusters;
uint64_t end_offset = offset + nb_clusters * s->cluster_size;
uint64_t old_offset = old_alloc->offset;
uint64_t old_end_offset = old_alloc->offset +
old_alloc->nb_clusters * s->cluster_size;
if (end < old_start || start > old_end) {
if (end_offset < old_offset || offset > old_end_offset) {
/* No intersection */
} else {
if (start < old_start) {
if (offset < old_offset) {
/* Stop at the start of a running allocation */
*nb_clusters = old_start - start;
nb_clusters = (old_offset - offset) >> s->cluster_bits;
} else {
*nb_clusters = 0;
nb_clusters = 0;
}
if (*nb_clusters == 0) {
/* Wait for the dependency to complete. We need to recheck
* the free/allocated clusters when we continue. */
qemu_co_mutex_unlock(&s->lock);
qemu_co_queue_wait(&old_alloc->dependent_requests);
qemu_co_mutex_lock(&s->lock);
return -EAGAIN;
if (nb_clusters == 0) {
/* Set dependency and wait for a callback */
m->depends_on = old_alloc;
m->nb_clusters = 0;
*num = 0;
goto out_wait_dependency;
}
}
}
if (!*nb_clusters) {
if (!nb_clusters) {
abort();
}
/* Allocate new clusters */
trace_qcow2_cluster_alloc_phys(qemu_coroutine_self());
if (*host_offset == 0) {
int64_t cluster_offset =
qcow2_alloc_clusters(bs, *nb_clusters * s->cluster_size);
if (cluster_offset < 0) {
return cluster_offset;
}
*host_offset = cluster_offset;
return 0;
} else {
int ret = qcow2_alloc_clusters_at(bs, *host_offset, *nb_clusters);
if (ret < 0) {
return ret;
}
*nb_clusters = ret;
return 0;
}
}
QLIST_INSERT_HEAD(&s->cluster_allocs, m, next_in_flight);
/*
* alloc_cluster_offset
*
* For a given offset on the virtual disk, find the cluster offset in qcow2
* file. If the offset is not found, allocate a new cluster.
*
* If the cluster was already allocated, m->nb_clusters is set to 0 and
* other fields in m are meaningless.
*
* If the cluster is newly allocated, m->nb_clusters is set to the number of
* contiguous clusters that have been allocated. In this case, the other
* fields of m are valid and contain information about the first allocated
* cluster.
*
* If the request conflicts with another write request in flight, the coroutine
* is queued and will be reentered when the dependency has completed.
*
* Return 0 on success and -errno in error cases
*/
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int n_start, int n_end, int *num, QCowL2Meta *m)
{
BDRVQcowState *s = bs->opaque;
int l2_index, ret, sectors;
uint64_t *l2_table;
unsigned int nb_clusters, keep_clusters;
uint64_t cluster_offset;
/* allocate a new cluster */
trace_qcow2_alloc_clusters_offset(qemu_coroutine_self(), offset,
n_start, n_end);
/* Find L2 entry for the first involved cluster */
again:
ret = get_cluster_table(bs, offset, &l2_table, &l2_index);
if (ret < 0) {
return ret;
cluster_offset = qcow2_alloc_clusters(bs, nb_clusters * s->cluster_size);
if (cluster_offset < 0) {
ret = cluster_offset;
goto fail;
}
/*
* Calculate the number of clusters to look for. We stop at L2 table
* boundaries to keep things simple.
*/
nb_clusters = MIN(size_to_clusters(s, n_end << BDRV_SECTOR_BITS),
s->l2_size - l2_index);
/* save info needed for meta data update */
m->offset = offset;
m->n_start = n_start;
m->nb_clusters = nb_clusters;
cluster_offset = be64_to_cpu(l2_table[l2_index]);
/*
* Check how many clusters are already allocated and don't need COW, and how
* many need a new allocation.
*/
if (qcow2_get_cluster_type(cluster_offset) == QCOW2_CLUSTER_NORMAL
&& (cluster_offset & QCOW_OFLAG_COPIED))
{
/* We keep all QCOW_OFLAG_COPIED clusters */
keep_clusters =
count_contiguous_clusters(nb_clusters, s->cluster_size,
&l2_table[l2_index], 0,
QCOW_OFLAG_COPIED | QCOW_OFLAG_ZERO);
assert(keep_clusters <= nb_clusters);
nb_clusters -= keep_clusters;
} else {
keep_clusters = 0;
cluster_offset = 0;
}
if (nb_clusters > 0) {
/* For the moment, overwrite compressed clusters one by one */
uint64_t entry = be64_to_cpu(l2_table[l2_index + keep_clusters]);
if (entry & QCOW_OFLAG_COMPRESSED) {
nb_clusters = 1;
} else {
nb_clusters = count_cow_clusters(s, nb_clusters, l2_table,
l2_index + keep_clusters);
}
}
cluster_offset &= L2E_OFFSET_MASK;
/*
* The L2 table isn't used any more after this. As long as the cache works
* synchronously, it's important to release it before calling
* do_alloc_cluster_offset, which may yield if we need to wait for another
* request to complete. If we still had the reference, we could use up the
* whole cache with sleeping requests.
*/
out:
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
goto fail_put;
}
/* If there is something left to allocate, do that now */
*m = (QCowL2Meta) {
.cluster_offset = cluster_offset,
.nb_clusters = 0,
};
qemu_co_queue_init(&m->dependent_requests);
m->nb_available = MIN(nb_clusters << (s->cluster_bits - 9), n_end);
m->cluster_offset = cluster_offset;
if (nb_clusters > 0) {
uint64_t alloc_offset;
uint64_t alloc_cluster_offset;
uint64_t keep_bytes = keep_clusters * s->cluster_size;
/* Calculate start and size of allocation */
alloc_offset = offset + keep_bytes;
if (keep_clusters == 0) {
alloc_cluster_offset = 0;
} else {
alloc_cluster_offset = cluster_offset + keep_bytes;
}
/* Allocate, if necessary at a given offset in the image file */
ret = do_alloc_cluster_offset(bs, alloc_offset, &alloc_cluster_offset,
&nb_clusters);
if (ret == -EAGAIN) {
goto again;
} else if (ret < 0) {
goto fail;
}
/* save info needed for meta data update */
if (nb_clusters > 0) {
int requested_sectors = n_end - keep_clusters * s->cluster_sectors;
int avail_sectors = (keep_clusters + nb_clusters)
<< (s->cluster_bits - BDRV_SECTOR_BITS);
*m = (QCowL2Meta) {
.cluster_offset = keep_clusters == 0 ?
alloc_cluster_offset : cluster_offset,
.alloc_offset = alloc_cluster_offset,
.offset = alloc_offset,
.n_start = keep_clusters == 0 ? n_start : 0,
.nb_clusters = nb_clusters,
.nb_available = MIN(requested_sectors, avail_sectors),
};
qemu_co_queue_init(&m->dependent_requests);
QLIST_INSERT_HEAD(&s->cluster_allocs, m, next_in_flight);
}
}
/* Some cleanup work */
sectors = (keep_clusters + nb_clusters) << (s->cluster_bits - 9);
if (sectors > n_end) {
sectors = n_end;
}
assert(sectors > n_start);
*num = sectors - n_start;
*num = m->nb_available - n_start;
return 0;
out_wait_dependency:
return qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
fail:
if (m->nb_clusters > 0) {
QLIST_REMOVE(m, next_in_flight);
}
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
fail_put:
QLIST_REMOVE(m, next_in_flight);
return ret;
}
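The in-flight allocation check inside the function above boils down to an interval intersection in cluster units. A self-contained sketch of that decision, using half-open intervals and invented cluster numbers (the code above compares the ranges slightly differently at the boundaries, but the idea is the same):

/* Illustrative: how far may a new allocation [start, start+nb) proceed
 * before it runs into a running allocation [old_start, old_end)? */
#include <stdint.h>
#include <stdio.h>

static uint64_t clusters_before_conflict(uint64_t start, uint64_t nb_clusters,
                                         uint64_t old_start, uint64_t old_end)
{
    uint64_t end = start + nb_clusters;

    if (end <= old_start || start >= old_end) {
        return nb_clusters;          /* no intersection, keep everything */
    }
    if (start < old_start) {
        return old_start - start;    /* stop at the start of the running allocation */
    }
    return 0;                        /* fully blocked, must wait for the dependency */
}

int main(void)
{
    printf("%llu\n", (unsigned long long)clusters_before_conflict(10, 8, 14, 20)); /* 4 */
    printf("%llu\n", (unsigned long long)clusters_before_conflict(16, 8, 14, 20)); /* 0 */
    printf("%llu\n", (unsigned long long)clusters_before_conflict(30, 8, 14, 20)); /* 8 */
    return 0;
}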
@@ -1046,12 +905,12 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
unsigned int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table;
uint64_t l2_offset, *l2_table;
int l2_index;
int ret;
int i;
ret = get_cluster_table(bs, offset, &l2_table, &l2_index);
ret = get_cluster_table(bs, offset, &l2_table, &l2_offset, &l2_index);
if (ret < 0) {
return ret;
}
@@ -1063,7 +922,9 @@ static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
uint64_t old_offset;
old_offset = be64_to_cpu(l2_table[l2_index + i]);
if ((old_offset & L2E_OFFSET_MASK) == 0) {
old_offset &= ~QCOW_OFLAG_COPIED;
if (old_offset == 0) {
continue;
}
@@ -1116,75 +977,3 @@ int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
return 0;
}
/*
* This zeroes as many clusters of nb_clusters as possible at once (i.e.
* all clusters in the same L2 table) and returns the number of zeroed
* clusters.
*/
static int zero_single_l2(BlockDriverState *bs, uint64_t offset,
unsigned int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table;
int l2_index;
int ret;
int i;
ret = get_cluster_table(bs, offset, &l2_table, &l2_index);
if (ret < 0) {
return ret;
}
/* Limit nb_clusters to one L2 table */
nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
for (i = 0; i < nb_clusters; i++) {
uint64_t old_offset;
old_offset = be64_to_cpu(l2_table[l2_index + i]);
/* Update L2 entries */
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
if (old_offset & QCOW_OFLAG_COMPRESSED) {
l2_table[l2_index + i] = cpu_to_be64(QCOW_OFLAG_ZERO);
qcow2_free_any_clusters(bs, old_offset, 1);
} else {
l2_table[l2_index + i] |= cpu_to_be64(QCOW_OFLAG_ZERO);
}
}
ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
if (ret < 0) {
return ret;
}
return nb_clusters;
}
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors)
{
BDRVQcowState *s = bs->opaque;
unsigned int nb_clusters;
int ret;
/* The zero flag is only supported by version 3 and newer */
if (s->qcow_version < 3) {
return -ENOTSUP;
}
/* Each L2 table is handled by its own loop iteration */
nb_clusters = size_to_clusters(s, nb_sectors << BDRV_SECTOR_BITS);
while (nb_clusters > 0) {
ret = zero_single_l2(bs, offset, nb_clusters);
if (ret < 0) {
return ret;
}
nb_clusters -= ret;
offset += (ret * s->cluster_size);
}
return 0;
}
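The per-entry update done by zero_single_l2 above is just two bit operations on the 64-bit L2 entry; a standalone illustration using the flag values defined in the qcow2 header later in this diff:

/* Illustrative: how an L2 entry changes when its cluster is zeroed. */
#include <stdint.h>
#include <stdio.h>

#define QCOW_OFLAG_COMPRESSED (1ULL << 62)
#define QCOW_OFLAG_ZERO       (1ULL << 0)

static uint64_t zero_entry(uint64_t old_entry)
{
    if (old_entry & QCOW_OFLAG_COMPRESSED) {
        /* A compressed cluster is dropped and replaced by a bare zero entry
         * (the old cluster gets freed). */
        return QCOW_OFLAG_ZERO;
    }
    /* Normal or unallocated clusters simply gain the zero flag. */
    return old_entry | QCOW_OFLAG_ZERO;
}

int main(void)
{
    printf("%#llx\n", (unsigned long long)zero_entry(0x50000));                        /* 0x50001 */
    printf("%#llx\n", (unsigned long long)zero_entry(QCOW_OFLAG_COMPRESSED | 0x1234)); /* 0x1 */
    return 0;
}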

@@ -41,7 +41,7 @@ int qcow2_refcount_init(BlockDriverState *bs)
int ret, refcount_table_size2, i;
refcount_table_size2 = s->refcount_table_size * sizeof(uint64_t);
s->refcount_table = g_malloc(refcount_table_size2);
s->refcount_table = qemu_malloc(refcount_table_size2);
if (s->refcount_table_size > 0) {
BLKDBG_EVENT(bs->file, BLKDBG_REFTABLE_LOAD);
ret = bdrv_pread(bs->file, s->refcount_table_offset,
@@ -59,7 +59,7 @@ int qcow2_refcount_init(BlockDriverState *bs)
void qcow2_refcount_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
g_free(s->refcount_table);
qemu_free(s->refcount_table);
}
@@ -167,7 +167,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
if (refcount_table_index < s->refcount_table_size) {
uint64_t refcount_block_offset =
s->refcount_table[refcount_table_index] & REFT_OFFSET_MASK;
s->refcount_table[refcount_table_index];
/* If it's already there, we're done */
if (refcount_block_offset) {
@@ -323,8 +323,8 @@ static int alloc_refcount_block(BlockDriverState *bs,
uint64_t meta_offset = (blocks_used * refcount_block_clusters) *
s->cluster_size;
uint64_t table_offset = meta_offset + blocks_clusters * s->cluster_size;
uint16_t *new_blocks = g_malloc0(blocks_clusters * s->cluster_size);
uint64_t *new_table = g_malloc0(table_size * sizeof(uint64_t));
uint16_t *new_blocks = qemu_mallocz(blocks_clusters * s->cluster_size);
uint64_t *new_table = qemu_mallocz(table_size * sizeof(uint64_t));
assert(meta_offset >= (s->free_cluster_index * s->cluster_size));
@@ -349,7 +349,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
BLKDBG_EVENT(bs->file, BLKDBG_REFBLOCK_ALLOC_WRITE_BLOCKS);
ret = bdrv_pwrite_sync(bs->file, meta_offset, new_blocks,
blocks_clusters * s->cluster_size);
g_free(new_blocks);
qemu_free(new_blocks);
if (ret < 0) {
goto fail_table;
}
@@ -367,7 +367,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
}
for(i = 0; i < table_size; i++) {
be64_to_cpus(&new_table[i]);
cpu_to_be64s(&new_table[i]);
}
/* Hook up the new refcount table in the qcow2 header */
@@ -385,7 +385,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
uint64_t old_table_offset = s->refcount_table_offset;
uint64_t old_table_size = s->refcount_table_size;
g_free(s->refcount_table);
qemu_free(s->refcount_table);
s->refcount_table = new_table;
s->refcount_table_size = table_size;
s->refcount_table_offset = table_offset;
@@ -400,10 +400,10 @@ static int alloc_refcount_block(BlockDriverState *bs,
return ret;
}
return 0;
return new_block;
fail_table:
g_free(new_table);
qemu_free(new_table);
fail_block:
if (*refcount_block != NULL) {
qcow2_cache_put(bs, s->refcount_block_cache, (void**) refcount_block);
@@ -422,7 +422,7 @@ static int QEMU_WARN_UNUSED_RESULT update_refcount(BlockDriverState *bs,
int ret;
#ifdef DEBUG_ALLOC2
fprintf(stderr, "update_refcount: offset=%" PRId64 " size=%" PRId64 " addend=%d\n",
printf("update_refcount: offset=%" PRId64 " size=%" PRId64 " addend=%d\n",
offset, length, addend);
#endif
if (length < 0) {
@@ -556,7 +556,7 @@ retry:
}
}
#ifdef DEBUG_ALLOC2
fprintf(stderr, "alloc_clusters: size=%" PRId64 " -> %" PRId64 "\n",
printf("alloc_clusters: size=%" PRId64 " -> %" PRId64 "\n",
size,
(s->free_cluster_index - nb_clusters) << s->cluster_bits);
#endif
@@ -582,40 +582,6 @@ int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size)
return offset;
}
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
uint64_t cluster_index;
uint64_t old_free_cluster_index;
int i, refcount, ret;
/* Check how many free clusters there are */
cluster_index = offset >> s->cluster_bits;
for(i = 0; i < nb_clusters; i++) {
refcount = get_refcount(bs, cluster_index++);
if (refcount < 0) {
return refcount;
} else if (refcount != 0) {
break;
}
}
/* And then allocate them */
old_free_cluster_index = s->free_cluster_index;
s->free_cluster_index = cluster_index + i;
ret = update_refcount(bs, offset, i << s->cluster_bits, 1);
if (ret < 0) {
return ret;
}
s->free_cluster_index = old_free_cluster_index;
return i;
}
/* only used to allocate compressed sectors. We try to allocate
contiguous sectors. size must be <= cluster_size */
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
@@ -679,35 +645,32 @@ void qcow2_free_clusters(BlockDriverState *bs,
}
/*
* Free a cluster using its L2 entry (handles clusters of all types, e.g.
* normal cluster, compressed cluster, etc.)
* free_any_clusters
*
* free clusters according to their type: compressed or not
*
*/
void qcow2_free_any_clusters(BlockDriverState *bs,
uint64_t l2_entry, int nb_clusters)
uint64_t cluster_offset, int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
switch (qcow2_get_cluster_type(l2_entry)) {
case QCOW2_CLUSTER_COMPRESSED:
{
int nb_csectors;
nb_csectors = ((l2_entry >> s->csize_shift) &
s->csize_mask) + 1;
qcow2_free_clusters(bs,
(l2_entry & s->cluster_offset_mask) & ~511,
nb_csectors * 512);
}
break;
case QCOW2_CLUSTER_NORMAL:
qcow2_free_clusters(bs, l2_entry & L2E_OFFSET_MASK,
nb_clusters << s->cluster_bits);
break;
case QCOW2_CLUSTER_UNALLOCATED:
case QCOW2_CLUSTER_ZERO:
break;
default:
abort();
/* free the cluster */
if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
int nb_csectors;
nb_csectors = ((cluster_offset >> s->csize_shift) &
s->csize_mask) + 1;
qcow2_free_clusters(bs,
(cluster_offset & s->cluster_offset_mask) & ~511,
nb_csectors * 512);
return;
}
qcow2_free_clusters(bs, cluster_offset, nb_clusters << s->cluster_bits);
return;
}
@@ -717,6 +680,24 @@ void qcow2_free_any_clusters(BlockDriverState *bs,
void qcow2_create_refcount_update(QCowCreateState *s, int64_t offset,
int64_t size)
{
int refcount;
int64_t start, last, cluster_offset;
uint16_t *p;
start = offset & ~(s->cluster_size - 1);
last = (offset + size - 1) & ~(s->cluster_size - 1);
for(cluster_offset = start; cluster_offset <= last;
cluster_offset += s->cluster_size) {
p = &s->refcount_block[cluster_offset >> s->cluster_bits];
refcount = be16_to_cpu(*p);
refcount++;
*p = cpu_to_be16(refcount);
}
}
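qcow2_create_refcount_update only needs the first and last cluster covered by the byte range [offset, offset + size); that range arithmetic is easy to check in isolation (64 KiB clusters assumed for the example):

/* Illustrative: which clusters does a byte range touch? */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int64_t cluster_size = 64 * 1024;
    int64_t offset = 70000, size = 200000;

    int64_t start = offset & ~(cluster_size - 1);
    int64_t last  = (offset + size - 1) & ~(cluster_size - 1);

    for (int64_t c = start; c <= last; c += cluster_size) {
        printf("cluster at %lld gets its refcount bumped\n", (long long)c);
    }
    /* prints clusters at 65536, 131072, 196608 and 262144 */
    return 0;
}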
/* update the refcounts of snapshots and the copied flag */
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend)
@@ -737,13 +718,9 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
l2_table = NULL;
l1_table = NULL;
l1_size2 = l1_size * sizeof(uint64_t);
/* WARNING: qcow2_snapshot_goto relies on this function not using the
* l1_table_offset when it is the current s->l1_table_offset! Be careful
* when changing this! */
if (l1_table_offset != s->l1_table_offset) {
if (l1_size2 != 0) {
l1_table = g_malloc0(align_offset(l1_size2, 512));
l1_table = qemu_mallocz(align_offset(l1_size2, 512));
} else {
l1_table = NULL;
}
@@ -767,7 +744,7 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
l2_offset = l1_table[i];
if (l2_offset) {
old_l2_offset = l2_offset;
l2_offset &= L1E_OFFSET_MASK;
l2_offset &= ~QCOW_OFLAG_COPIED;
ret = qcow2_cache_get(bs, s->l2_table_cache, l2_offset,
(void**) &l2_table);
@@ -799,11 +776,10 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
/* compressed clusters are never modified */
refcount = 2;
} else {
uint64_t cluster_index = (offset & L2E_OFFSET_MASK) >> s->cluster_bits;
if (addend != 0) {
refcount = update_cluster_refcount(bs, cluster_index, addend);
refcount = update_cluster_refcount(bs, offset >> s->cluster_bits, addend);
} else {
refcount = get_refcount(bs, cluster_index);
refcount = get_refcount(bs, offset >> s->cluster_bits);
}
if (refcount < 0) {
@@ -861,8 +837,7 @@ fail:
qcow2_cache_set_writethrough(bs, s->refcount_block_cache,
old_refcount_writethrough);
/* Update L1 only if it isn't deleted anyway (addend = -1) */
if (addend >= 0 && l1_modified) {
if (l1_modified) {
for(i = 0; i < l1_size; i++)
cpu_to_be64s(&l1_table[i]);
if (bdrv_pwrite_sync(bs->file, l1_table_offset, l1_table,
@@ -872,7 +847,7 @@ fail:
be64_to_cpus(&l1_table[i]);
}
if (l1_allocated)
g_free(l1_table);
qemu_free(l1_table);
return ret;
}
@@ -941,91 +916,75 @@ static int check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
int check_copied)
{
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table, l2_entry;
uint64_t *l2_table, offset;
int i, l2_size, nb_csectors, refcount;
/* Read L2 table from disk */
l2_size = s->l2_size * sizeof(uint64_t);
l2_table = g_malloc(l2_size);
l2_table = qemu_malloc(l2_size);
if (bdrv_pread(bs->file, l2_offset, l2_table, l2_size) != l2_size)
goto fail;
/* Do the actual checks */
for(i = 0; i < s->l2_size; i++) {
l2_entry = be64_to_cpu(l2_table[i]);
switch (qcow2_get_cluster_type(l2_entry)) {
case QCOW2_CLUSTER_COMPRESSED:
/* Compressed clusters don't have QCOW_OFLAG_COPIED */
if (l2_entry & QCOW_OFLAG_COPIED) {
fprintf(stderr, "ERROR: cluster %" PRId64 ": "
"copied flag must never be set for compressed "
"clusters\n", l2_entry >> s->cluster_bits);
l2_entry &= ~QCOW_OFLAG_COPIED;
res->corruptions++;
}
/* Mark cluster as used */
nb_csectors = ((l2_entry >> s->csize_shift) &
s->csize_mask) + 1;
l2_entry &= s->cluster_offset_mask;
inc_refcounts(bs, res, refcount_table, refcount_table_size,
l2_entry & ~511, nb_csectors * 512);
break;
case QCOW2_CLUSTER_ZERO:
if ((l2_entry & L2E_OFFSET_MASK) == 0) {
break;
}
/* fall through */
case QCOW2_CLUSTER_NORMAL:
{
/* QCOW_OFLAG_COPIED must be set iff refcount == 1 */
uint64_t offset = l2_entry & L2E_OFFSET_MASK;
if (check_copied) {
refcount = get_refcount(bs, offset >> s->cluster_bits);
if (refcount < 0) {
fprintf(stderr, "Can't get refcount for offset %"
PRIx64 ": %s\n", l2_entry, strerror(-refcount));
goto fail;
offset = be64_to_cpu(l2_table[i]);
if (offset != 0) {
if (offset & QCOW_OFLAG_COMPRESSED) {
/* Compressed clusters don't have QCOW_OFLAG_COPIED */
if (offset & QCOW_OFLAG_COPIED) {
fprintf(stderr, "ERROR: cluster %" PRId64 ": "
"copied flag must never be set for compressed "
"clusters\n", offset >> s->cluster_bits);
offset &= ~QCOW_OFLAG_COPIED;
res->corruptions++;
}
if ((refcount == 1) != ((l2_entry & QCOW_OFLAG_COPIED) != 0)) {
fprintf(stderr, "ERROR OFLAG_COPIED: offset=%"
PRIx64 " refcount=%d\n", l2_entry, refcount);
/* Mark cluster as used */
nb_csectors = ((offset >> s->csize_shift) &
s->csize_mask) + 1;
offset &= s->cluster_offset_mask;
inc_refcounts(bs, res, refcount_table, refcount_table_size,
offset & ~511, nb_csectors * 512);
} else {
/* QCOW_OFLAG_COPIED must be set iff refcount == 1 */
if (check_copied) {
uint64_t entry = offset;
offset &= ~QCOW_OFLAG_COPIED;
refcount = get_refcount(bs, offset >> s->cluster_bits);
if (refcount < 0) {
fprintf(stderr, "Can't get refcount for offset %"
PRIx64 ": %s\n", entry, strerror(-refcount));
goto fail;
}
if ((refcount == 1) != ((entry & QCOW_OFLAG_COPIED) != 0)) {
fprintf(stderr, "ERROR OFLAG_COPIED: offset=%"
PRIx64 " refcount=%d\n", entry, refcount);
res->corruptions++;
}
}
/* Mark cluster as used */
offset &= ~QCOW_OFLAG_COPIED;
inc_refcounts(bs, res, refcount_table,refcount_table_size,
offset, s->cluster_size);
/* Correct offsets are cluster aligned */
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR offset=%" PRIx64 ": Cluster is not "
"properly aligned; L2 entry corrupted.\n", offset);
res->corruptions++;
}
}
/* Mark cluster as used */
inc_refcounts(bs, res, refcount_table,refcount_table_size,
offset, s->cluster_size);
/* Correct offsets are cluster aligned */
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR offset=%" PRIx64 ": Cluster is not "
"properly aligned; L2 entry corrupted.\n", offset);
res->corruptions++;
}
break;
}
case QCOW2_CLUSTER_UNALLOCATED:
break;
default:
abort();
}
}
g_free(l2_table);
qemu_free(l2_table);
return 0;
fail:
fprintf(stderr, "ERROR: I/O error in check_refcounts_l2\n");
g_free(l2_table);
qemu_free(l2_table);
return -EIO;
}
@@ -1058,7 +1017,7 @@ static int check_refcounts_l1(BlockDriverState *bs,
if (l1_size2 == 0) {
l1_table = NULL;
} else {
l1_table = g_malloc(l1_size2);
l1_table = qemu_malloc(l1_size2);
if (bdrv_pread(bs->file, l1_table_offset,
l1_table, l1_size2) != l1_size2)
goto fail;
@@ -1087,7 +1046,7 @@ static int check_refcounts_l1(BlockDriverState *bs,
}
/* Mark L2 table as used */
l2_offset &= L1E_OFFSET_MASK;
l2_offset &= ~QCOW_OFLAG_COPIED;
inc_refcounts(bs, res, refcount_table, refcount_table_size,
l2_offset, s->cluster_size);
@@ -1106,13 +1065,13 @@ static int check_refcounts_l1(BlockDriverState *bs,
}
}
}
g_free(l1_table);
qemu_free(l1_table);
return 0;
fail:
fprintf(stderr, "ERROR: I/O error in check_refcounts_l1\n");
res->check_errors++;
g_free(l1_table);
qemu_free(l1_table);
return -EIO;
}
@@ -1133,7 +1092,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res)
size = bdrv_getlength(bs->file);
nb_clusters = size_to_clusters(s, size);
refcount_table = g_malloc0(nb_clusters * sizeof(uint16_t));
refcount_table = qemu_mallocz(nb_clusters * sizeof(uint16_t));
/* header */
inc_refcounts(bs, res, refcount_table, nb_clusters,
@@ -1219,7 +1178,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res)
ret = 0;
fail:
g_free(refcount_table);
qemu_free(refcount_table);
return ret;
}

@@ -26,7 +26,7 @@
#include "block_int.h"
#include "block/qcow2.h"
typedef struct QEMU_PACKED QCowSnapshotHeader {
typedef struct __attribute__((packed)) QCowSnapshotHeader {
/* header is 8 byte aligned */
uint64_t l1_table_offset;
@@ -46,21 +46,16 @@ typedef struct QEMU_PACKED QCowSnapshotHeader {
/* name follows */
} QCowSnapshotHeader;
typedef struct QEMU_PACKED QCowSnapshotExtraData {
uint64_t vm_state_size_large;
uint64_t disk_size;
} QCowSnapshotExtraData;
void qcow2_free_snapshots(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
int i;
for(i = 0; i < s->nb_snapshots; i++) {
g_free(s->snapshots[i].name);
g_free(s->snapshots[i].id_str);
qemu_free(s->snapshots[i].name);
qemu_free(s->snapshots[i].id_str);
}
g_free(s->snapshots);
qemu_free(s->snapshots);
s->snapshots = NULL;
s->nb_snapshots = 0;
}
@@ -69,12 +64,10 @@ int qcow2_read_snapshots(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
QCowSnapshot *sn;
int i, id_str_size, name_size;
int64_t offset;
uint32_t extra_data_size;
int ret;
if (!s->nb_snapshots) {
s->snapshots = NULL;
@@ -83,16 +76,11 @@ int qcow2_read_snapshots(BlockDriverState *bs)
}
offset = s->snapshots_offset;
s->snapshots = g_malloc0(s->nb_snapshots * sizeof(QCowSnapshot));
s->snapshots = qemu_mallocz(s->nb_snapshots * sizeof(QCowSnapshot));
for(i = 0; i < s->nb_snapshots; i++) {
/* Read statically sized part of the snapshot header */
offset = align_offset(offset, 8);
ret = bdrv_pread(bs->file, offset, &h, sizeof(h));
if (ret < 0) {
if (bdrv_pread(bs->file, offset, &h, sizeof(h)) != sizeof(h))
goto fail;
}
offset += sizeof(h);
sn = s->snapshots + i;
sn->l1_table_offset = be64_to_cpu(h.l1_table_offset);
@@ -106,49 +94,25 @@ int qcow2_read_snapshots(BlockDriverState *bs)
id_str_size = be16_to_cpu(h.id_str_size);
name_size = be16_to_cpu(h.name_size);
/* Read extra data */
ret = bdrv_pread(bs->file, offset, &extra,
MIN(sizeof(extra), extra_data_size));
if (ret < 0) {
goto fail;
}
offset += extra_data_size;
if (extra_data_size >= 8) {
sn->vm_state_size = be64_to_cpu(extra.vm_state_size_large);
}
if (extra_data_size >= 16) {
sn->disk_size = be64_to_cpu(extra.disk_size);
} else {
sn->disk_size = bs->total_sectors * BDRV_SECTOR_SIZE;
}
/* Read snapshot ID */
sn->id_str = g_malloc(id_str_size + 1);
ret = bdrv_pread(bs->file, offset, sn->id_str, id_str_size);
if (ret < 0) {
sn->id_str = qemu_malloc(id_str_size + 1);
if (bdrv_pread(bs->file, offset, sn->id_str, id_str_size) != id_str_size)
goto fail;
}
offset += id_str_size;
sn->id_str[id_str_size] = '\0';
/* Read snapshot name */
sn->name = g_malloc(name_size + 1);
ret = bdrv_pread(bs->file, offset, sn->name, name_size);
if (ret < 0) {
sn->name = qemu_malloc(name_size + 1);
if (bdrv_pread(bs->file, offset, sn->name, name_size) != name_size)
goto fail;
}
offset += name_size;
sn->name[name_size] = '\0';
}
s->snapshots_size = offset - s->snapshots_offset;
return 0;
fail:
fail:
qcow2_free_snapshots(bs);
return ret;
return -1;
}
/* add a new list of snapshots at the end of the file */
@@ -157,14 +121,10 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
int i, name_size, id_str_size, snapshots_size;
struct {
uint32_t nb_snapshots;
uint64_t snapshots_offset;
} QEMU_PACKED header_data;
uint64_t data64;
uint32_t data32;
int64_t offset, snapshots_offset;
int ret;
/* compute the size of the snapshots */
offset = 0;
@@ -172,13 +132,11 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
sn = s->snapshots + i;
offset = align_offset(offset, 8);
offset += sizeof(h);
offset += sizeof(extra);
offset += strlen(sn->id_str);
offset += strlen(sn->name);
}
snapshots_size = offset;
/* Allocate space for the new snapshot list */
snapshots_offset = qcow2_alloc_clusters(bs, snapshots_size);
bdrv_flush(bs->file);
offset = snapshots_offset;
@@ -186,86 +144,49 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
return offset;
}
/* Write all snapshots to the new list */
for(i = 0; i < s->nb_snapshots; i++) {
sn = s->snapshots + i;
memset(&h, 0, sizeof(h));
h.l1_table_offset = cpu_to_be64(sn->l1_table_offset);
h.l1_size = cpu_to_be32(sn->l1_size);
/* If it doesn't fit in 32 bits, older implementations should treat it
* as a disk-only snapshot rather than truncate the VM state */
if (sn->vm_state_size <= 0xffffffff) {
h.vm_state_size = cpu_to_be32(sn->vm_state_size);
}
h.vm_state_size = cpu_to_be32(sn->vm_state_size);
h.date_sec = cpu_to_be32(sn->date_sec);
h.date_nsec = cpu_to_be32(sn->date_nsec);
h.vm_clock_nsec = cpu_to_be64(sn->vm_clock_nsec);
h.extra_data_size = cpu_to_be32(sizeof(extra));
memset(&extra, 0, sizeof(extra));
extra.vm_state_size_large = cpu_to_be64(sn->vm_state_size);
extra.disk_size = cpu_to_be64(sn->disk_size);
id_str_size = strlen(sn->id_str);
name_size = strlen(sn->name);
h.id_str_size = cpu_to_be16(id_str_size);
h.name_size = cpu_to_be16(name_size);
offset = align_offset(offset, 8);
ret = bdrv_pwrite(bs->file, offset, &h, sizeof(h));
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, offset, &h, sizeof(h)) < 0)
goto fail;
}
offset += sizeof(h);
ret = bdrv_pwrite(bs->file, offset, &extra, sizeof(extra));
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, offset, sn->id_str, id_str_size) < 0)
goto fail;
}
offset += sizeof(extra);
ret = bdrv_pwrite(bs->file, offset, sn->id_str, id_str_size);
if (ret < 0) {
goto fail;
}
offset += id_str_size;
ret = bdrv_pwrite(bs->file, offset, sn->name, name_size);
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, offset, sn->name, name_size) < 0)
goto fail;
}
offset += name_size;
}
/*
* Update the header to point to the new snapshot table. This requires the
* new table and its refcounts to be stable on disk.
*/
ret = bdrv_flush(bs);
if (ret < 0) {
/* update the various header fields */
data64 = cpu_to_be64(snapshots_offset);
if (bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, snapshots_offset),
&data64, sizeof(data64)) < 0)
goto fail;
}
QEMU_BUILD_BUG_ON(offsetof(QCowHeader, snapshots_offset) !=
offsetof(QCowHeader, nb_snapshots) + sizeof(header_data.nb_snapshots));
header_data.nb_snapshots = cpu_to_be32(s->nb_snapshots);
header_data.snapshots_offset = cpu_to_be64(snapshots_offset);
ret = bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, nb_snapshots),
&header_data, sizeof(header_data));
if (ret < 0) {
data32 = cpu_to_be32(s->nb_snapshots);
if (bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, nb_snapshots),
&data32, sizeof(data32)) < 0)
goto fail;
}
/* free the old snapshot table */
qcow2_free_clusters(bs, s->snapshots_offset, s->snapshots_size);
s->snapshots_offset = snapshots_offset;
s->snapshots_size = snapshots_size;
return 0;
fail:
return ret;
fail:
return -1;
}
static void find_new_snapshot_id(BlockDriverState *bs,
@@ -315,107 +236,80 @@ static int find_snapshot_by_id_or_name(BlockDriverState *bs, const char *name)
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot *new_snapshot_list = NULL;
QCowSnapshot *old_snapshot_list = NULL;
QCowSnapshot sn1, *sn = &sn1;
QCowSnapshot *snapshots1, sn1, *sn = &sn1;
int i, ret;
uint64_t *l1_table = NULL;
int64_t l1_table_offset;
memset(sn, 0, sizeof(*sn));
/* Generate an ID if it wasn't passed */
if (sn_info->id_str[0] == '\0') {
/* compute a new id */
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
}
/* Check that the ID is unique */
if (find_snapshot_by_id(bs, sn_info->id_str) >= 0) {
return -EEXIST;
}
/* check that the ID is unique */
if (find_snapshot_by_id(bs, sn_info->id_str) >= 0)
return -ENOENT;
/* Populate sn with passed data */
sn->id_str = g_strdup(sn_info->id_str);
sn->name = g_strdup(sn_info->name);
sn->disk_size = bs->total_sectors * BDRV_SECTOR_SIZE;
sn->id_str = qemu_strdup(sn_info->id_str);
if (!sn->id_str)
goto fail;
sn->name = qemu_strdup(sn_info->name);
if (!sn->name)
goto fail;
sn->vm_state_size = sn_info->vm_state_size;
sn->date_sec = sn_info->date_sec;
sn->date_nsec = sn_info->date_nsec;
sn->vm_clock_nsec = sn_info->vm_clock_nsec;
/* Allocate the L1 table of the snapshot and copy the current one there. */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 1);
if (ret < 0)
goto fail;
/* create the L1 table of the snapshot */
l1_table_offset = qcow2_alloc_clusters(bs, s->l1_size * sizeof(uint64_t));
if (l1_table_offset < 0) {
ret = l1_table_offset;
goto fail;
}
bdrv_flush(bs->file);
sn->l1_table_offset = l1_table_offset;
sn->l1_size = s->l1_size;
l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
if (s->l1_size != 0) {
l1_table = qemu_malloc(s->l1_size * sizeof(uint64_t));
} else {
l1_table = NULL;
}
for(i = 0; i < s->l1_size; i++) {
l1_table[i] = cpu_to_be64(s->l1_table[i]);
}
ret = bdrv_pwrite(bs->file, sn->l1_table_offset, l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, sn->l1_table_offset,
l1_table, s->l1_size * sizeof(uint64_t)) < 0)
goto fail;
}
g_free(l1_table);
qemu_free(l1_table);
l1_table = NULL;
/*
* Increase the refcounts of all clusters and make sure everything is
* stable on disk before updating the snapshot table to contain a pointer
* to the new L1 table.
*/
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 1);
if (ret < 0) {
goto fail;
}
ret = bdrv_flush(bs);
if (ret < 0) {
goto fail;
}
/* Append the new snapshot to the snapshot list */
new_snapshot_list = g_malloc((s->nb_snapshots + 1) * sizeof(QCowSnapshot));
snapshots1 = qemu_malloc((s->nb_snapshots + 1) * sizeof(QCowSnapshot));
if (s->snapshots) {
memcpy(new_snapshot_list, s->snapshots,
s->nb_snapshots * sizeof(QCowSnapshot));
old_snapshot_list = s->snapshots;
memcpy(snapshots1, s->snapshots, s->nb_snapshots * sizeof(QCowSnapshot));
qemu_free(s->snapshots);
}
s->snapshots = new_snapshot_list;
s->snapshots = snapshots1;
s->snapshots[s->nb_snapshots++] = *sn;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
g_free(s->snapshots);
s->snapshots = old_snapshot_list;
if (qcow2_write_snapshots(bs) < 0)
goto fail;
}
g_free(old_snapshot_list);
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result);
}
qcow2_check_refcounts(bs);
#endif
return 0;
fail:
g_free(sn->id_str);
g_free(sn->name);
g_free(l1_table);
return ret;
fail:
qemu_free(sn->name);
qemu_free(l1_table);
return -1;
}
/* copy the snapshot 'snapshot_name' into the current disk image */
@@ -425,165 +319,78 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
QCowSnapshot *sn;
int i, snapshot_index;
int cur_l1_bytes, sn_l1_bytes;
int ret;
uint64_t *sn_l1_table = NULL;
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
if (snapshot_index < 0) {
if (snapshot_index < 0)
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
if (sn->disk_size != bs->total_sectors * BDRV_SECTOR_SIZE) {
error_report("qcow2: Loading snapshots with different disk "
"size is not implemented");
ret = -ENOTSUP;
if (qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, -1) < 0)
goto fail;
}
/*
* Make sure that the current L1 table is big enough to contain the whole
* L1 table of the snapshot. If the snapshot L1 table is smaller, the
* current one must be padded with zeros.
*/
ret = qcow2_grow_l1_table(bs, sn->l1_size, true);
if (ret < 0) {
if (qcow2_grow_l1_table(bs, sn->l1_size, true) < 0)
goto fail;
}
cur_l1_bytes = s->l1_size * sizeof(uint64_t);
sn_l1_bytes = sn->l1_size * sizeof(uint64_t);
/*
* Copy the snapshot L1 table to the current L1 table.
*
* Before overwriting the old current L1 table on disk, make sure to
* increase all refcounts for the clusters referenced by the new one.
* Decrease the refcount referenced by the old one only when the L1
* table is overwritten.
*/
sn_l1_table = g_malloc0(cur_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset, sn_l1_table, sn_l1_bytes);
if (ret < 0) {
goto fail;
if (cur_l1_bytes > sn_l1_bytes) {
memset(s->l1_table + sn->l1_size, 0, cur_l1_bytes - sn_l1_bytes);
}
ret = qcow2_update_snapshot_refcount(bs, sn->l1_table_offset,
sn->l1_size, 1);
if (ret < 0) {
/* copy the snapshot l1 table to the current l1 table */
if (bdrv_pread(bs->file, sn->l1_table_offset,
s->l1_table, sn_l1_bytes) < 0)
goto fail;
}
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset, sn_l1_table,
cur_l1_bytes);
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, s->l1_table_offset,
s->l1_table, cur_l1_bytes) < 0)
goto fail;
}
/*
* Decrease refcount of clusters of current L1 table.
*
* At this point, the in-memory s->l1_table points to the old L1 table,
* whereas on disk we already have the new one.
*
* qcow2_update_snapshot_refcount special cases the current L1 table to use
* the in-memory data instead of really using the offset to load a new one,
* which is why this works.
*/
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset,
s->l1_size, -1);
/*
* Now update the in-memory L1 table to be in sync with the on-disk one. We
* need to do this even if updating refcounts failed.
*/
for(i = 0;i < s->l1_size; i++) {
s->l1_table[i] = be64_to_cpu(sn_l1_table[i]);
be64_to_cpus(&s->l1_table[i]);
}
if (ret < 0) {
if (qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 1) < 0)
goto fail;
}
g_free(sn_l1_table);
sn_l1_table = NULL;
/*
* Update QCOW_OFLAG_COPIED in the active L1 table (it may have changed
* when we decreased the refcount of the old snapshot).
*/
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
goto fail;
}
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result);
}
qcow2_check_refcounts(bs);
#endif
return 0;
fail:
g_free(sn_l1_table);
return ret;
fail:
return -EIO;
}
int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot sn;
QCowSnapshot *sn;
int snapshot_index, ret;
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
if (snapshot_index < 0) {
if (snapshot_index < 0)
return -ENOENT;
}
sn = s->snapshots[snapshot_index];
sn = &s->snapshots[snapshot_index];
/* Remove it from the snapshot list */
memmove(s->snapshots + snapshot_index,
s->snapshots + snapshot_index + 1,
(s->nb_snapshots - snapshot_index - 1) * sizeof(sn));
ret = qcow2_update_snapshot_refcount(bs, sn->l1_table_offset, sn->l1_size, -1);
if (ret < 0)
return ret;
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0)
return ret;
qcow2_free_clusters(bs, sn->l1_table_offset, sn->l1_size * sizeof(uint64_t));
qemu_free(sn->id_str);
qemu_free(sn->name);
memmove(sn, sn + 1, (s->nb_snapshots - snapshot_index - 1) * sizeof(*sn));
s->nb_snapshots--;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
/* XXX: restore snapshot if error ? */
return ret;
}
/*
* The snapshot is now unused, clean up. If we fail after this point, we
* won't recover but just leak clusters.
*/
g_free(sn.id_str);
g_free(sn.name);
/*
* Now decrease the refcounts of clusters referenced by the snapshot and
* free the L1 table.
*/
ret = qcow2_update_snapshot_refcount(bs, sn.l1_table_offset,
sn.l1_size, -1);
if (ret < 0) {
return ret;
}
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t));
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
return ret;
}
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result);
}
qcow2_check_refcounts(bs);
#endif
return 0;
}
@@ -600,7 +407,7 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
return s->nb_snapshots;
}
sn_tab = g_malloc0(s->nb_snapshots * sizeof(QEMUSnapshotInfo));
sn_tab = qemu_mallocz(s->nb_snapshots * sizeof(QEMUSnapshotInfo));
for(i = 0; i < s->nb_snapshots; i++) {
sn_info = sn_tab + i;
sn = s->snapshots + i;
@@ -619,42 +426,32 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name)
{
int i, snapshot_index;
int i, snapshot_index, l1_size2;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
uint64_t *new_l1_table;
int new_l1_bytes;
int ret;
assert(bs->read_only);
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_name);
if (snapshot_index < 0) {
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
/* Allocate and read in the snapshot's L1 table */
new_l1_bytes = s->l1_size * sizeof(uint64_t);
new_l1_table = g_malloc0(align_offset(new_l1_bytes, 512));
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
g_free(new_l1_table);
return ret;
s->l1_size = sn->l1_size;
l1_size2 = s->l1_size * sizeof(uint64_t);
if (s->l1_table != NULL) {
qemu_free(s->l1_table);
}
/* Switch the L1 table */
g_free(s->l1_table);
s->l1_size = sn->l1_size;
s->l1_table_offset = sn->l1_table_offset;
s->l1_table = new_l1_table;
s->l1_table = qemu_mallocz(align_offset(l1_size2, 512));
if (bdrv_pread(bs->file, sn->l1_table_offset,
s->l1_table, l1_size2) != l1_size2) {
return -1;
}
for(i = 0;i < s->l1_size; i++) {
be64_to_cpus(&s->l1_table[i]);
}
return 0;
}

File diff suppressed because it is too large

@@ -26,13 +26,13 @@
#define BLOCK_QCOW2_H
#include "aes.h"
#include "qemu-coroutine.h"
//#define DEBUG_ALLOC
//#define DEBUG_ALLOC2
//#define DEBUG_EXT
#define QCOW_MAGIC (('Q' << 24) | ('F' << 16) | ('I' << 8) | 0xfb)
#define QCOW_VERSION 2
#define QCOW_CRYPT_NONE 0
#define QCOW_CRYPT_AES 1
@@ -43,8 +43,6 @@
#define QCOW_OFLAG_COPIED (1LL << 63)
/* indicate that the cluster is compressed (they never have the copied flag) */
#define QCOW_OFLAG_COMPRESSED (1LL << 62)
/* The cluster reads as all zeros */
#define QCOW_OFLAG_ZERO (1LL << 0)
#define REFCOUNT_SHIFT 1 /* refcount size is 2 bytes */
@@ -72,14 +70,6 @@ typedef struct QCowHeader {
uint32_t refcount_table_clusters;
uint32_t nb_snapshots;
uint64_t snapshots_offset;
/* The following fields are only valid for version >= 3 */
uint64_t incompatible_features;
uint64_t compatible_features;
uint64_t autoclear_features;
uint32_t refcount_order;
uint32_t header_length;
} QCowHeader;
typedef struct QCowSnapshot {
@@ -87,8 +77,7 @@ typedef struct QCowSnapshot {
uint32_t l1_size;
char *id_str;
char *name;
uint64_t disk_size;
uint64_t vm_state_size;
uint32_t vm_state_size;
uint32_t date_sec;
uint32_t date_nsec;
uint64_t vm_clock_nsec;
@@ -97,25 +86,6 @@ typedef struct QCowSnapshot {
struct Qcow2Cache;
typedef struct Qcow2Cache Qcow2Cache;
typedef struct Qcow2UnknownHeaderExtension {
uint32_t magic;
uint32_t len;
QLIST_ENTRY(Qcow2UnknownHeaderExtension) next;
uint8_t data[];
} Qcow2UnknownHeaderExtension;
enum {
QCOW2_FEAT_TYPE_INCOMPATIBLE = 0,
QCOW2_FEAT_TYPE_COMPATIBLE = 1,
QCOW2_FEAT_TYPE_AUTOCLEAR = 2,
};
typedef struct Qcow2Feature {
uint8_t type;
uint8_t bit;
char name[46];
} QEMU_PACKED Qcow2Feature;
typedef struct BDRVQcowState {
int cluster_bits;
int cluster_size;
@@ -144,8 +114,6 @@ typedef struct BDRVQcowState {
int64_t free_cluster_index;
int64_t free_byte_offset;
CoMutex lock;
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
@@ -154,17 +122,6 @@ typedef struct BDRVQcowState {
int snapshots_size;
int nb_snapshots;
QCowSnapshot *snapshots;
int flags;
int qcow_version;
uint64_t incompatible_features;
uint64_t compatible_features;
uint64_t autoclear_features;
size_t unknown_header_fields_size;
void* unknown_header_fields;
QLIST_HEAD(, Qcow2UnknownHeaderExtension) unknown_header_ext;
} BDRVQcowState;
/* XXX: use std qcow open function ? */
@@ -185,28 +142,15 @@ typedef struct QCowL2Meta
{
uint64_t offset;
uint64_t cluster_offset;
uint64_t alloc_offset;
int n_start;
int nb_available;
int nb_clusters;
CoQueue dependent_requests;
struct QCowL2Meta *depends_on;
QLIST_HEAD(QCowAioDependencies, QCowAIOCB) dependent_requests;
QLIST_ENTRY(QCowL2Meta) next_in_flight;
} QCowL2Meta;
enum {
QCOW2_CLUSTER_UNALLOCATED,
QCOW2_CLUSTER_NORMAL,
QCOW2_CLUSTER_COMPRESSED,
QCOW2_CLUSTER_ZERO
};
#define L1E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_COMPRESSED_OFFSET_SIZE_MASK 0x3fffffffffffffffULL
#define REFT_OFFSET_MASK 0xffffffffffffff00ULL
static inline int size_to_clusters(BDRVQcowState *s, int64_t size)
{
return (size + (s->cluster_size - 1)) >> s->cluster_bits;
@@ -224,40 +168,26 @@ static inline int64_t align_offset(int64_t offset, int n)
return offset;
}
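The two rounding helpers in this header are worth a quick worked example. The sketch below assumes 64 KiB clusters and uses the conventional power-of-two round-up for align_offset, since its full body is not visible in this hunk:

/* Illustrative: cluster rounding in the spirit of size_to_clusters/align_offset. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int cluster_bits = 16;
    const int64_t cluster_size = 1 << cluster_bits;   /* 64 KiB */

    /* size_to_clusters: round a byte count up to whole clusters */
    int64_t size = 65537;                              /* one byte past a cluster */
    int64_t clusters = (size + cluster_size - 1) >> cluster_bits;
    printf("clusters = %lld\n", (long long)clusters);  /* prints 2 */

    /* align_offset: round an offset up to a multiple of n (n a power of two) */
    int64_t offset = 1000, n = 512;
    offset = (offset + n - 1) & ~(n - 1);
    printf("offset   = %lld\n", (long long)offset);    /* prints 1024 */
    return 0;
}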
static inline int qcow2_get_cluster_type(uint64_t l2_entry)
{
if (l2_entry & QCOW_OFLAG_COMPRESSED) {
return QCOW2_CLUSTER_COMPRESSED;
} else if (l2_entry & QCOW_OFLAG_ZERO) {
return QCOW2_CLUSTER_ZERO;
} else if (!(l2_entry & L2E_OFFSET_MASK)) {
return QCOW2_CLUSTER_UNALLOCATED;
} else {
return QCOW2_CLUSTER_NORMAL;
}
}
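Because qcow2_get_cluster_type is visible in full here, it can be exercised standalone together with the flag and mask definitions from this header; a minimal driver:

/* Illustrative: classify a few hand-built L2 entries with the logic above. */
#include <stdint.h>
#include <stdio.h>

#define QCOW_OFLAG_COPIED     (1ULL << 63)
#define QCOW_OFLAG_COMPRESSED (1ULL << 62)
#define QCOW_OFLAG_ZERO       (1ULL << 0)
#define L2E_OFFSET_MASK       0x00ffffffffffff00ULL

enum {
    QCOW2_CLUSTER_UNALLOCATED,
    QCOW2_CLUSTER_NORMAL,
    QCOW2_CLUSTER_COMPRESSED,
    QCOW2_CLUSTER_ZERO
};

static int get_cluster_type(uint64_t l2_entry)
{
    if (l2_entry & QCOW_OFLAG_COMPRESSED) {
        return QCOW2_CLUSTER_COMPRESSED;
    } else if (l2_entry & QCOW_OFLAG_ZERO) {
        return QCOW2_CLUSTER_ZERO;
    } else if (!(l2_entry & L2E_OFFSET_MASK)) {
        return QCOW2_CLUSTER_UNALLOCATED;
    } else {
        return QCOW2_CLUSTER_NORMAL;
    }
}

int main(void)
{
    printf("%d\n", get_cluster_type(0));                              /* 0: unallocated */
    printf("%d\n", get_cluster_type(0x50000 | QCOW_OFLAG_COPIED));    /* 1: normal */
    printf("%d\n", get_cluster_type(QCOW_OFLAG_COMPRESSED | 0x1234)); /* 2: compressed */
    printf("%d\n", get_cluster_type(QCOW_OFLAG_ZERO));                /* 3: zero */
    return 0;
}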
// FIXME Need qcow2_ prefix to global functions
/* qcow2.c functions */
int qcow2_backing_read1(BlockDriverState *bs, QEMUIOVector *qiov,
int64_t sector_num, int nb_sectors);
int qcow2_update_header(BlockDriverState *bs);
/* qcow2-refcount.c functions */
int qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size);
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters);
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size);
void qcow2_free_any_clusters(BlockDriverState *bs,
uint64_t cluster_offset, int nb_clusters);
void qcow2_create_refcount_update(QCowCreateState *s, int64_t offset,
int64_t size);
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend);
@@ -283,7 +213,6 @@ uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors);
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors);
/* qcow2-snapshot.c functions */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);

@@ -68,7 +68,6 @@ static unsigned int qed_check_l2_table(QEDCheck *check, QEDTable *table)
{
BDRVQEDState *s = check->s;
unsigned int i, num_invalid = 0;
uint64_t last_offset = 0;
for (i = 0; i < s->table_nelems; i++) {
uint64_t offset = table->offsets[i];
@@ -77,11 +76,6 @@ static unsigned int qed_check_l2_table(QEDCheck *check, QEDTable *table)
qed_offset_is_zero_cluster(offset)) {
continue;
}
check->result->bfi.allocated_clusters++;
if (last_offset && (last_offset + s->header.cluster_size != offset)) {
check->result->bfi.fragmented_clusters++;
}
last_offset = offset;
/* Detect invalid cluster offset */
if (!qed_check_cluster_offset(s, offset)) {
@@ -203,18 +197,15 @@ int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
};
int ret;
check.used_clusters = g_malloc0(((check.nclusters + 31) / 32) *
check.used_clusters = qemu_mallocz(((check.nclusters + 31) / 32) *
sizeof(check.used_clusters[0]));
check.result->bfi.total_clusters =
(s->header.image_size + s->header.cluster_size - 1) /
s->header.cluster_size;
ret = qed_check_l1_table(&check, s->l1_table);
if (ret == 0) {
/* Only check for leaks if entire image was scanned successfully */
qed_check_for_leaks(&check);
}
g_free(check.used_clusters);
qemu_free(check.used_clusters);
return ret;
}
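The used_clusters array allocated above is sized as a bitmap with one bit per cluster, packed into 32-bit words. The marking helpers are not part of this hunk, but under that assumption they would look roughly like this sketch:
/* sketch only -- assumes used_clusters is an array of 32-bit words */
static void mark_cluster_used(uint32_t *used_clusters, unsigned int index)
{
    used_clusters[index / 32] |= 1u << (index % 32);
}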


@@ -108,7 +108,7 @@ static void qed_find_cluster_cb(void *opaque, int ret)
out:
find_cluster_cb->cb(find_cluster_cb->opaque, ret, offset, len);
g_free(find_cluster_cb);
qemu_free(find_cluster_cb);
}
/**
@@ -152,7 +152,7 @@ void qed_find_cluster(BDRVQEDState *s, QEDRequest *request, uint64_t pos,
return;
}
find_cluster_cb = g_malloc(sizeof(*find_cluster_cb));
find_cluster_cb = qemu_malloc(sizeof(*find_cluster_cb));
find_cluster_cb->s = s;
find_cluster_cb->pos = pos;
find_cluster_cb->len = len;


@@ -15,7 +15,7 @@
void *gencb_alloc(size_t len, BlockDriverCompletionFunc *cb, void *opaque)
{
GenericCB *gencb = g_malloc(len);
GenericCB *gencb = qemu_malloc(len);
gencb->cb = cb;
gencb->opaque = opaque;
return gencb;
@@ -27,6 +27,6 @@ void gencb_complete(void *opaque, int ret)
BlockDriverCompletionFunc *cb = gencb->cb;
void *user_opaque = gencb->opaque;
g_free(gencb);
qemu_free(gencb);
cb(user_opaque, ret);
}
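gencb_alloc() and gencb_complete() only work if the caller embeds a GenericCB as the very first member of its own callback-state struct, so the single allocation of len bytes can be cast both ways and freed in one go. A minimal sketch of such a caller-side struct (names hypothetical):
typedef struct {
    GenericCB gencb;     /* must come first so the GenericCB cast in gencb_complete() is valid */
    int my_extra_state;  /* caller-specific fields follow */
} MyOpCB;
/* usage: MyOpCB *op = gencb_alloc(sizeof(*op), cb, opaque); */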


@@ -74,7 +74,7 @@ void qed_free_l2_cache(L2TableCache *l2_cache)
QTAILQ_FOREACH_SAFE(entry, &l2_cache->entries, node, next_entry) {
qemu_vfree(entry->table);
g_free(entry);
qemu_free(entry);
}
}
@@ -89,7 +89,7 @@ CachedL2Table *qed_alloc_l2_cache_entry(L2TableCache *l2_cache)
{
CachedL2Table *entry;
entry = g_malloc0(sizeof(*entry));
entry = qemu_mallocz(sizeof(*entry));
entry->ref++;
trace_qed_alloc_l2_cache_entry(l2_cache, entry);
@@ -111,7 +111,7 @@ void qed_unref_l2_cache_entry(CachedL2Table *entry)
trace_qed_unref_l2_cache_entry(entry, entry->ref);
if (entry->ref == 0) {
qemu_vfree(entry->table);
g_free(entry);
qemu_free(entry);
}
}
@@ -161,25 +161,11 @@ void qed_commit_l2_cache_entry(L2TableCache *l2_cache, CachedL2Table *l2_table)
return;
}
/* Evict an unused cache entry so we have space. If all entries are in use
* we can grow the cache temporarily and we try to shrink back down later.
*/
if (l2_cache->n_entries >= MAX_L2_CACHE_SIZE) {
CachedL2Table *next;
QTAILQ_FOREACH_SAFE(entry, &l2_cache->entries, node, next) {
if (entry->ref > 1) {
continue;
}
QTAILQ_REMOVE(&l2_cache->entries, entry, node);
l2_cache->n_entries--;
qed_unref_l2_cache_entry(entry);
/* Stop evicting when we've shrunk back to max size */
if (l2_cache->n_entries < MAX_L2_CACHE_SIZE) {
break;
}
}
entry = QTAILQ_FIRST(&l2_cache->entries);
QTAILQ_REMOVE(&l2_cache->entries, entry, node);
l2_cache->n_entries--;
qed_unref_l2_cache_entry(entry);
}
l2_cache->n_entries++;


@@ -29,7 +29,7 @@ static void qed_read_table_cb(void *opaque, int ret)
{
QEDReadTableCB *read_table_cb = opaque;
QEDTable *table = read_table_cb->table;
int noffsets = read_table_cb->qiov.size / sizeof(uint64_t);
int noffsets = read_table_cb->iov.iov_len / sizeof(uint64_t);
int i;
/* Handle I/O error */
@@ -54,6 +54,7 @@ static void qed_read_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
QEDReadTableCB *read_table_cb = gencb_alloc(sizeof(*read_table_cb),
cb, opaque);
QEMUIOVector *qiov = &read_table_cb->qiov;
BlockDriverAIOCB *aiocb;
trace_qed_read_table(s, offset, table);
@@ -63,9 +64,12 @@ static void qed_read_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
read_table_cb->iov.iov_len = s->header.cluster_size * s->header.table_size,
qemu_iovec_init_external(qiov, &read_table_cb->iov, 1);
bdrv_aio_readv(s->bs->file, offset / BDRV_SECTOR_SIZE, qiov,
qiov->size / BDRV_SECTOR_SIZE,
qed_read_table_cb, read_table_cb);
aiocb = bdrv_aio_readv(s->bs->file, offset / BDRV_SECTOR_SIZE, qiov,
read_table_cb->iov.iov_len / BDRV_SECTOR_SIZE,
qed_read_table_cb, read_table_cb);
if (!aiocb) {
qed_read_table_cb(read_table_cb, -EIO);
}
}
typedef struct {
@@ -123,6 +127,7 @@ static void qed_write_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
BlockDriverCompletionFunc *cb, void *opaque)
{
QEDWriteTableCB *write_table_cb;
BlockDriverAIOCB *aiocb;
unsigned int sector_mask = BDRV_SECTOR_SIZE / sizeof(uint64_t) - 1;
unsigned int start, end, i;
size_t len_bytes;
@@ -153,10 +158,13 @@ static void qed_write_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
/* Adjust for offset into table */
offset += start * sizeof(uint64_t);
bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&write_table_cb->qiov,
write_table_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_write_table_cb, write_table_cb);
aiocb = bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&write_table_cb->qiov,
write_table_cb->iov.iov_len / BDRV_SECTOR_SIZE,
qed_write_table_cb, write_table_cb);
if (!aiocb) {
qed_write_table_cb(write_table_cb, -EIO);
}
}
/**
@@ -171,12 +179,16 @@ int qed_read_l1_table_sync(BDRVQEDState *s)
{
int ret = -EINPROGRESS;
async_context_push();
qed_read_table(s, s->header.l1_table_offset,
s->l1_table, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}
@@ -193,11 +205,15 @@ int qed_write_l1_table_sync(BDRVQEDState *s, unsigned int index,
{
int ret = -EINPROGRESS;
async_context_push();
qed_write_l1_table(s, index, n, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}
@@ -266,11 +282,14 @@ int qed_read_l2_table_sync(BDRVQEDState *s, QEDRequest *request, uint64_t offset
{
int ret = -EINPROGRESS;
async_context_push();
qed_read_l2_table(s, request, offset, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}
@@ -288,10 +307,13 @@ int qed_write_l2_table_sync(BDRVQEDState *s, QEDRequest *request,
{
int ret = -EINPROGRESS;
async_context_push();
qed_write_l2_table(s, request, index, n, flush, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}


@@ -16,7 +16,6 @@
#include "trace.h"
#include "qed.h"
#include "qerror.h"
#include "migration.h"
static void qed_aio_cancel(BlockDriverAIOCB *blockacb)
{
@@ -123,6 +122,7 @@ static void qed_write_header_read_cb(void *opaque, int ret)
{
QEDWriteHeaderCB *write_header_cb = opaque;
BDRVQEDState *s = write_header_cb->s;
BlockDriverAIOCB *acb;
if (ret) {
qed_write_header_cb(write_header_cb, ret);
@@ -132,9 +132,12 @@ static void qed_write_header_read_cb(void *opaque, int ret)
/* Update header */
qed_header_cpu_to_le(&s->header, (QEDHeader *)write_header_cb->buf);
bdrv_aio_writev(s->bs->file, 0, &write_header_cb->qiov,
write_header_cb->nsectors, qed_write_header_cb,
write_header_cb);
acb = bdrv_aio_writev(s->bs->file, 0, &write_header_cb->qiov,
write_header_cb->nsectors, qed_write_header_cb,
write_header_cb);
if (!acb) {
qed_write_header_cb(write_header_cb, -EIO);
}
}
/**
@@ -152,6 +155,7 @@ static void qed_write_header(BDRVQEDState *s, BlockDriverCompletionFunc cb,
* them, and write back.
*/
BlockDriverAIOCB *acb;
int nsectors = (sizeof(QEDHeader) + BDRV_SECTOR_SIZE - 1) /
BDRV_SECTOR_SIZE;
size_t len = nsectors * BDRV_SECTOR_SIZE;
@@ -165,8 +169,11 @@ static void qed_write_header(BDRVQEDState *s, BlockDriverCompletionFunc cb,
write_header_cb->iov.iov_len = len;
qemu_iovec_init_external(&write_header_cb->qiov, &write_header_cb->iov, 1);
bdrv_aio_readv(s->bs->file, 0, &write_header_cb->qiov, nsectors,
qed_write_header_read_cb, write_header_cb);
acb = bdrv_aio_readv(s->bs->file, 0, &write_header_cb->qiov, nsectors,
qed_write_header_read_cb, write_header_cb);
if (!acb) {
qed_write_header_cb(write_header_cb, -EIO);
}
}
static uint64_t qed_max_image_size(uint32_t cluster_size, uint32_t table_size)
@@ -367,12 +374,6 @@ static void qed_cancel_need_check_timer(BDRVQEDState *s)
qemu_del_timer(s->need_check_timer);
}
static void bdrv_qed_rebind(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
s->bs = bs;
}
static int bdrv_qed_open(BlockDriverState *bs, int flags)
{
BDRVQEDState *s = bs->opaque;
@@ -387,6 +388,7 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
if (ret < 0) {
return ret;
}
ret = 0; /* ret should always be 0 or -errno */
qed_header_le_to_cpu(&le_header, &s->header);
if (s->header.magic != QED_MAGIC) {
@@ -456,7 +458,7 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
* feature is no longer valid.
*/
if ((s->header.autoclear_features & ~QED_AUTOCLEAR_FEATURE_MASK) != 0 &&
!bdrv_is_read_only(bs->file) && !(flags & BDRV_O_INCOMING)) {
!bdrv_is_read_only(bs->file)) {
s->header.autoclear_features &= QED_AUTOCLEAR_FEATURE_MASK;
ret = qed_write_header_sync(s);
@@ -483,8 +485,7 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
* potentially inconsistent images to be opened read-only. This can
* aid data recovery from an otherwise inconsistent image.
*/
if (!bdrv_is_read_only(bs->file) &&
!(flags & BDRV_O_INCOMING)) {
if (!bdrv_is_read_only(bs->file)) {
BdrvCheckResult result = {0};
ret = qed_check(s, &result, true);
@@ -532,6 +533,11 @@ static void bdrv_qed_close(BlockDriverState *bs)
qemu_vfree(s->l1_table);
}
static int bdrv_qed_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static int qed_create(const char *filename, uint32_t cluster_size,
uint64_t image_size, uint32_t table_size,
const char *backing_file, const char *backing_fmt)
@@ -589,7 +595,7 @@ static int qed_create(const char *filename, uint32_t cluster_size,
goto out;
}
l1_table = g_malloc0(l1_size);
l1_table = qemu_mallocz(l1_size);
ret = bdrv_pwrite(bs, header.l1_table_offset, l1_table, l1_size);
if (ret < 0) {
goto out;
@@ -597,7 +603,7 @@ static int qed_create(const char *filename, uint32_t cluster_size,
ret = 0; /* success */
out:
g_free(l1_table);
qemu_free(l1_table);
bdrv_delete(bs);
return ret;
}
@@ -651,7 +657,6 @@ static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options)
}
typedef struct {
Coroutine *co;
int is_allocated;
int *pnum;
} QEDIsAllocatedCB;
@@ -661,14 +666,10 @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
QEDIsAllocatedCB *cb = opaque;
*cb->pnum = len / BDRV_SECTOR_SIZE;
cb->is_allocated = (ret == QED_CLUSTER_FOUND || ret == QED_CLUSTER_ZERO);
if (cb->co) {
qemu_coroutine_enter(cb->co, NULL);
}
}
static int coroutine_fn bdrv_qed_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
static int bdrv_qed_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVQEDState *s = bs->opaque;
uint64_t pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE;
@@ -679,14 +680,16 @@ static int coroutine_fn bdrv_qed_co_is_allocated(BlockDriverState *bs,
};
QEDRequest request = { .l2_table = NULL };
async_context_push();
qed_find_cluster(s, &request, pos, len, qed_is_allocated_cb, &cb);
/* Now sleep if the callback wasn't invoked immediately */
while (cb.is_allocated == -1) {
cb.co = qemu_coroutine_self();
qemu_coroutine_yield();
qemu_aio_wait();
}
async_context_pop();
qed_unref_l2_cache_entry(request.l2_table);
return cb.is_allocated;
@@ -718,6 +721,7 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
QEMUIOVector *qiov,
BlockDriverCompletionFunc *cb, void *opaque)
{
BlockDriverAIOCB *aiocb;
uint64_t backing_length = 0;
size_t size;
@@ -749,8 +753,11 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
size = MIN((uint64_t)backing_length - pos, qiov->size);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING);
bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
aiocb = bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
if (!aiocb) {
cb(opaque, -EIO);
}
}
typedef struct {
@@ -772,6 +779,7 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
{
CopyFromBackingFileCB *copy_cb = opaque;
BDRVQEDState *s = copy_cb->s;
BlockDriverAIOCB *aiocb;
if (ret) {
qed_copy_from_backing_file_cb(copy_cb, ret);
@@ -779,9 +787,13 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
}
BLKDBG_EVENT(s->bs->file, BLKDBG_COW_WRITE);
bdrv_aio_writev(s->bs->file, copy_cb->offset / BDRV_SECTOR_SIZE,
&copy_cb->qiov, copy_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_copy_from_backing_file_cb, copy_cb);
aiocb = bdrv_aio_writev(s->bs->file, copy_cb->offset / BDRV_SECTOR_SIZE,
&copy_cb->qiov,
copy_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_copy_from_backing_file_cb, copy_cb);
if (!aiocb) {
qed_copy_from_backing_file_cb(copy_cb, -EIO);
}
}
/**
@@ -873,12 +885,6 @@ static void qed_aio_complete(QEDAIOCB *acb, int ret)
qemu_iovec_destroy(&acb->cur_qiov);
qed_unref_l2_cache_entry(acb->request.l2_table);
/* Free the buffer we may have allocated for zero writes */
if (acb->flags & QED_AIOCB_ZERO) {
qemu_vfree(acb->qiov->iov[0].iov_base);
acb->qiov->iov[0].iov_base = NULL;
}
/* Arrange for a bh to invoke the completion function */
acb->bh_ret = ret;
acb->bh = qemu_bh_new(qed_aio_complete_bh, acb);
@@ -945,8 +951,9 @@ static void qed_aio_write_l1_update(void *opaque, int ret)
/**
* Update L2 table with new cluster offsets and write them out
*/
static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset)
static void qed_aio_write_l2_update(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
bool need_alloc = acb->find_cluster_ret == QED_CLUSTER_L1;
int index;
@@ -962,7 +969,7 @@ static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset)
index = qed_l2_index(s, acb->cur_pos);
qed_update_l2_table(s, acb->request.l2_table->table, index, acb->cur_nclusters,
offset);
acb->cur_cluster);
if (need_alloc) {
/* Write out the whole new L2 table */
@@ -979,12 +986,6 @@ err:
qed_aio_complete(acb, ret);
}
static void qed_aio_write_l2_update_cb(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
qed_aio_write_l2_update(acb, ret, acb->cur_cluster);
}
/**
* Flush new data clusters before updating the L2 table
*
@@ -999,7 +1000,7 @@ static void qed_aio_write_flush_before_l2_update(void *opaque, int ret)
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
if (!bdrv_aio_flush(s->bs->file, qed_aio_write_l2_update_cb, opaque)) {
if (!bdrv_aio_flush(s->bs->file, qed_aio_write_l2_update, opaque)) {
qed_aio_complete(acb, -EIO);
}
}
@@ -1014,6 +1015,7 @@ static void qed_aio_write_main(void *opaque, int ret)
uint64_t offset = acb->cur_cluster +
qed_offset_into_cluster(s, acb->cur_pos);
BlockDriverCompletionFunc *next_fn;
BlockDriverAIOCB *file_acb;
trace_qed_aio_write_main(s, acb, ret, offset, acb->cur_qiov.size);
@@ -1028,14 +1030,18 @@ static void qed_aio_write_main(void *opaque, int ret)
if (s->bs->backing_hd) {
next_fn = qed_aio_write_flush_before_l2_update;
} else {
next_fn = qed_aio_write_l2_update_cb;
next_fn = qed_aio_write_l2_update;
}
}
BLKDBG_EVENT(s->bs->file, BLKDBG_WRITE_AIO);
bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
next_fn, acb);
file_acb = bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov,
acb->cur_qiov.size / BDRV_SECTOR_SIZE,
next_fn, acb);
if (!file_acb) {
qed_aio_complete(acb, -EIO);
}
}
/**
@@ -1090,18 +1096,6 @@ static bool qed_should_set_need_check(BDRVQEDState *s)
return !(s->header.features & QED_F_NEED_CHECK);
}
static void qed_aio_write_zero_cluster(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
if (ret) {
qed_aio_complete(acb, ret);
return;
}
qed_aio_write_l2_update(acb, 0, 1);
}
/**
* Write new data cluster
*
@@ -1113,7 +1107,6 @@ static void qed_aio_write_zero_cluster(void *opaque, int ret)
static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
{
BDRVQEDState *s = acb_to_s(acb);
BlockDriverCompletionFunc *cb;
/* Cancel timer when the first allocating request comes in */
if (QSIMPLEQ_EMPTY(&s->allocating_write_reqs)) {
@@ -1131,26 +1124,14 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
acb->cur_nclusters = qed_bytes_to_clusters(s,
qed_offset_into_cluster(s, acb->cur_pos) + len);
acb->cur_cluster = qed_alloc_clusters(s, acb->cur_nclusters);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
if (acb->flags & QED_AIOCB_ZERO) {
/* Skip ahead if the clusters are already zero */
if (acb->find_cluster_ret == QED_CLUSTER_ZERO) {
qed_aio_next_io(acb, 0);
return;
}
cb = qed_aio_write_zero_cluster;
} else {
cb = qed_aio_write_prefill;
acb->cur_cluster = qed_alloc_clusters(s, acb->cur_nclusters);
}
if (qed_should_set_need_check(s)) {
s->header.features |= QED_F_NEED_CHECK;
qed_write_header(s, cb, acb);
qed_write_header(s, qed_aio_write_prefill, acb);
} else {
cb(acb, 0);
qed_aio_write_prefill(acb, 0);
}
}
@@ -1165,16 +1146,6 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
*/
static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
{
/* Allocate buffer for zero writes */
if (acb->flags & QED_AIOCB_ZERO) {
struct iovec *iov = acb->qiov->iov;
if (!iov->iov_base) {
iov->iov_base = qemu_blockalign(acb->common.bs, iov->iov_len);
memset(iov->iov_base, 0, iov->iov_len);
}
}
/* Calculate the I/O vector */
acb->cur_cluster = offset;
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
@@ -1237,6 +1208,7 @@ static void qed_aio_read_data(void *opaque, int ret,
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
BlockDriverState *bs = acb->common.bs;
BlockDriverAIOCB *file_acb;
/* Adjust offset into cluster */
offset += qed_offset_into_cluster(s, acb->cur_pos);
@@ -1261,9 +1233,14 @@ static void qed_aio_read_data(void *opaque, int ret,
}
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
qed_aio_next_io, acb);
file_acb = bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov,
acb->cur_qiov.size / BDRV_SECTOR_SIZE,
qed_aio_next_io, acb);
if (!file_acb) {
ret = -EIO;
goto err;
}
return;
err:
@@ -1277,8 +1254,8 @@ static void qed_aio_next_io(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
QEDFindClusterFunc *io_fn = (acb->flags & QED_AIOCB_WRITE) ?
qed_aio_write_data : qed_aio_read_data;
QEDFindClusterFunc *io_fn =
acb->is_write ? qed_aio_write_data : qed_aio_read_data;
trace_qed_aio_next_io(s, acb, ret, acb->cur_pos + acb->cur_qiov.size);
@@ -1308,14 +1285,14 @@ static BlockDriverAIOCB *qed_aio_setup(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque, int flags)
void *opaque, bool is_write)
{
QEDAIOCB *acb = qemu_aio_get(&qed_aio_pool, bs, cb, opaque);
trace_qed_aio_setup(bs->opaque, acb, sector_num, nb_sectors,
opaque, flags);
opaque, is_write);
acb->flags = flags;
acb->is_write = is_write;
acb->finished = NULL;
acb->qiov = qiov;
acb->qiov_offset = 0;
@@ -1335,7 +1312,7 @@ static BlockDriverAIOCB *bdrv_qed_aio_readv(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, false);
}
static BlockDriverAIOCB *bdrv_qed_aio_writev(BlockDriverState *bs,
@@ -1344,55 +1321,14 @@ static BlockDriverAIOCB *bdrv_qed_aio_writev(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb,
opaque, QED_AIOCB_WRITE);
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, true);
}
typedef struct {
Coroutine *co;
int ret;
bool done;
} QEDWriteZeroesCB;
static void coroutine_fn qed_co_write_zeroes_cb(void *opaque, int ret)
static BlockDriverAIOCB *bdrv_qed_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
QEDWriteZeroesCB *cb = opaque;
cb->done = true;
cb->ret = ret;
if (cb->co) {
qemu_coroutine_enter(cb->co, NULL);
}
}
static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors)
{
BlockDriverAIOCB *blockacb;
QEDWriteZeroesCB cb = { .done = false };
QEMUIOVector qiov;
struct iovec iov;
/* Zero writes start without an I/O buffer. If a buffer becomes necessary
* then it will be allocated during request processing.
*/
iov.iov_base = NULL,
iov.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
qemu_iovec_init_external(&qiov, &iov, 1);
blockacb = qed_aio_setup(bs, sector_num, &qiov, nb_sectors,
qed_co_write_zeroes_cb, &cb,
QED_AIOCB_WRITE | QED_AIOCB_ZERO);
if (!blockacb) {
return -EIO;
}
if (!cb.done) {
cb.co = qemu_coroutine_self();
qemu_coroutine_yield();
}
assert(cb.done);
return cb.ret;
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int bdrv_qed_truncate(BlockDriverState *bs, int64_t offset)
@@ -1432,7 +1368,6 @@ static int bdrv_qed_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
memset(bdi, 0, sizeof(*bdi));
bdi->cluster_size = s->header.cluster_size;
bdi->is_dirty = s->header.features & QED_F_NEED_CHECK;
return 0;
}
@@ -1488,35 +1423,24 @@ static int bdrv_qed_change_backing_file(BlockDriverState *bs,
}
/* Prepare new header */
buffer = g_malloc(buffer_len);
buffer = qemu_malloc(buffer_len);
qed_header_cpu_to_le(&new_header, &le_header);
memcpy(buffer, &le_header, sizeof(le_header));
buffer_len = sizeof(le_header);
if (backing_file) {
memcpy(buffer + buffer_len, backing_file, backing_file_len);
buffer_len += backing_file_len;
}
memcpy(buffer + buffer_len, backing_file, backing_file_len);
buffer_len += backing_file_len;
/* Write new header */
ret = bdrv_pwrite_sync(bs->file, 0, buffer, buffer_len);
g_free(buffer);
qemu_free(buffer);
if (ret == 0) {
memcpy(&s->header, &new_header, sizeof(new_header));
}
return ret;
}
static void bdrv_qed_invalidate_cache(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
bdrv_qed_close(bs);
memset(s, 0, sizeof(BDRVQEDState));
bdrv_qed_open(bs, bs->open_flags);
}
static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result)
{
BDRVQEDState *s = bs->opaque;
@@ -1556,20 +1480,19 @@ static BlockDriver bdrv_qed = {
.create_options = qed_create_options,
.bdrv_probe = bdrv_qed_probe,
.bdrv_rebind = bdrv_qed_rebind,
.bdrv_open = bdrv_qed_open,
.bdrv_close = bdrv_qed_close,
.bdrv_create = bdrv_qed_create,
.bdrv_co_is_allocated = bdrv_qed_co_is_allocated,
.bdrv_flush = bdrv_qed_flush,
.bdrv_is_allocated = bdrv_qed_is_allocated,
.bdrv_make_empty = bdrv_qed_make_empty,
.bdrv_aio_readv = bdrv_qed_aio_readv,
.bdrv_aio_writev = bdrv_qed_aio_writev,
.bdrv_co_write_zeroes = bdrv_qed_co_write_zeroes,
.bdrv_aio_flush = bdrv_qed_aio_flush,
.bdrv_truncate = bdrv_qed_truncate,
.bdrv_getlength = bdrv_qed_getlength,
.bdrv_get_info = bdrv_qed_get_info,
.bdrv_change_backing_file = bdrv_qed_change_backing_file,
.bdrv_invalidate_cache = bdrv_qed_invalidate_cache,
.bdrv_check = bdrv_qed_check,
};


@@ -123,17 +123,12 @@ typedef struct QEDRequest {
CachedL2Table *l2_table;
} QEDRequest;
enum {
QED_AIOCB_WRITE = 0x0001, /* read or write? */
QED_AIOCB_ZERO = 0x0002, /* zero write, used with QED_AIOCB_WRITE */
};
typedef struct QEDAIOCB {
BlockDriverAIOCB common;
QEMUBH *bh;
int bh_ret; /* final return status for completion bh */
QSIMPLEQ_ENTRY(QEDAIOCB) next; /* next request */
int flags; /* QED_AIOCB_* bits ORed together */
bool is_write; /* false - read, true - write */
bool *finished; /* signal for cancel completion */
uint64_t end_pos; /* request end on block device, in bytes */


@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#ifndef QEMU_RAW_POSIX_AIO_H
#define QEMU_RAW_POSIX_AIO_H


@@ -29,7 +29,7 @@
#include "module.h"
#include "block/raw-posix-aio.h"
#if defined(__APPLE__) && (__MACH__)
#ifdef CONFIG_COCOA
#include <paths.h>
#include <sys/param.h>
#include <IOKit/IOKitLib.h>
@@ -230,19 +230,13 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
}
}
/* We're falling back to POSIX AIO in some cases so init always */
if (paio_init() < 0) {
goto out_free_buf;
}
#ifdef CONFIG_LINUX_AIO
/*
* Currently Linux do AIO only for files opened with O_DIRECT
* specified so check NOCACHE flag too
*/
if ((bdrv_flags & (BDRV_O_NOCACHE|BDRV_O_NATIVE_AIO)) ==
(BDRV_O_NOCACHE|BDRV_O_NATIVE_AIO)) {
/* We're falling back to POSIX AIO in some cases */
paio_init();
s->aio_ctx = laio_init();
if (!s->aio_ctx) {
goto out_free_buf;
@@ -251,6 +245,9 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
} else
#endif
{
if (paio_init() < 0) {
goto out_free_buf;
}
#ifdef CONFIG_LINUX_AIO
s->use_aio = 0;
#endif
@@ -296,6 +293,273 @@ static int raw_open(BlockDriverState *bs, const char *filename, int flags)
#endif
*/
/*
* offset and count are in bytes, but must be multiples of 512 for files
* opened with O_DIRECT. buf must be aligned to 512 bytes then.
*
* This function may be called without alignment if the caller ensures
* that O_DIRECT is not in effect.
*/
static int raw_pread_aligned(BlockDriverState *bs, int64_t offset,
uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
int ret;
ret = fd_open(bs);
if (ret < 0)
return ret;
ret = pread(s->fd, buf, count, offset);
if (ret == count)
return ret;
/* Allow reads beyond the end (needed for pwrite) */
if ((ret == 0) && bs->growable) {
int64_t size = raw_getlength(bs);
if (offset >= size) {
memset(buf, 0, count);
return count;
}
}
DEBUG_BLOCK_PRINT("raw_pread(%d:%s, %" PRId64 ", %p, %d) [%" PRId64
"] read failed %d : %d = %s\n",
s->fd, bs->filename, offset, buf, count,
bs->total_sectors, ret, errno, strerror(errno));
/* Try harder for CDrom. */
if (s->type != FTYPE_FILE) {
ret = pread(s->fd, buf, count, offset);
if (ret == count)
return ret;
ret = pread(s->fd, buf, count, offset);
if (ret == count)
return ret;
DEBUG_BLOCK_PRINT("raw_pread(%d:%s, %" PRId64 ", %p, %d) [%" PRId64
"] retry read failed %d : %d = %s\n",
s->fd, bs->filename, offset, buf, count,
bs->total_sectors, ret, errno, strerror(errno));
}
return (ret < 0) ? -errno : ret;
}
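The 512-byte restriction above applies to the buffer as well as to offset and count once the file is open with O_DIRECT; inside this driver the bounce buffer s->aligned_buf serves that purpose. A caller allocating its own suitably aligned buffer could do so along these lines (illustration only, not code from the diff):
void *buf = NULL;
if (posix_memalign(&buf, 512, 4096) != 0) {  /* 512-byte aligned, 4 KB long */
    /* handle allocation failure */
}
/* ... raw_pread_aligned(bs, 0, buf, 4096); ... */
free(buf);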
/*
* offset and count are in bytes, but must be multiples of the sector size
* for files opened with O_DIRECT. buf must be aligned to sector size bytes
* then.
*
* This function may be called without alignment if the caller ensures
* that O_DIRECT is not in effect.
*/
static int raw_pwrite_aligned(BlockDriverState *bs, int64_t offset,
const uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
int ret;
ret = fd_open(bs);
if (ret < 0)
return -errno;
ret = pwrite(s->fd, buf, count, offset);
if (ret == count)
return ret;
DEBUG_BLOCK_PRINT("raw_pwrite(%d:%s, %" PRId64 ", %p, %d) [%" PRId64
"] write failed %d : %d = %s\n",
s->fd, bs->filename, offset, buf, count,
bs->total_sectors, ret, errno, strerror(errno));
return (ret < 0) ? -errno : ret;
}
/*
* offset and count are in bytes and possibly not aligned. For files opened
* with O_DIRECT, necessary alignments are ensured before calling
* raw_pread_aligned to do the actual read.
*/
static int raw_pread(BlockDriverState *bs, int64_t offset,
uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
unsigned sector_mask = bs->buffer_alignment - 1;
int size, ret, shift, sum;
sum = 0;
if (s->aligned_buf != NULL) {
if (offset & sector_mask) {
/* align offset on a sector size bytes boundary */
shift = offset & sector_mask;
size = (shift + count + sector_mask) & ~sector_mask;
if (size > s->aligned_buf_size)
size = s->aligned_buf_size;
ret = raw_pread_aligned(bs, offset - shift, s->aligned_buf, size);
if (ret < 0)
return ret;
size = bs->buffer_alignment - shift;
if (size > count)
size = count;
memcpy(buf, s->aligned_buf + shift, size);
buf += size;
offset += size;
count -= size;
sum += size;
if (count == 0)
return sum;
}
if (count & sector_mask || (uintptr_t) buf & sector_mask) {
/* read on aligned buffer */
while (count) {
size = (count + sector_mask) & ~sector_mask;
if (size > s->aligned_buf_size)
size = s->aligned_buf_size;
ret = raw_pread_aligned(bs, offset, s->aligned_buf, size);
if (ret < 0) {
return ret;
} else if (ret == 0) {
fprintf(stderr, "raw_pread: read beyond end of file\n");
abort();
}
size = ret;
if (size > count)
size = count;
memcpy(buf, s->aligned_buf, size);
buf += size;
offset += size;
count -= size;
sum += size;
}
return sum;
}
}
return raw_pread_aligned(bs, offset, buf, count) + sum;
}
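The masking arithmetic in raw_pread() rounds an unaligned request out to sector boundaries before going through aligned_buf. With invented numbers (buffer_alignment == 512, offset == 700, count == 100) the first branch works out as:
unsigned sector_mask = 512 - 1;                          /* 0x1ff */
int shift = 700 & sector_mask;                           /* 188: distance past the sector start at 512 */
int size  = (shift + 100 + sector_mask) & ~sector_mask;  /* 512: one aligned sector to read */
/* raw_pread_aligned() reads 512 bytes at offset 512, then the caller's
 * 100 bytes are copied out of aligned_buf starting at offset 188 */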
static int raw_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
ret = raw_pread(bs, sector_num * BDRV_SECTOR_SIZE, buf,
nb_sectors * BDRV_SECTOR_SIZE);
if (ret == (nb_sectors * BDRV_SECTOR_SIZE))
ret = 0;
return ret;
}
/*
* offset and count are in bytes and possibly not aligned. For files opened
* with O_DIRECT, necessary alignments are ensured before calling
* raw_pwrite_aligned to do the actual write.
*/
static int raw_pwrite(BlockDriverState *bs, int64_t offset,
const uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
unsigned sector_mask = bs->buffer_alignment - 1;
int size, ret, shift, sum;
sum = 0;
if (s->aligned_buf != NULL) {
if (offset & sector_mask) {
/* align offset on a sector size bytes boundary */
shift = offset & sector_mask;
ret = raw_pread_aligned(bs, offset - shift, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
size = bs->buffer_alignment - shift;
if (size > count)
size = count;
memcpy(s->aligned_buf + shift, buf, size);
ret = raw_pwrite_aligned(bs, offset - shift, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
buf += size;
offset += size;
count -= size;
sum += size;
if (count == 0)
return sum;
}
if (count & sector_mask || (uintptr_t) buf & sector_mask) {
while ((size = (count & ~sector_mask)) != 0) {
if (size > s->aligned_buf_size)
size = s->aligned_buf_size;
memcpy(s->aligned_buf, buf, size);
ret = raw_pwrite_aligned(bs, offset, s->aligned_buf, size);
if (ret < 0)
return ret;
buf += ret;
offset += ret;
count -= ret;
sum += ret;
}
/* here, count < sector_size because (count & ~sector_mask) == 0 */
if (count) {
ret = raw_pread_aligned(bs, offset, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
memcpy(s->aligned_buf, buf, count);
ret = raw_pwrite_aligned(bs, offset, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
if (count < ret)
ret = count;
sum += ret;
}
return sum;
}
}
return raw_pwrite_aligned(bs, offset, buf, count) + sum;
}
static int raw_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
ret = raw_pwrite(bs, sector_num * BDRV_SECTOR_SIZE, buf,
nb_sectors * BDRV_SECTOR_SIZE);
if (ret == (nb_sectors * BDRV_SECTOR_SIZE))
ret = 0;
return ret;
}
/*
* Check if all memory in this vector is sector aligned.
*/
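A check of this kind typically walks every element of the QEMUIOVector and rejects any base pointer or length that is not a multiple of the sector size. A minimal sketch of such a helper (this is not the function from the diff; the name is made up):
static int iov_is_sector_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
{
    int i;
    for (i = 0; i < qiov->niov; i++) {
        if ((uintptr_t)qiov->iov[i].iov_base % bs->buffer_alignment ||
            qiov->iov[i].iov_len % bs->buffer_alignment) {
            return 0;
        }
    }
    return 1;
}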
@@ -323,7 +587,7 @@ static BlockDriverAIOCB *raw_aio_submit(BlockDriverState *bs,
/*
* If O_DIRECT is used the buffer needs to be aligned on a sector
* boundary. Check if this is the case or tell the low-level
* boundary. Check if this is the case or telll the low-level
* driver that it needs to copy the buffer.
*/
if (s->aligned_buf) {
@@ -382,24 +646,10 @@ static void raw_close(BlockDriverState *bs)
static int raw_truncate(BlockDriverState *bs, int64_t offset)
{
BDRVRawState *s = bs->opaque;
struct stat st;
if (fstat(s->fd, &st)) {
return -errno;
}
if (S_ISREG(st.st_mode)) {
if (ftruncate(s->fd, offset) < 0) {
return -errno;
}
} else if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode)) {
if (offset > raw_getlength(bs)) {
return -EINVAL;
}
} else {
if (s->type != FTYPE_FILE)
return -ENOTSUP;
}
if (ftruncate(s->fd, offset) < 0)
return -errno;
return 0;
}
@@ -505,7 +755,7 @@ again:
}
if (size == 0)
#endif
#if defined(__APPLE__) && defined(__MACH__)
#ifdef CONFIG_COCOA
size = LONG_LONG_MAX;
#else
size = lseek(fd, 0LL, SEEK_END);
@@ -583,6 +833,12 @@ static int raw_create(const char *filename, QEMUOptionParameter *options)
return result;
}
static int raw_flush(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
return qemu_fdatasync(s->fd);
}
#ifdef CONFIG_XFS
static int xfs_discard(BDRVRawState *s, int64_t sector_num, int nb_sectors)
{
@@ -602,8 +858,7 @@ static int xfs_discard(BDRVRawState *s, int64_t sector_num, int nb_sectors)
}
#endif
static coroutine_fn int raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
static int raw_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors)
{
#ifdef CONFIG_XFS
BDRVRawState *s = bs->opaque;
@@ -631,9 +886,12 @@ static BlockDriver bdrv_file = {
.instance_size = sizeof(BDRVRawState),
.bdrv_probe = NULL, /* no probe for protocols */
.bdrv_file_open = raw_open,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_close = raw_close,
.bdrv_create = raw_create,
.bdrv_co_discard = raw_co_discard,
.bdrv_flush = raw_flush,
.bdrv_discard = raw_discard,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
@@ -650,7 +908,7 @@ static BlockDriver bdrv_file = {
/***********************************************/
/* host device */
#if defined(__APPLE__) && defined(__MACH__)
#ifdef CONFIG_COCOA
static kern_return_t FindEjectableCDMedia( io_iterator_t *mediaIterator );
static kern_return_t GetBSDPath( io_iterator_t mediaIterator, char *bsdPath, CFIndex maxPathSize );
@@ -728,7 +986,7 @@ static int hdev_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVRawState *s = bs->opaque;
#if defined(__APPLE__) && defined(__MACH__)
#ifdef CONFIG_COCOA
if (strstart(filename, "/dev/cdrom", NULL)) {
kern_return_t kernResult;
io_iterator_t mediaIterator;
@@ -902,12 +1160,14 @@ static BlockDriver bdrv_host_device = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -994,7 +1254,7 @@ static int floppy_media_changed(BlockDriverState *bs)
return ret;
}
static void floppy_eject(BlockDriverState *bs, bool eject_flag)
static int floppy_eject(BlockDriverState *bs, int eject_flag)
{
BDRVRawState *s = bs->opaque;
int fd;
@@ -1009,6 +1269,8 @@ static void floppy_eject(BlockDriverState *bs, bool eject_flag)
perror("FDEJECT");
close(fd);
}
return 0;
}
static BlockDriver bdrv_host_floppy = {
@@ -1021,12 +1283,14 @@ static BlockDriver bdrv_host_floppy = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1084,7 +1348,7 @@ static int cdrom_is_inserted(BlockDriverState *bs)
return 0;
}
static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
static int cdrom_eject(BlockDriverState *bs, int eject_flag)
{
BDRVRawState *s = bs->opaque;
@@ -1095,9 +1359,11 @@ static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
if (ioctl(s->fd, CDROMCLOSETRAY, NULL) < 0)
perror("CDROMEJECT");
}
return 0;
}
static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
static int cdrom_set_locked(BlockDriverState *bs, int locked)
{
BDRVRawState *s = bs->opaque;
@@ -1108,6 +1374,8 @@ static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
*/
/* perror("CDROM_LOCKDOOR"); */
}
return 0;
}
static BlockDriver bdrv_host_cdrom = {
@@ -1120,12 +1388,14 @@ static BlockDriver bdrv_host_cdrom = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1133,7 +1403,7 @@ static BlockDriver bdrv_host_cdrom = {
/* removable device support */
.bdrv_is_inserted = cdrom_is_inserted,
.bdrv_eject = cdrom_eject,
.bdrv_lock_medium = cdrom_lock_medium,
.bdrv_set_locked = cdrom_set_locked,
/* generic scsi device */
.bdrv_ioctl = hdev_ioctl,
@@ -1153,7 +1423,7 @@ static int cdrom_open(BlockDriverState *bs, const char *filename, int flags)
if (ret)
return ret;
/* make sure the door isn't locked at this time */
/* make sure the door isnt locked at this time */
ioctl(s->fd, CDIOCALLOW);
return 0;
}
@@ -1184,7 +1454,7 @@ static int cdrom_reopen(BlockDriverState *bs)
}
s->fd = fd;
/* make sure the door isn't locked at this time */
/* make sure the door isnt locked at this time */
ioctl(s->fd, CDIOCALLOW);
return 0;
}
@@ -1194,12 +1464,12 @@ static int cdrom_is_inserted(BlockDriverState *bs)
return raw_getlength(bs) > 0;
}
static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
static int cdrom_eject(BlockDriverState *bs, int eject_flag)
{
BDRVRawState *s = bs->opaque;
if (s->fd < 0)
return;
return -ENOTSUP;
(void) ioctl(s->fd, CDIOCALLOW);
@@ -1211,15 +1481,17 @@ static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
perror("CDIOCCLOSE");
}
cdrom_reopen(bs);
if (cdrom_reopen(bs) < 0)
return -ENOTSUP;
return 0;
}
static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
static int cdrom_set_locked(BlockDriverState *bs, int locked)
{
BDRVRawState *s = bs->opaque;
if (s->fd < 0)
return;
return -ENOTSUP;
if (ioctl(s->fd, (locked ? CDIOCPREVENT : CDIOCALLOW)) < 0) {
/*
* Note: an error can happen if the distribution automatically
@@ -1227,6 +1499,8 @@ static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
*/
/* perror("CDROM_LOCKDOOR"); */
}
return 0;
}
static BlockDriver bdrv_host_cdrom = {
@@ -1239,12 +1513,14 @@ static BlockDriver bdrv_host_cdrom = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1252,7 +1528,7 @@ static BlockDriver bdrv_host_cdrom = {
/* removable device support */
.bdrv_is_inserted = cdrom_is_inserted,
.bdrv_eject = cdrom_eject,
.bdrv_lock_medium = cdrom_lock_medium,
.bdrv_set_locked = cdrom_set_locked,
};
#endif /* __FreeBSD__ */


@@ -41,7 +41,6 @@ typedef struct BDRVRawState {
int qemu_ftruncate64(int fd, int64_t length)
{
LARGE_INTEGER li;
DWORD dw;
LONG high;
HANDLE h;
BOOL res;
@@ -54,15 +53,12 @@ int qemu_ftruncate64(int fd, int64_t length)
/* get current position, ftruncate do not change position */
li.HighPart = 0;
li.LowPart = SetFilePointer (h, 0, &li.HighPart, FILE_CURRENT);
if (li.LowPart == INVALID_SET_FILE_POINTER && GetLastError() != NO_ERROR) {
if (li.LowPart == 0xffffffffUL && GetLastError() != NO_ERROR)
return -1;
}
high = length >> 32;
dw = SetFilePointer(h, (DWORD) length, &high, FILE_BEGIN);
if (dw == INVALID_SET_FILE_POINTER && GetLastError() != NO_ERROR) {
if (!SetFilePointer(h, (DWORD) length, &high, FILE_BEGIN))
return -1;
}
res = SetEndOfFile(h);
/* back to old position */
@@ -281,11 +277,9 @@ static BlockDriver bdrv_file = {
.bdrv_file_open = raw_open,
.bdrv_close = raw_close,
.bdrv_create = raw_create,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_co_flush_to_disk = raw_flush,
.bdrv_flush = raw_flush,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
@@ -399,6 +393,41 @@ static int hdev_open(BlockDriverState *bs, const char *filename, int flags)
return 0;
}
#if 0
/***********************************************/
/* removable device additional commands */
static int raw_is_inserted(BlockDriverState *bs)
{
return 1;
}
static int raw_media_changed(BlockDriverState *bs)
{
return -ENOTSUP;
}
static int raw_eject(BlockDriverState *bs, int eject_flag)
{
DWORD ret_count;
if (s->type == FTYPE_FILE)
return -ENOTSUP;
if (eject_flag) {
DeviceIoControl(s->hfile, IOCTL_STORAGE_EJECT_MEDIA,
NULL, 0, NULL, 0, &lpBytesReturned, NULL);
} else {
DeviceIoControl(s->hfile, IOCTL_STORAGE_LOAD_MEDIA,
NULL, 0, NULL, 0, &lpBytesReturned, NULL);
}
}
static int raw_set_locked(BlockDriverState *bs, int locked)
{
return -ENOTSUP;
}
#endif
static int hdev_has_zero_init(BlockDriverState *bs)
{
return 0;
@@ -411,12 +440,11 @@ static BlockDriver bdrv_host_device = {
.bdrv_probe_device = hdev_probe_device,
.bdrv_file_open = hdev_open,
.bdrv_close = raw_close,
.bdrv_flush = raw_flush,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_co_flush_to_disk = raw_flush,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,


@@ -9,22 +9,47 @@ static int raw_open(BlockDriverState *bs, int flags)
return 0;
}
static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static int raw_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
return bdrv_read(bs->file, sector_num, buf, nb_sectors);
}
static int coroutine_fn raw_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static int raw_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
return bdrv_write(bs->file, sector_num, buf, nb_sectors);
}
static BlockDriverAIOCB *raw_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static BlockDriverAIOCB *raw_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static void raw_close(BlockDriverState *bs)
{
}
static int raw_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static BlockDriverAIOCB *raw_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int64_t raw_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
@@ -40,10 +65,9 @@ static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
return 1; /* everything can be opened as raw image */
}
static int coroutine_fn raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
static int raw_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors)
{
return bdrv_co_discard(bs->file, sector_num, nb_sectors);
return bdrv_discard(bs->file, sector_num, nb_sectors);
}
static int raw_is_inserted(BlockDriverState *bs)
@@ -51,19 +75,15 @@ static int raw_is_inserted(BlockDriverState *bs)
return bdrv_is_inserted(bs->file);
}
static int raw_media_changed(BlockDriverState *bs)
static int raw_eject(BlockDriverState *bs, int eject_flag)
{
return bdrv_media_changed(bs->file);
return bdrv_eject(bs->file, eject_flag);
}
static void raw_eject(BlockDriverState *bs, bool eject_flag)
static int raw_set_locked(BlockDriverState *bs, int locked)
{
bdrv_eject(bs->file, eject_flag);
}
static void raw_lock_medium(BlockDriverState *bs, bool locked)
{
bdrv_lock_medium(bs->file, locked);
bdrv_set_locked(bs->file, locked);
return 0;
}
static int raw_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
@@ -100,25 +120,26 @@ static int raw_has_zero_init(BlockDriverState *bs)
static BlockDriver bdrv_raw = {
.format_name = "raw",
/* It's really 0, but we need to make g_malloc() happy */
/* It's really 0, but we need to make qemu_malloc() happy */
.instance_size = 1,
.bdrv_open = raw_open,
.bdrv_close = raw_close,
.bdrv_co_readv = raw_co_readv,
.bdrv_co_writev = raw_co_writev,
.bdrv_co_discard = raw_co_discard,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_flush = raw_flush,
.bdrv_probe = raw_probe,
.bdrv_getlength = raw_getlength,
.bdrv_truncate = raw_truncate,
.bdrv_is_inserted = raw_is_inserted,
.bdrv_media_changed = raw_media_changed,
.bdrv_eject = raw_eject,
.bdrv_lock_medium = raw_lock_medium,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_discard = raw_discard,
.bdrv_is_inserted = raw_is_inserted,
.bdrv_eject = raw_eject,
.bdrv_set_locked = raw_set_locked,
.bdrv_ioctl = raw_ioctl,
.bdrv_aio_ioctl = raw_aio_ioctl,


@@ -7,50 +7,43 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include <inttypes.h>
#include "qemu-common.h"
#include "qemu-error.h"
#include "block_int.h"
#include <rbd/librbd.h>
/*
* When specifying the image filename use:
*
* rbd:poolname/devicename[@snapshotname][:option1=value1[:option2=value2...]]
*
* poolname must be the name of an existing rados pool.
* poolname must be the name of an existing rados pool
*
* devicename is the name of the rbd image.
* devicename is the basename for all objects used to
* emulate the raw device.
*
* Each option given is used to configure rados, and may be any valid
* Ceph option, "id", or "conf".
* Each option given is used to configure rados, and may be
* any Ceph option, or "conf". The "conf" option specifies
* a Ceph configuration file to read.
*
* The "id" option indicates what user we should authenticate as to
* the Ceph cluster. If it is excluded we will use the Ceph default
* (normally 'admin').
* Metadata information (image size, ...) is stored in an
* object with the name "devicename.rbd".
*
* The "conf" option specifies a Ceph configuration file to read. If
* it is not specified, we will read from the default Ceph locations
* (e.g., /etc/ceph/ceph.conf). To avoid reading _any_ configuration
* file, specify conf=/dev/null.
* The raw device is split into 4MB sized objects by default.
* The sequencenumber is encoded in a 12 byte long hex-string,
* and is attached to the devicename, separated by a dot.
* e.g. "devicename.1234567890ab"
*
* Configuration values containing :, @, or = can be escaped with a
* leading "\".
*/
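Putting the pieces of the format described above together, a complete image specification could look like the following (pool, image and snapshot names are hypothetical):
rbd:mypool/myimage@snap1:conf=/etc/ceph/ceph.conf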
/* rbd_aio_discard added in 0.1.2 */
#if LIBRBD_VERSION_CODE >= LIBRBD_VERSION(0, 1, 2)
#define LIBRBD_SUPPORTS_DISCARD
#else
#undef LIBRBD_SUPPORTS_DISCARD
#endif
#define OBJ_MAX_SIZE (1UL << OBJ_DEFAULT_OBJ_ORDER)
#define RBD_MAX_CONF_NAME_SIZE 128
@@ -60,19 +53,13 @@
#define RBD_MAX_SNAP_NAME_SIZE 128
#define RBD_MAX_SNAPS 100
typedef enum {
RBD_AIO_READ,
RBD_AIO_WRITE,
RBD_AIO_DISCARD
} RBDAIOCmd;
typedef struct RBDAIOCB {
BlockDriverAIOCB common;
QEMUBH *bh;
int ret;
QEMUIOVector *qiov;
char *bounce;
RBDAIOCmd cmd;
int write;
int64_t sector_num;
int error;
struct BDRVRBDState *s;
@@ -117,15 +104,8 @@ static int qemu_rbd_next_tok(char *dst, int dst_len,
*p = NULL;
if (delim != '\0') {
for (end = src; *end; ++end) {
if (*end == delim) {
break;
}
if (*end == '\\' && end[1] != '\0') {
end++;
}
}
if (*end == delim) {
end = strchr(src, delim);
if (end) {
*p = end + 1;
*end = '\0';
}
@@ -144,19 +124,6 @@ static int qemu_rbd_next_tok(char *dst, int dst_len,
return 0;
}
static void qemu_rbd_unescape(char *src)
{
char *p;
for (p = src; *src; ++src, ++p) {
if (*src == '\\' && src[1] != '\0') {
src++;
}
*p = *src;
}
*p = '\0';
}
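For a value that was escaped in the filename string as described in the header comment, qemu_rbd_unescape() simply strips the backslashes in place, e.g. (hypothetical value):
char value[] = "/etc/ceph/cluster\\=prod.conf";  /* arrives as ...cluster\=prod.conf after tokenizing */
qemu_rbd_unescape(value);                        /* value is now "/etc/ceph/cluster=prod.conf" */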
static int qemu_rbd_parsename(const char *filename,
char *pool, int pool_len,
char *snap, int snap_len,
@@ -171,7 +138,7 @@ static int qemu_rbd_parsename(const char *filename,
return -EINVAL;
}
buf = g_strdup(start);
buf = qemu_strdup(start);
p = buf;
*snap = '\0';
*conf = '\0';
@@ -181,7 +148,6 @@ static int qemu_rbd_parsename(const char *filename,
ret = -EINVAL;
goto done;
}
qemu_rbd_unescape(pool);
if (strchr(p, '@')) {
ret = qemu_rbd_next_tok(name, name_len, p, '@', "object name", &p);
@@ -189,11 +155,9 @@ static int qemu_rbd_parsename(const char *filename,
goto done;
}
ret = qemu_rbd_next_tok(snap, snap_len, p, ':', "snap name", &p);
qemu_rbd_unescape(snap);
} else {
ret = qemu_rbd_next_tok(name, name_len, p, ':', "object name", &p);
}
qemu_rbd_unescape(name);
if (ret < 0 || !p) {
goto done;
}
@@ -201,38 +165,10 @@ static int qemu_rbd_parsename(const char *filename,
ret = qemu_rbd_next_tok(conf, conf_len, p, '\0', "configuration", &p);
done:
g_free(buf);
qemu_free(buf);
return ret;
}
static char *qemu_rbd_parse_clientname(const char *conf, char *clientname)
{
const char *p = conf;
while (*p) {
int len;
const char *end = strchr(p, ':');
if (end) {
len = end - p;
} else {
len = strlen(p);
}
if (strncmp(p, "id=", 3) == 0) {
len -= 3;
strncpy(clientname, p + 3, len);
clientname[len] = '\0';
return clientname;
}
if (end == NULL) {
break;
}
p = end + 1;
}
return NULL;
}
static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
{
char *p, *buf;
@@ -240,7 +176,7 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
char value[RBD_MAX_CONF_VAL_SIZE];
int ret = 0;
buf = g_strdup(conf);
buf = qemu_strdup(conf);
p = buf;
while (p) {
@@ -249,7 +185,6 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
if (ret < 0) {
break;
}
qemu_rbd_unescape(name);
if (!p) {
error_report("conf option %s has no value", name);
@@ -262,27 +197,24 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
if (ret < 0) {
break;
}
qemu_rbd_unescape(value);
if (strcmp(name, "conf") == 0) {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_report("error reading conf file %s", value);
break;
}
} else if (strcmp(name, "id") == 0) {
/* ignore, this is parsed by qemu_rbd_parse_clientname() */
} else {
if (strcmp(name, "conf")) {
ret = rados_conf_set(cluster, name, value);
if (ret < 0) {
error_report("invalid conf option %s", name);
ret = -EINVAL;
break;
}
} else {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_report("error reading conf file %s", value);
break;
}
}
}
g_free(buf);
qemu_free(buf);
return ret;
}
@@ -295,8 +227,6 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options)
char name[RBD_MAX_IMAGE_NAME_SIZE];
char snap_buf[RBD_MAX_SNAP_NAME_SIZE];
char conf[RBD_MAX_CONF_SIZE];
char clientname_buf[RBD_MAX_CONF_SIZE];
char *clientname;
rados_t cluster;
rados_ioctx_t io_ctx;
int ret;
@@ -329,15 +259,17 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options)
options++;
}
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
if (rados_create(&cluster, clientname) < 0) {
if (rados_create(&cluster, NULL) < 0) {
error_report("error initializing");
return -EIO;
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(cluster, NULL);
if (rados_conf_read_file(cluster, NULL) < 0) {
error_report("error reading config file");
rados_shutdown(cluster);
return -EIO;
}
}
if (conf[0] != '\0' &&
@@ -384,8 +316,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
r = rcb->ret;
if (acb->cmd == RBD_AIO_WRITE ||
acb->cmd == RBD_AIO_DISCARD) {
if (acb->write) {
if (r < 0) {
acb->ret = r;
acb->error = 1;
@@ -410,7 +341,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->bh = qemu_bh_new(rbd_aio_bh_cb, acb);
qemu_bh_schedule(acb->bh);
done:
g_free(rcb);
qemu_free(rcb);
}
/*
@@ -427,14 +358,15 @@ static void qemu_rbd_aio_event_reader(void *opaque)
char *p = (char *)&s->event_rcb;
/* now read the rcb pointer that was sent from a non qemu thread */
ret = read(s->fds[RBD_FD_READ], p + s->event_reader_pos,
sizeof(s->event_rcb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_rcb)) {
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
s->qemu_aio_count--;
if ((ret = read(s->fds[RBD_FD_READ], p + s->event_reader_pos,
sizeof(s->event_rcb) - s->event_reader_pos)) > 0) {
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_rcb)) {
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
s->qemu_aio_count--;
}
}
}
} while (ret < 0 && errno == EINTR);
@@ -453,8 +385,6 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
char pool[RBD_MAX_POOL_NAME_SIZE];
char snap_buf[RBD_MAX_SNAP_NAME_SIZE];
char conf[RBD_MAX_CONF_SIZE];
char clientname_buf[RBD_MAX_CONF_SIZE];
char *clientname;
int r;
if (qemu_rbd_parsename(filename, pool, sizeof(pool),
@@ -463,48 +393,55 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
conf, sizeof(conf)) < 0) {
return -EINVAL;
}
s->snap = NULL;
if (snap_buf[0] != '\0') {
s->snap = qemu_strdup(snap_buf);
}
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
r = rados_create(&s->cluster, clientname);
r = rados_create(&s->cluster, NULL);
if (r < 0) {
error_report("error initializing");
return r;
}
s->snap = NULL;
if (snap_buf[0] != '\0') {
s->snap = g_strdup(snap_buf);
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
r = rados_conf_read_file(s->cluster, NULL);
if (r < 0) {
error_report("error reading config file");
rados_shutdown(s->cluster);
return r;
}
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf);
if (r < 0) {
error_report("error setting config options");
goto failed_shutdown;
rados_shutdown(s->cluster);
return r;
}
}
r = rados_connect(s->cluster);
if (r < 0) {
error_report("error connecting");
goto failed_shutdown;
rados_shutdown(s->cluster);
return r;
}
r = rados_ioctx_create(s->cluster, pool, &s->io_ctx);
if (r < 0) {
error_report("error opening pool %s", pool);
goto failed_shutdown;
rados_shutdown(s->cluster);
return r;
}
r = rbd_open(s->io_ctx, s->name, &s->image, s->snap);
if (r < 0) {
error_report("error reading header from %s", s->name);
goto failed_open;
rados_ioctx_destroy(s->io_ctx);
rados_shutdown(s->cluster);
return r;
}
bs->read_only = (s->snap != NULL);
@@ -518,18 +455,15 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
NULL, qemu_rbd_aio_flush_cb, s);
NULL, qemu_rbd_aio_flush_cb, NULL, s);
return 0;
failed:
rbd_close(s->image);
failed_open:
rados_ioctx_destroy(s->io_ctx);
failed_shutdown:
rados_shutdown(s->cluster);
g_free(s->snap);
return r;
}
@@ -539,11 +473,12 @@ static void qemu_rbd_close(BlockDriverState *bs)
close(s->fds[0]);
close(s->fds[1]);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL , NULL, NULL, NULL,
NULL);
rbd_close(s->image);
rados_ioctx_destroy(s->io_ctx);
g_free(s->snap);
qemu_free(s->snap);
rados_shutdown(s->cluster);
}
@@ -609,7 +544,7 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
ret = qemu_rbd_send_pipe(rcb->s, rcb);
if (ret < 0) {
error_report("failed writing to acb->s->fds");
g_free(rcb);
qemu_free(rcb);
}
}
@@ -619,7 +554,7 @@ static void rbd_aio_bh_cb(void *opaque)
{
RBDAIOCB *acb = opaque;
if (acb->cmd == RBD_AIO_READ) {
if (!acb->write) {
qemu_iovec_from_buffer(acb->qiov, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
@@ -630,25 +565,12 @@ static void rbd_aio_bh_cb(void *opaque)
qemu_aio_release(acb);
}
static int rbd_aio_discard_wrapper(rbd_image_t image,
uint64_t off,
uint64_t len,
rbd_completion_t comp)
{
#ifdef LIBRBD_SUPPORTS_DISCARD
return rbd_aio_discard(image, off, len, comp);
#else
return -ENOTSUP;
#endif
}
static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque,
RBDAIOCmd cmd)
static BlockDriverAIOCB *rbd_aio_rw_vector(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque, int write)
{
RBDAIOCB *acb;
RADOSCB *rcb;
@@ -660,20 +582,19 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
BDRVRBDState *s = bs->opaque;
acb = qemu_aio_get(&rbd_aio_pool, bs, cb, opaque);
acb->cmd = cmd;
acb->qiov = qiov;
if (cmd == RBD_AIO_DISCARD) {
acb->bounce = NULL;
} else {
acb->bounce = qemu_blockalign(bs, qiov->size);
if (!acb) {
return NULL;
}
acb->write = write;
acb->qiov = qiov;
acb->bounce = qemu_blockalign(bs, qiov->size);
acb->ret = 0;
acb->error = 0;
acb->s = s;
acb->cancelled = 0;
acb->bh = NULL;
if (cmd == RBD_AIO_WRITE) {
if (write) {
qemu_iovec_to_buffer(acb->qiov, acb->bounce);
}
@@ -684,7 +605,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
s->qemu_aio_count++; /* All the RADOSCB */
rcb = g_malloc(sizeof(RADOSCB));
rcb = qemu_malloc(sizeof(RADOSCB));
rcb->done = 0;
rcb->acb = acb;
rcb->buf = buf;
@@ -695,18 +616,10 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
goto failed;
}
switch (cmd) {
case RBD_AIO_WRITE:
if (write) {
r = rbd_aio_write(s->image, off, size, buf, c);
break;
case RBD_AIO_READ:
} else {
r = rbd_aio_read(s->image, off, size, buf, c);
break;
case RBD_AIO_DISCARD:
r = rbd_aio_discard_wrapper(s->image, off, size, c);
break;
default:
r = -EINVAL;
}
if (r < 0) {
@@ -716,7 +629,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
return &acb->common;
failed:
g_free(rcb);
qemu_free(rcb);
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
@@ -729,8 +642,7 @@ static BlockDriverAIOCB *qemu_rbd_aio_readv(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return rbd_start_aio(bs, sector_num, qiov, nb_sectors, cb, opaque,
RBD_AIO_READ);
return rbd_aio_rw_vector(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
}
static BlockDriverAIOCB *qemu_rbd_aio_writev(BlockDriverState *bs,
@@ -740,19 +652,7 @@ static BlockDriverAIOCB *qemu_rbd_aio_writev(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return rbd_start_aio(bs, sector_num, qiov, nb_sectors, cb, opaque,
RBD_AIO_WRITE);
}
static int qemu_rbd_co_flush(BlockDriverState *bs)
{
#if LIBRBD_VERSION_CODE >= LIBRBD_VERSION(0, 1, 1)
/* rbd_flush added in 0.1.1 */
BDRVRBDState *s = bs->opaque;
return rbd_flush(s->image);
#else
return 0;
#endif
return rbd_aio_rw_vector(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
}
static int qemu_rbd_getinfo(BlockDriverState *bs, BlockDriverInfo *bdi)
@@ -829,26 +729,6 @@ static int qemu_rbd_snap_create(BlockDriverState *bs,
return 0;
}
static int qemu_rbd_snap_remove(BlockDriverState *bs,
const char *snapshot_name)
{
BDRVRBDState *s = bs->opaque;
int r;
r = rbd_snap_remove(s->image, snapshot_name);
return r;
}
static int qemu_rbd_snap_rollback(BlockDriverState *bs,
const char *snapshot_name)
{
BDRVRBDState *s = bs->opaque;
int r;
r = rbd_snap_rollback(s->image, snapshot_name);
return r;
}
static int qemu_rbd_snap_list(BlockDriverState *bs,
QEMUSnapshotInfo **psn_tab)
{
@@ -859,18 +739,18 @@ static int qemu_rbd_snap_list(BlockDriverState *bs,
int max_snaps = RBD_MAX_SNAPS;
do {
snaps = g_malloc(sizeof(*snaps) * max_snaps);
snaps = qemu_malloc(sizeof(*snaps) * max_snaps);
snap_count = rbd_snap_list(s->image, snaps, &max_snaps);
if (snap_count < 0) {
g_free(snaps);
qemu_free(snaps);
}
} while (snap_count == -ERANGE);
if (snap_count <= 0) {
goto done;
return snap_count;
}
sn_tab = g_malloc0(snap_count * sizeof(QEMUSnapshotInfo));
sn_tab = qemu_mallocz(snap_count * sizeof(QEMUSnapshotInfo));
for (i = 0; i < snap_count; i++) {
const char *snap_name = snaps[i].name;
@@ -886,23 +766,10 @@ static int qemu_rbd_snap_list(BlockDriverState *bs,
}
rbd_snap_list_end(snaps);
done:
*psn_tab = sn_tab;
return snap_count;
}
#ifdef LIBRBD_SUPPORTS_DISCARD
static BlockDriverAIOCB* qemu_rbd_aio_discard(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return rbd_start_aio(bs, sector_num, NULL, nb_sectors, cb, opaque,
RBD_AIO_DISCARD);
}
#endif
static QEMUOptionParameter qemu_rbd_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -929,18 +796,11 @@ static BlockDriver bdrv_rbd = {
.bdrv_truncate = qemu_rbd_truncate,
.protocol_name = "rbd",
.bdrv_aio_readv = qemu_rbd_aio_readv,
.bdrv_aio_writev = qemu_rbd_aio_writev,
.bdrv_co_flush_to_disk = qemu_rbd_co_flush,
.bdrv_aio_readv = qemu_rbd_aio_readv,
.bdrv_aio_writev = qemu_rbd_aio_writev,
#ifdef LIBRBD_SUPPORTS_DISCARD
.bdrv_aio_discard = qemu_rbd_aio_discard,
#endif
.bdrv_snapshot_create = qemu_rbd_snap_create,
.bdrv_snapshot_delete = qemu_rbd_snap_remove,
.bdrv_snapshot_list = qemu_rbd_snap_list,
.bdrv_snapshot_goto = qemu_rbd_snap_rollback,
.bdrv_snapshot_create = qemu_rbd_snap_create,
.bdrv_snapshot_list = qemu_rbd_snap_list,
};
static void bdrv_rbd_init(void)

File diff suppressed because it is too large.


@@ -1,280 +0,0 @@
/*
* Image streaming
*
* Copyright IBM, Corp. 2011
*
* Authors:
* Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU LGPL, version 2 or later.
* See the COPYING.LIB file in the top-level directory.
*
*/
#include "trace.h"
#include "block_int.h"
enum {
/*
* Size of data buffer for populating the image file. This should be large
* enough to process multiple clusters in a single call, so that populating
* contiguous regions of the image is efficient.
*/
STREAM_BUFFER_SIZE = 512 * 1024, /* in bytes */
};
#define SLICE_TIME 100000000ULL /* ns */
typedef struct {
int64_t next_slice_time;
uint64_t slice_quota;
uint64_t dispatched;
} RateLimit;
static int64_t ratelimit_calculate_delay(RateLimit *limit, uint64_t n)
{
int64_t now = qemu_get_clock_ns(rt_clock);
if (limit->next_slice_time < now) {
limit->next_slice_time = now + SLICE_TIME;
limit->dispatched = 0;
}
if (limit->dispatched == 0 || limit->dispatched + n <= limit->slice_quota) {
limit->dispatched += n;
return 0;
} else {
limit->dispatched = n;
return limit->next_slice_time - now;
}
}
static void ratelimit_set_speed(RateLimit *limit, uint64_t speed)
{
limit->slice_quota = speed / (1000000000ULL / SLICE_TIME);
}
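With SLICE_TIME fixed at 100 ms there are ten slices per second, so ratelimit_set_speed() above grants one tenth of the per-second budget to each slice, and ratelimit_calculate_delay() reports how long to wait once that quota is spent. A minimal hypothetical check of the arithmetic, relying only on the definitions above plus <assert.h>:
/* Hypothetical illustration only; uses the RateLimit code above. */
static void ratelimit_example(void)
{
    RateLimit limit = { 0 };

    /* 1000000000 / SLICE_TIME == 10 slices per second. */
    ratelimit_set_speed(&limit, 20480);   /* 20480 sectors/s == 10 MiB/s   */
    assert(limit.slice_quota == 2048);    /* 2048 sectors (1 MiB) per slice */
}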
typedef struct StreamBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *base;
char backing_file_id[1024];
} StreamBlockJob;
static int coroutine_fn stream_populate(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
void *buf)
{
struct iovec iov = {
.iov_base = buf,
.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
};
QEMUIOVector qiov;
qemu_iovec_init_external(&qiov, &iov, 1);
/* Copy-on-read the unallocated clusters */
return bdrv_co_copy_on_readv(bs, sector_num, nb_sectors, &qiov);
}
static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
const char *base_id)
{
BlockDriverState *intermediate;
intermediate = top->backing_hd;
while (intermediate) {
BlockDriverState *unused;
/* reached base */
if (intermediate == base) {
break;
}
unused = intermediate;
intermediate = intermediate->backing_hd;
unused->backing_hd = NULL;
bdrv_delete(unused);
}
top->backing_hd = base;
}
/*
* Given an image chain: [BASE] -> [INTER1] -> [INTER2] -> [TOP]
*
* Return true if the given sector is allocated in top.
* Return false if the given sector is allocated in intermediate images.
* Return true otherwise.
*
* 'pnum' is set to the number of sectors (including and immediately following
* the specified sector) that are known to be in the same
* allocated/unallocated state.
*
*/
static int coroutine_fn is_allocated_base(BlockDriverState *top,
BlockDriverState *base,
int64_t sector_num,
int nb_sectors, int *pnum)
{
BlockDriverState *intermediate;
int ret, n;
ret = bdrv_co_is_allocated(top, sector_num, nb_sectors, &n);
if (ret) {
*pnum = n;
return ret;
}
/*
* Is the unallocated chunk [sector_num, n] also
* unallocated between base and top?
*/
intermediate = top->backing_hd;
while (intermediate != base) {
int pnum_inter;
ret = bdrv_co_is_allocated(intermediate, sector_num, nb_sectors,
&pnum_inter);
if (ret < 0) {
return ret;
} else if (ret) {
*pnum = pnum_inter;
return 0;
}
/*
* [sector_num, nb_sectors] is unallocated on top but intermediate
* might have
*
* [sector_num+x, nr_sectors] allocated.
*/
if (n > pnum_inter) {
n = pnum_inter;
}
intermediate = intermediate->backing_hd;
}
*pnum = n;
return 1;
}
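The tri-state result above is easy to misread, so here is an added annotation (not part of the original diff) summarizing the cases for a chain base <- inter <- top:
/*
 * Reading of is_allocated_base() for a chain  base <- inter <- top:
 *   sector allocated in top                 -> returns 1: already streamed
 *   sector allocated only in 'inter'        -> returns 0: must be copied
 *   sector not allocated anywhere above base-> returns 1: data, if any,
 *                                              stays in base, nothing to copy
 * stream_run() below therefore calls stream_populate() only when 0 is
 * returned.
 */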
static void coroutine_fn stream_run(void *opaque)
{
StreamBlockJob *s = opaque;
BlockDriverState *bs = s->common.bs;
BlockDriverState *base = s->base;
int64_t sector_num, end;
int ret = 0;
int n = 0;
void *buf;
s->common.len = bdrv_getlength(bs);
if (s->common.len < 0) {
block_job_complete(&s->common, s->common.len);
return;
}
end = s->common.len >> BDRV_SECTOR_BITS;
buf = qemu_blockalign(bs, STREAM_BUFFER_SIZE);
/* Turn on copy-on-read for the whole block device so that guest read
* requests help us make progress. Only do this when copying the entire
* backing chain since the copy-on-read operation does not take base into
* account.
*/
if (!base) {
bdrv_enable_copy_on_read(bs);
}
for (sector_num = 0; sector_num < end; sector_num += n) {
uint64_t delay_ns = 0;
wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that qemu_aio_flush() returns.
*/
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
ret = is_allocated_base(bs, base, sector_num,
STREAM_BUFFER_SIZE / BDRV_SECTOR_SIZE, &n);
trace_stream_one_iteration(s, sector_num, n, ret);
if (ret == 0) {
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
if (delay_ns > 0) {
goto wait;
}
}
ret = stream_populate(bs, sector_num, n, buf);
}
if (ret < 0) {
break;
}
ret = 0;
/* Publish progress */
s->common.offset += n * BDRV_SECTOR_SIZE;
}
if (!base) {
bdrv_disable_copy_on_read(bs);
}
if (!block_job_is_cancelled(&s->common) && sector_num == end && ret == 0) {
const char *base_id = NULL, *base_fmt = NULL;
if (base) {
base_id = s->backing_file_id;
if (base->drv) {
base_fmt = base->drv->format_name;
}
}
ret = bdrv_change_backing_file(bs, base_id, base_fmt);
close_unused_images(bs, base, base_id);
}
qemu_vfree(buf);
block_job_complete(&s->common, ret);
}
static void stream_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
StreamBlockJob *s = container_of(job, StreamBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE);
}
static BlockJobType stream_job_type = {
.instance_size = sizeof(StreamBlockJob),
.job_type = "stream",
.set_speed = stream_set_speed,
};
void stream_start(BlockDriverState *bs, BlockDriverState *base,
const char *base_id, int64_t speed,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
StreamBlockJob *s;
s = block_job_create(&stream_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
s->base = base;
if (base_id) {
pstrcpy(s->backing_file_id, sizeof(s->backing_file_id), base_id);
}
s->common.co = qemu_coroutine_create(stream_run);
trace_stream_start(bs, base, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}
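For orientation, a minimal hypothetical caller of the stream_start() entry point above; start_stream_example() and stream_complete_cb() are illustrative names, and only stream_start(), bdrv_find() and the Error helpers are assumed to exist in this tree:
/* Hypothetical caller sketch, not part of the diff. */
static void stream_complete_cb(void *opaque, int ret)
{
    /* illustrative completion handler: nothing to clean up here */
}

static void start_stream_example(const char *device)
{
    Error *local_err = NULL;
    BlockDriverState *bs = bdrv_find(device);

    if (!bs) {
        return;
    }
    /* base == NULL streams the entire backing chain into 'device'. */
    stream_start(bs, NULL, NULL, 0, stream_complete_cb, bs, &local_err);
    if (local_err) {
        error_free(local_err);
    }
}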


@@ -1,7 +1,7 @@
/*
* Block driver for the Virtual Disk Image (VDI) format
*
* Copyright (c) 2009, 2012 Stefan Weil
* Copyright (c) 2009 Stefan Weil
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -52,7 +52,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
#if defined(CONFIG_UUID)
#include <uuid/uuid.h>
@@ -115,13 +114,8 @@ void uuid_unparse(const uuid_t uu, char *out);
*/
#define VDI_TEXT "<<< QEMU VM Virtual Disk Image >>>\n"
/* A never-allocated block; semantically arbitrary content. */
#define VDI_UNALLOCATED 0xffffffffU
/* A discarded (no longer allocated) block; semantically zero-filled. */
#define VDI_DISCARDED 0xfffffffeU
#define VDI_IS_ALLOCATED(X) ((X) < VDI_DISCARDED)
/* Unallocated blocks use this index (no need to convert endianness). */
#define VDI_UNALLOCATED UINT32_MAX
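One side of this hunk (the code being removed) distinguishes never-allocated block-map entries from discarded ones; a tiny hypothetical check of those macros, assuming <assert.h>:
/* Hypothetical illustration of the removed VDI block-map macros. */
static void vdi_bmap_entry_example(void)
{
    assert(!VDI_IS_ALLOCATED(VDI_UNALLOCATED));  /* 0xffffffff: never written */
    assert(!VDI_IS_ALLOCATED(VDI_DISCARDED));    /* 0xfffffffe: freed again   */
    assert(VDI_IS_ALLOCATED(0));                 /* any real block index      */
}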
#if !defined(CONFIG_UUID)
void uuid_generate(uuid_t out)
@@ -143,6 +137,29 @@ void uuid_unparse(const uuid_t uu, char *out)
}
#endif
typedef struct {
BlockDriverAIOCB common;
int64_t sector_num;
QEMUIOVector *qiov;
uint8_t *buf;
/* Total number of sectors. */
int nb_sectors;
/* Number of sectors for current AIO. */
int n_sectors;
/* New allocated block map entry. */
uint32_t bmap_first;
uint32_t bmap_last;
/* Buffer for new allocated block. */
void *block_buffer;
void *orig_buf;
bool is_write;
int header_modified;
BlockDriverAIOCB *hd_aiocb;
struct iovec hd_iov;
QEMUIOVector hd_qiov;
QEMUBH *bh;
} VdiAIOCB;
typedef struct {
char text[0x40];
uint32_t signature;
@@ -181,8 +198,6 @@ typedef struct {
uint32_t bmap_sector;
/* VDI header (converted to host endianness). */
VdiHeader header;
Error *migration_blocker;
} BDRVVdiState;
/* Change UUID from little endian (IPRT = VirtualBox format) to big endian
@@ -286,16 +301,16 @@ static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res)
uint32_t *bmap;
logout("\n");
bmap = g_malloc(s->header.blocks_in_image * sizeof(uint32_t));
bmap = qemu_malloc(s->header.blocks_in_image * sizeof(uint32_t));
memset(bmap, 0xff, s->header.blocks_in_image * sizeof(uint32_t));
/* Check block map and value of blocks_allocated. */
for (block = 0; block < s->header.blocks_in_image; block++) {
uint32_t bmap_entry = le32_to_cpu(s->bmap[block]);
if (VDI_IS_ALLOCATED(bmap_entry)) {
if (bmap_entry != VDI_UNALLOCATED) {
if (bmap_entry < s->header.blocks_in_image) {
blocks_allocated++;
if (!VDI_IS_ALLOCATED(bmap[bmap_entry])) {
if (bmap[bmap_entry] == VDI_UNALLOCATED) {
bmap[bmap_entry] = bmap_entry;
} else {
fprintf(stderr, "ERROR: block index %" PRIu32
@@ -316,7 +331,7 @@ static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res)
res->corruptions++;
}
g_free(bmap);
qemu_free(bmap);
return 0;
}
@@ -428,29 +443,23 @@ static int vdi_open(BlockDriverState *bs, int flags)
bmap_size = header.blocks_in_image * sizeof(uint32_t);
bmap_size = (bmap_size + SECTOR_SIZE - 1) / SECTOR_SIZE;
if (bmap_size > 0) {
s->bmap = g_malloc(bmap_size * SECTOR_SIZE);
s->bmap = qemu_malloc(bmap_size * SECTOR_SIZE);
}
if (bdrv_read(bs->file, s->bmap_sector, (uint8_t *)s->bmap, bmap_size) < 0) {
goto fail_free_bmap;
}
/* Disable migration when vdi images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vdi", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail_free_bmap:
g_free(s->bmap);
qemu_free(s->bmap);
fail:
return -1;
}
static int coroutine_fn vdi_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
static int vdi_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
/* TODO: Check for too large sector_num (in bdrv_is_allocated or here). */
BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
@@ -463,153 +472,361 @@ static int coroutine_fn vdi_co_is_allocated(BlockDriverState *bs,
n_sectors = nb_sectors;
}
*pnum = n_sectors;
return VDI_IS_ALLOCATED(bmap_entry);
return bmap_entry != VDI_UNALLOCATED;
}
static int vdi_co_read(BlockDriverState *bs,
int64_t sector_num, uint8_t *buf, int nb_sectors)
static void vdi_aio_cancel(BlockDriverAIOCB *blockacb)
{
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
int ret = 0;
/* TODO: This code is untested. How can I get it executed? */
VdiAIOCB *acb = container_of(blockacb, VdiAIOCB, common);
logout("\n");
while (ret >= 0 && nb_sectors > 0) {
block_index = sector_num / s->block_sectors;
sector_in_block = sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > nb_sectors) {
n_sectors = nb_sectors;
}
logout("will read %u sectors starting at sector %" PRIu64 "\n",
n_sectors, sector_num);
/* prepare next AIO request */
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (!VDI_IS_ALLOCATED(bmap_entry)) {
/* Block not allocated, return zeros, no need to wait. */
memset(buf, 0, n_sectors * SECTOR_SIZE);
ret = 0;
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
ret = bdrv_read(bs->file, offset, buf, n_sectors);
}
logout("%u sectors read\n", n_sectors);
nb_sectors -= n_sectors;
sector_num += n_sectors;
buf += n_sectors * SECTOR_SIZE;
if (acb->hd_aiocb) {
bdrv_aio_cancel(acb->hd_aiocb);
}
return ret;
qemu_aio_release(acb);
}
static int vdi_co_write(BlockDriverState *bs,
int64_t sector_num, const uint8_t *buf, int nb_sectors)
static AIOPool vdi_aio_pool = {
.aiocb_size = sizeof(VdiAIOCB),
.cancel = vdi_aio_cancel,
};
static VdiAIOCB *vdi_aio_setup(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int is_write)
{
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
uint32_t bmap_first = VDI_UNALLOCATED;
uint32_t bmap_last = VDI_UNALLOCATED;
uint8_t *block = NULL;
int ret = 0;
VdiAIOCB *acb;
logout("\n");
logout("%p, %" PRId64 ", %p, %d, %p, %p, %d\n",
bs, sector_num, qiov, nb_sectors, cb, opaque, is_write);
while (ret >= 0 && nb_sectors > 0) {
block_index = sector_num / s->block_sectors;
sector_in_block = sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > nb_sectors) {
n_sectors = nb_sectors;
}
acb = qemu_aio_get(&vdi_aio_pool, bs, cb, opaque);
if (acb) {
acb->hd_aiocb = NULL;
acb->sector_num = sector_num;
acb->qiov = qiov;
acb->is_write = is_write;
logout("will write %u sectors starting at sector %" PRIu64 "\n",
n_sectors, sector_num);
/* prepare next AIO request */
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (!VDI_IS_ALLOCATED(bmap_entry)) {
/* Allocate new block and write to it. */
uint64_t offset;
bmap_entry = s->header.blocks_allocated;
s->bmap[block_index] = cpu_to_le32(bmap_entry);
s->header.blocks_allocated++;
offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors;
if (block == NULL) {
block = g_malloc(s->block_size);
bmap_first = block_index;
if (qiov->niov > 1) {
acb->buf = qemu_blockalign(bs, qiov->size);
acb->orig_buf = acb->buf;
if (is_write) {
qemu_iovec_to_buffer(qiov, acb->buf);
}
bmap_last = block_index;
/* Copy data to be written to new block and zero unused parts. */
memset(block, 0, sector_in_block * SECTOR_SIZE);
memcpy(block + sector_in_block * SECTOR_SIZE,
buf, n_sectors * SECTOR_SIZE);
memset(block + (sector_in_block + n_sectors) * SECTOR_SIZE, 0,
(s->block_sectors - n_sectors - sector_in_block) * SECTOR_SIZE);
ret = bdrv_write(bs->file, offset, block, s->block_sectors);
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
ret = bdrv_write(bs->file, offset, buf, n_sectors);
acb->buf = (uint8_t *)qiov->iov->iov_base;
}
acb->nb_sectors = nb_sectors;
acb->n_sectors = 0;
acb->bmap_first = VDI_UNALLOCATED;
acb->bmap_last = VDI_UNALLOCATED;
acb->block_buffer = NULL;
acb->header_modified = 0;
}
return acb;
}
nb_sectors -= n_sectors;
sector_num += n_sectors;
buf += n_sectors * SECTOR_SIZE;
static int vdi_schedule_bh(QEMUBHFunc *cb, VdiAIOCB *acb)
{
logout("\n");
logout("%u sectors written\n", n_sectors);
if (acb->bh) {
return -EIO;
}
logout("finished data write\n");
acb->bh = qemu_bh_new(cb, acb);
if (!acb->bh) {
return -EIO;
}
qemu_bh_schedule(acb->bh);
return 0;
}
static void vdi_aio_read_cb(void *opaque, int ret);
static void vdi_aio_write_cb(void *opaque, int ret);
static void vdi_aio_rw_bh(void *opaque)
{
VdiAIOCB *acb = opaque;
logout("\n");
qemu_bh_delete(acb->bh);
acb->bh = NULL;
if (acb->is_write) {
vdi_aio_write_cb(opaque, 0);
} else {
vdi_aio_read_cb(opaque, 0);
}
}
static void vdi_aio_read_cb(void *opaque, int ret)
{
VdiAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
logout("%u sectors read\n", acb->n_sectors);
acb->hd_aiocb = NULL;
if (ret < 0) {
return ret;
goto done;
}
if (block) {
/* One or more new blocks were allocated. */
VdiHeader *header = (VdiHeader *) block;
uint8_t *base;
uint64_t offset;
acb->nb_sectors -= acb->n_sectors;
logout("now writing modified header\n");
assert(VDI_IS_ALLOCATED(bmap_first));
*header = s->header;
vdi_header_to_le(header);
ret = bdrv_write(bs->file, 0, block, 1);
g_free(block);
block = NULL;
if (acb->nb_sectors == 0) {
/* request completed */
ret = 0;
goto done;
}
acb->sector_num += acb->n_sectors;
acb->buf += acb->n_sectors * SECTOR_SIZE;
block_index = acb->sector_num / s->block_sectors;
sector_in_block = acb->sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > acb->nb_sectors) {
n_sectors = acb->nb_sectors;
}
logout("will read %u sectors starting at sector %" PRIu64 "\n",
n_sectors, acb->sector_num);
/* prepare next AIO request */
acb->n_sectors = n_sectors;
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (bmap_entry == VDI_UNALLOCATED) {
/* Block not allocated, return zeros, no need to wait. */
memset(acb->buf, 0, n_sectors * SECTOR_SIZE);
ret = vdi_schedule_bh(vdi_aio_rw_bh, acb);
if (ret < 0) {
return ret;
goto done;
}
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = n_sectors * SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_readv(bs->file, offset, &acb->hd_qiov,
n_sectors, vdi_aio_read_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
}
return;
done:
if (acb->qiov->niov > 1) {
qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
qemu_vfree(acb->orig_buf);
}
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
logout("now writing modified block map entry %u...%u\n",
bmap_first, bmap_last);
/* Write modified sectors from block map. */
bmap_first /= (SECTOR_SIZE / sizeof(uint32_t));
bmap_last /= (SECTOR_SIZE / sizeof(uint32_t));
n_sectors = bmap_last - bmap_first + 1;
offset = s->bmap_sector + bmap_first;
base = ((uint8_t *)&s->bmap[0]) + bmap_first * SECTOR_SIZE;
logout("will write %u block map sectors starting from entry %u\n",
n_sectors, bmap_first);
ret = bdrv_write(bs->file, offset, base, n_sectors);
static BlockDriverAIOCB *vdi_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
VdiAIOCB *acb;
int ret;
logout("\n");
acb = vdi_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
if (!acb) {
return NULL;
}
return ret;
ret = vdi_schedule_bh(vdi_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
return &acb->common;
}
static void vdi_aio_write_cb(void *opaque, int ret)
{
VdiAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
acb->hd_aiocb = NULL;
if (ret < 0) {
goto done;
}
acb->nb_sectors -= acb->n_sectors;
acb->sector_num += acb->n_sectors;
acb->buf += acb->n_sectors * SECTOR_SIZE;
if (acb->nb_sectors == 0) {
logout("finished data write\n");
acb->n_sectors = 0;
if (acb->header_modified) {
VdiHeader *header = acb->block_buffer;
logout("now writing modified header\n");
assert(acb->bmap_first != VDI_UNALLOCATED);
*header = s->header;
vdi_header_to_le(header);
acb->header_modified = 0;
acb->hd_iov.iov_base = acb->block_buffer;
acb->hd_iov.iov_len = SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file, 0, &acb->hd_qiov, 1,
vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
return;
} else if (acb->bmap_first != VDI_UNALLOCATED) {
/* One or more new blocks were allocated. */
uint64_t offset;
uint32_t bmap_first;
uint32_t bmap_last;
qemu_free(acb->block_buffer);
acb->block_buffer = NULL;
bmap_first = acb->bmap_first;
bmap_last = acb->bmap_last;
logout("now writing modified block map entry %u...%u\n",
bmap_first, bmap_last);
/* Write modified sectors from block map. */
bmap_first /= (SECTOR_SIZE / sizeof(uint32_t));
bmap_last /= (SECTOR_SIZE / sizeof(uint32_t));
n_sectors = bmap_last - bmap_first + 1;
offset = s->bmap_sector + bmap_first;
acb->bmap_first = VDI_UNALLOCATED;
acb->hd_iov.iov_base = (void *)((uint8_t *)&s->bmap[0] +
bmap_first * SECTOR_SIZE);
acb->hd_iov.iov_len = n_sectors * SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
logout("will write %u block map sectors starting from entry %u\n",
n_sectors, bmap_first);
acb->hd_aiocb = bdrv_aio_writev(bs->file, offset, &acb->hd_qiov,
n_sectors, vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
return;
}
ret = 0;
goto done;
}
logout("%u sectors written\n", acb->n_sectors);
block_index = acb->sector_num / s->block_sectors;
sector_in_block = acb->sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > acb->nb_sectors) {
n_sectors = acb->nb_sectors;
}
logout("will write %u sectors starting at sector %" PRIu64 "\n",
n_sectors, acb->sector_num);
/* prepare next AIO request */
acb->n_sectors = n_sectors;
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (bmap_entry == VDI_UNALLOCATED) {
/* Allocate new block and write to it. */
uint64_t offset;
uint8_t *block;
bmap_entry = s->header.blocks_allocated;
s->bmap[block_index] = cpu_to_le32(bmap_entry);
s->header.blocks_allocated++;
offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors;
block = acb->block_buffer;
if (block == NULL) {
block = qemu_mallocz(s->block_size);
acb->block_buffer = block;
acb->bmap_first = block_index;
assert(!acb->header_modified);
acb->header_modified = 1;
}
acb->bmap_last = block_index;
memcpy(block + sector_in_block * SECTOR_SIZE,
acb->buf, n_sectors * SECTOR_SIZE);
acb->hd_iov.iov_base = (void *)block;
acb->hd_iov.iov_len = s->block_size;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file, offset,
&acb->hd_qiov, s->block_sectors,
vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = n_sectors * SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file, offset, &acb->hd_qiov,
n_sectors, vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
}
return;
done:
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *vdi_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
VdiAIOCB *acb;
int ret;
logout("\n");
acb = vdi_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
if (!acb) {
return NULL;
}
ret = vdi_schedule_bh(vdi_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
return &acb->common;
}
static int vdi_create(const char *filename, QEMUOptionParameter *options)
@@ -689,7 +906,7 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
bmap = NULL;
if (bmap_size > 0) {
bmap = (uint32_t *)g_malloc0(bmap_size);
bmap = (uint32_t *)qemu_mallocz(bmap_size);
}
for (i = 0; i < blocks; i++) {
if (image_type == VDI_TYPE_STATIC) {
@@ -701,7 +918,7 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
if (write(fd, bmap, bmap_size) < 0) {
result = -errno;
}
g_free(bmap);
qemu_free(bmap);
if (image_type == VDI_TYPE_STATIC) {
if (ftruncate(fd, sizeof(header) + bmap_size + blocks * block_size)) {
result = -errno;
@@ -717,14 +934,15 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
static void vdi_close(BlockDriverState *bs)
{
BDRVVdiState *s = bs->opaque;
g_free(s->bmap);
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
static int vdi_flush(BlockDriverState *bs)
{
logout("\n");
return bdrv_flush(bs->file);
}
static QEMUOptionParameter vdi_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -757,12 +975,13 @@ static BlockDriver bdrv_vdi = {
.bdrv_open = vdi_open,
.bdrv_close = vdi_close,
.bdrv_create = vdi_create,
.bdrv_co_is_allocated = vdi_co_is_allocated,
.bdrv_flush = vdi_flush,
.bdrv_is_allocated = vdi_is_allocated,
.bdrv_make_empty = vdi_make_empty,
.bdrv_read = vdi_co_read,
.bdrv_aio_readv = vdi_aio_readv,
#if defined(CONFIG_VDI_WRITE)
.bdrv_write = vdi_co_write,
.bdrv_aio_writev = vdi_aio_writev,
#endif
.bdrv_get_info = vdi_get_info,


@@ -26,16 +26,9 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
#include <zlib.h>
#define VMDK3_MAGIC (('C' << 24) | ('O' << 16) | ('W' << 8) | 'D')
#define VMDK4_MAGIC (('K' << 24) | ('D' << 16) | ('M' << 8) | 'V')
#define VMDK4_COMPRESSION_DEFLATE 1
#define VMDK4_FLAG_RGD (1 << 1)
#define VMDK4_FLAG_COMPRESS (1 << 16)
#define VMDK4_FLAG_MARKER (1 << 17)
#define VMDK4_GD_AT_END 0xffffffffffffffffULL
typedef struct {
uint32_t version;
@@ -63,16 +56,13 @@ typedef struct {
int64_t grain_offset;
char filler[1];
char check_bytes[4];
uint16_t compressAlgorithm;
} QEMU_PACKED VMDK4Header;
} __attribute__((packed)) VMDK4Header;
#define L2_CACHE_SIZE 16
typedef struct VmdkExtent {
BlockDriverState *file;
bool flat;
bool compressed;
bool has_marker;
int64_t sectors;
int64_t end_sector;
int64_t flat_start_offset;
@@ -92,14 +82,12 @@ typedef struct VmdkExtent {
} VmdkExtent;
typedef struct BDRVVmdkState {
CoMutex lock;
int desc_offset;
bool cid_updated;
uint32_t parent_cid;
int num_extents;
/* Extent array with num_extents entries, ascend ordered by address */
VmdkExtent *extents;
Error *migration_blocker;
} BDRVVmdkState;
typedef struct VmdkMetaData {
@@ -110,19 +98,6 @@ typedef struct VmdkMetaData {
int valid;
} VmdkMetaData;
typedef struct VmdkGrainMarker {
uint64_t lba;
uint32_t size;
uint8_t data[0];
} VmdkGrainMarker;
enum {
MARKER_END_OF_STREAM = 0,
MARKER_GRAIN_TABLE = 1,
MARKER_GRAIN_DIRECTORY = 2,
MARKER_FOOTER = 3,
};
static int vmdk_probe(const uint8_t *buf, int buf_size, const char *filename)
{
uint32_t magic;
@@ -190,42 +165,24 @@ static void vmdk_free_extents(BlockDriverState *bs)
{
int i;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *e;
for (i = 0; i < s->num_extents; i++) {
e = &s->extents[i];
g_free(e->l1_table);
g_free(e->l2_cache);
g_free(e->l1_backup_table);
if (e->file != bs->file) {
bdrv_delete(e->file);
}
qemu_free(s->extents[i].l1_table);
qemu_free(s->extents[i].l2_cache);
qemu_free(s->extents[i].l1_backup_table);
}
g_free(s->extents);
}
static void vmdk_free_last_extent(BlockDriverState *bs)
{
BDRVVmdkState *s = bs->opaque;
if (s->num_extents == 0) {
return;
}
s->num_extents--;
s->extents = g_realloc(s->extents, s->num_extents * sizeof(VmdkExtent));
qemu_free(s->extents);
}
static uint32_t vmdk_read_cid(BlockDriverState *bs, int parent)
{
char desc[DESC_SIZE];
uint32_t cid = 0xffffffff;
uint32_t cid;
const char *p_name, *cid_str;
size_t cid_str_size;
BDRVVmdkState *s = bs->opaque;
int ret;
ret = bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
if (bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE) != DESC_SIZE) {
return 0;
}
@@ -237,7 +194,6 @@ static uint32_t vmdk_read_cid(BlockDriverState *bs, int parent)
cid_str_size = sizeof("CID");
}
desc[DESC_SIZE - 1] = '\0';
p_name = strstr(desc, cid_str);
if (p_name != NULL) {
p_name += cid_str_size;
@@ -252,19 +208,13 @@ static int vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
char desc[DESC_SIZE], tmp_desc[DESC_SIZE];
char *p_name, *tmp_str;
BDRVVmdkState *s = bs->opaque;
int ret;
ret = bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
return ret;
memset(desc, 0, sizeof(desc));
if (bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE) != DESC_SIZE) {
return -EIO;
}
desc[DESC_SIZE - 1] = '\0';
tmp_str = strstr(desc, "parentCID");
if (tmp_str == NULL) {
return -EINVAL;
}
pstrcpy(tmp_desc, sizeof(tmp_desc), tmp_str);
p_name = strstr(desc, "CID");
if (p_name != NULL) {
@@ -273,11 +223,9 @@ static int vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
pstrcat(desc, sizeof(desc), tmp_desc);
}
ret = bdrv_pwrite_sync(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
return ret;
if (bdrv_pwrite_sync(bs->file, s->desc_offset, desc, DESC_SIZE) < 0) {
return -EIO;
}
return 0;
}
@@ -305,12 +253,10 @@ static int vmdk_parent_open(BlockDriverState *bs)
char *p_name;
char desc[DESC_SIZE + 1];
BDRVVmdkState *s = bs->opaque;
int ret;
desc[DESC_SIZE] = '\0';
ret = bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE) != DESC_SIZE) {
return -1;
}
p_name = strstr(desc, "parentFileNameHint");
@@ -320,10 +266,10 @@ static int vmdk_parent_open(BlockDriverState *bs)
p_name += sizeof("parentFileNameHint") + 1;
end_name = strchr(p_name, '\"');
if (end_name == NULL) {
return -EINVAL;
return -1;
}
if ((end_name - p_name) > sizeof(bs->backing_file) - 1) {
return -EINVAL;
return -1;
}
pstrcpy(bs->backing_file, end_name - p_name + 1, p_name);
@@ -343,7 +289,7 @@ static VmdkExtent *vmdk_add_extent(BlockDriverState *bs,
VmdkExtent *extent;
BDRVVmdkState *s = bs->opaque;
s->extents = g_realloc(s->extents,
s->extents = qemu_realloc(s->extents,
(s->num_extents + 1) * sizeof(VmdkExtent));
extent = &s->extents[s->num_extents];
s->num_extents++;
@@ -375,7 +321,7 @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent)
/* read the L1 table */
l1_size = extent->l1_size * sizeof(uint32_t);
extent->l1_table = g_malloc(l1_size);
extent->l1_table = qemu_malloc(l1_size);
ret = bdrv_pread(extent->file,
extent->l1_table_offset,
extent->l1_table,
@@ -388,7 +334,7 @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent)
}
if (extent->l1_backup_table_offset) {
extent->l1_backup_table = g_malloc(l1_size);
extent->l1_backup_table = qemu_malloc(l1_size);
ret = bdrv_pread(extent->file,
extent->l1_backup_table_offset,
extent->l1_backup_table,
@@ -402,27 +348,27 @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent)
}
extent->l2_cache =
g_malloc(extent->l2_size * L2_CACHE_SIZE * sizeof(uint32_t));
qemu_malloc(extent->l2_size * L2_CACHE_SIZE * sizeof(uint32_t));
return 0;
fail_l1b:
g_free(extent->l1_backup_table);
qemu_free(extent->l1_backup_table);
fail_l1:
g_free(extent->l1_table);
qemu_free(extent->l1_table);
return ret;
}
static int vmdk_open_vmdk3(BlockDriverState *bs,
BlockDriverState *file,
int flags)
static int vmdk_open_vmdk3(BlockDriverState *bs, int flags)
{
int ret;
uint32_t magic;
VMDK3Header header;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent;
ret = bdrv_pread(file, sizeof(magic), &header, sizeof(header));
s->desc_offset = 0x200;
ret = bdrv_pread(bs->file, sizeof(magic), &header, sizeof(header));
if (ret < 0) {
return ret;
goto fail;
}
extent = vmdk_add_extent(bs,
bs->file, false,
@@ -432,106 +378,58 @@ static int vmdk_open_vmdk3(BlockDriverState *bs,
le32_to_cpu(header.granularity));
ret = vmdk_init_tables(bs, extent);
if (ret) {
/* free extent allocated by vmdk_add_extent */
vmdk_free_last_extent(bs);
/* vmdk_init_tables cleans up on fail, so only free allocation of
* vmdk_add_extent here. */
goto fail;
}
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
int64_t desc_offset);
static int vmdk_open_vmdk4(BlockDriverState *bs,
BlockDriverState *file,
int flags)
static int vmdk_open_vmdk4(BlockDriverState *bs, int flags)
{
int ret;
uint32_t magic;
uint32_t l1_size, l1_entry_sectors;
VMDK4Header header;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent;
int64_t l1_backup_offset = 0;
ret = bdrv_pread(file, sizeof(magic), &header, sizeof(header));
s->desc_offset = 0x200;
ret = bdrv_pread(bs->file, sizeof(magic), &header, sizeof(header));
if (ret < 0) {
return ret;
goto fail;
}
if (header.capacity == 0 && header.desc_offset) {
return vmdk_open_desc_file(bs, flags, header.desc_offset << 9);
}
if (le64_to_cpu(header.gd_offset) == VMDK4_GD_AT_END) {
/*
* The footer takes precedence over the header, so read it in. The
* footer starts at offset -1024 from the end: One sector for the
* footer, and another one for the end-of-stream marker.
*/
struct {
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED footer_marker;
uint32_t magic;
VMDK4Header header;
uint8_t pad[512 - 4 - sizeof(VMDK4Header)];
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED eos_marker;
} QEMU_PACKED footer;
ret = bdrv_pread(file,
bs->file->total_sectors * 512 - 1536,
&footer, sizeof(footer));
if (ret < 0) {
return ret;
}
/* Some sanity checks for the footer */
if (be32_to_cpu(footer.magic) != VMDK4_MAGIC ||
le32_to_cpu(footer.footer_marker.size) != 0 ||
le32_to_cpu(footer.footer_marker.type) != MARKER_FOOTER ||
le64_to_cpu(footer.eos_marker.val) != 0 ||
le32_to_cpu(footer.eos_marker.size) != 0 ||
le32_to_cpu(footer.eos_marker.type) != MARKER_END_OF_STREAM)
{
return -EINVAL;
}
header = footer.header;
}
l1_entry_sectors = le32_to_cpu(header.num_gtes_per_gte)
* le64_to_cpu(header.granularity);
if (l1_entry_sectors == 0) {
return -EINVAL;
}
l1_size = (le64_to_cpu(header.capacity) + l1_entry_sectors - 1)
/ l1_entry_sectors;
if (le32_to_cpu(header.flags) & VMDK4_FLAG_RGD) {
l1_backup_offset = le64_to_cpu(header.rgd_offset) << 9;
}
extent = vmdk_add_extent(bs, file, false,
extent = vmdk_add_extent(bs, bs->file, false,
le64_to_cpu(header.capacity),
le64_to_cpu(header.gd_offset) << 9,
l1_backup_offset,
le64_to_cpu(header.rgd_offset) << 9,
l1_size,
le32_to_cpu(header.num_gtes_per_gte),
le64_to_cpu(header.granularity));
extent->compressed =
le16_to_cpu(header.compressAlgorithm) == VMDK4_COMPRESSION_DEFLATE;
extent->has_marker = le32_to_cpu(header.flags) & VMDK4_FLAG_MARKER;
if (extent->l1_entry_sectors <= 0) {
ret = -EINVAL;
goto fail;
}
/* try to open parent images, if exist */
ret = vmdk_parent_open(bs);
if (ret) {
goto fail;
}
s->parent_cid = vmdk_read_cid(bs, 1);
ret = vmdk_init_tables(bs, extent);
if (ret) {
/* free extent allocated by vmdk_add_extent */
vmdk_free_last_extent(bs);
goto fail;
}
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
@@ -562,31 +460,6 @@ static int vmdk_parse_description(const char *desc, const char *opt_name,
return 0;
}
/* Open an extent file and append to bs array */
static int vmdk_open_sparse(BlockDriverState *bs,
BlockDriverState *file,
int flags)
{
uint32_t magic;
if (bdrv_pread(file, 0, &magic, sizeof(magic)) != sizeof(magic)) {
return -EIO;
}
magic = be32_to_cpu(magic);
switch (magic) {
case VMDK3_MAGIC:
return vmdk_open_vmdk3(bs, file, flags);
break;
case VMDK4_MAGIC:
return vmdk_open_vmdk4(bs, file, flags);
break;
default:
return -EINVAL;
break;
}
}
static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
const char *desc_file_path)
{
@@ -597,8 +470,6 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
const char *p = desc;
int64_t sectors = 0;
int64_t flat_offset;
char extent_path[PATH_MAX];
BlockDriverState *extent_file;
while (*p) {
/* parse extent line:
@@ -633,29 +504,24 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
goto next_line;
}
path_combine(extent_path, sizeof(extent_path),
desc_file_path, fname);
ret = bdrv_file_open(&extent_file, extent_path, bs->open_flags);
if (ret) {
return ret;
}
/* save to extents array */
if (!strcmp(type, "FLAT")) {
/* FLAT extent */
char extent_path[PATH_MAX];
BlockDriverState *extent_file;
VmdkExtent *extent;
extent = vmdk_add_extent(bs, extent_file, true, sectors,
0, 0, 0, 0, sectors);
extent->flat_start_offset = flat_offset << 9;
} else if (!strcmp(type, "SPARSE")) {
/* SPARSE extent */
ret = vmdk_open_sparse(bs, extent_file, bs->open_flags);
path_combine(extent_path, sizeof(extent_path),
desc_file_path, fname);
ret = bdrv_file_open(&extent_file, extent_path, bs->open_flags);
if (ret) {
bdrv_delete(extent_file);
return ret;
}
extent = vmdk_add_extent(bs, extent_file, true, sectors,
0, 0, 0, 0, sectors);
extent->flat_start_offset = flat_offset;
} else {
/* SPARSE extent, not supported for now */
fprintf(stderr,
"VMDK: Not supported extent type \"%s\""".\n", type);
return -ENOTSUP;
@@ -670,15 +536,14 @@ next_line:
return 0;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
int64_t desc_offset)
static int vmdk_open_desc_file(BlockDriverState *bs, int flags)
{
int ret;
char buf[2048];
char ct[128];
BDRVVmdkState *s = bs->opaque;
ret = bdrv_pread(bs->file, desc_offset, buf, sizeof(buf));
ret = bdrv_pread(bs->file, 0, buf, sizeof(buf));
if (ret < 0) {
return ret;
}
@@ -686,49 +551,42 @@ static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
if (vmdk_parse_description(buf, "createType", ct, sizeof(ct))) {
return -EINVAL;
}
if (strcmp(ct, "monolithicFlat") &&
strcmp(ct, "twoGbMaxExtentSparse") &&
strcmp(ct, "twoGbMaxExtentFlat")) {
if (strcmp(ct, "monolithicFlat")) {
fprintf(stderr,
"VMDK: Not supported image type \"%s\""".\n", ct);
return -ENOTSUP;
}
s->desc_offset = 0;
return vmdk_parse_extents(buf, bs, bs->file->filename);
ret = vmdk_parse_extents(buf, bs, bs->file->filename);
if (ret) {
return ret;
}
/* try to open parent images, if exist */
if (vmdk_parent_open(bs)) {
qemu_free(s->extents);
return -EINVAL;
}
s->parent_cid = vmdk_read_cid(bs, 1);
return 0;
}
static int vmdk_open(BlockDriverState *bs, int flags)
{
int ret;
BDRVVmdkState *s = bs->opaque;
uint32_t magic;
if (vmdk_open_sparse(bs, bs->file, flags) == 0) {
s->desc_offset = 0x200;
if (bdrv_pread(bs->file, 0, &magic, sizeof(magic)) != sizeof(magic)) {
return -EIO;
}
magic = be32_to_cpu(magic);
if (magic == VMDK3_MAGIC) {
return vmdk_open_vmdk3(bs, flags);
} else if (magic == VMDK4_MAGIC) {
return vmdk_open_vmdk4(bs, flags);
} else {
ret = vmdk_open_desc_file(bs, flags, 0);
if (ret) {
goto fail;
}
return vmdk_open_desc_file(bs, flags);
}
/* try to open parent images, if exist */
ret = vmdk_parent_open(bs);
if (ret) {
goto fail;
}
s->parent_cid = vmdk_read_cid(bs, 1);
qemu_co_mutex_init(&s->lock);
/* Disable migration when VMDK images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vmdk", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
static int get_whole_cluster(BlockDriverState *bs,
@@ -814,7 +672,6 @@ static int get_cluster_offset(BlockDriverState *bs,
return 0;
}
offset -= (extent->end_sector - extent->sectors) * SECTOR_SIZE;
l1_index = (offset >> 9) / extent->l1_entry_sectors;
if (l1_index >= extent->l1_size) {
return -1;
@@ -867,12 +724,10 @@ static int get_cluster_offset(BlockDriverState *bs,
/* Avoid the L2 tables update for the images that have snapshots. */
*cluster_offset = bdrv_getlength(extent->file);
if (!extent->compressed) {
bdrv_truncate(
extent->file,
*cluster_offset + (extent->cluster_sectors << 9)
);
}
bdrv_truncate(
extent->file,
*cluster_offset + (extent->cluster_sectors << 9)
);
*cluster_offset >>= 9;
tmp = cpu_to_le32(*cluster_offset);
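The table lookups in get_cluster_offset() above follow from the geometry written by vmdk_create_extent() further down (granularity = 128 sectors, num_gtes_per_gte = 512). A hypothetical worked example of that addressing; variable names are illustrative and <stdint.h>/<assert.h> are assumed:
/* Hypothetical: locate the tables covering guest byte offset 100 MiB. */
static void vmdk_addressing_example(void)
{
    uint64_t granularity = 128;                            /* sectors per grain       */
    uint64_t gtes_per_gt = 512;                            /* entries per grain table */
    uint64_t l1_entry_sectors = gtes_per_gt * granularity; /* 65536 sectors = 32 MiB  */

    uint64_t offset = 100ULL * 1024 * 1024;                /* guest byte offset       */
    uint64_t sector = offset >> 9;                         /* 204800                  */

    assert(sector / l1_entry_sectors == 3);                /* l1_index                */
    assert((sector / granularity) % gtes_per_gt == 64);    /* l2_index                */
}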
@@ -917,8 +772,8 @@ static VmdkExtent *find_extent(BDRVVmdkState *s,
return NULL;
}
static int coroutine_fn vmdk_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
static int vmdk_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVVmdkState *s = bs->opaque;
int64_t index_in_cluster, n, ret;
@@ -929,10 +784,8 @@ static int coroutine_fn vmdk_co_is_allocated(BlockDriverState *bs,
if (!extent) {
return 0;
}
qemu_co_mutex_lock(&s->lock);
ret = get_cluster_offset(bs, extent, NULL,
sector_num * 512, 0, &offset);
qemu_co_mutex_unlock(&s->lock);
/* get_cluster_offset returning 0 means success */
ret = !ret;
@@ -945,113 +798,6 @@ static int coroutine_fn vmdk_co_is_allocated(BlockDriverState *bs,
return ret;
}
static int vmdk_write_extent(VmdkExtent *extent, int64_t cluster_offset,
int64_t offset_in_cluster, const uint8_t *buf,
int nb_sectors, int64_t sector_num)
{
int ret;
VmdkGrainMarker *data = NULL;
uLongf buf_len;
const uint8_t *write_buf = buf;
int write_len = nb_sectors * 512;
if (extent->compressed) {
if (!extent->has_marker) {
ret = -EINVAL;
goto out;
}
buf_len = (extent->cluster_sectors << 9) * 2;
data = g_malloc(buf_len + sizeof(VmdkGrainMarker));
if (compress(data->data, &buf_len, buf, nb_sectors << 9) != Z_OK ||
buf_len == 0) {
ret = -EINVAL;
goto out;
}
data->lba = sector_num;
data->size = buf_len;
write_buf = (uint8_t *)data;
write_len = buf_len + sizeof(VmdkGrainMarker);
}
ret = bdrv_pwrite(extent->file,
cluster_offset + offset_in_cluster,
write_buf,
write_len);
if (ret != write_len) {
ret = ret < 0 ? ret : -EIO;
goto out;
}
ret = 0;
out:
g_free(data);
return ret;
}
static int vmdk_read_extent(VmdkExtent *extent, int64_t cluster_offset,
int64_t offset_in_cluster, uint8_t *buf,
int nb_sectors)
{
int ret;
int cluster_bytes, buf_bytes;
uint8_t *cluster_buf, *compressed_data;
uint8_t *uncomp_buf;
uint32_t data_len;
VmdkGrainMarker *marker;
uLongf buf_len;
if (!extent->compressed) {
ret = bdrv_pread(extent->file,
cluster_offset + offset_in_cluster,
buf, nb_sectors * 512);
if (ret == nb_sectors * 512) {
return 0;
} else {
return -EIO;
}
}
cluster_bytes = extent->cluster_sectors * 512;
/* Read two clusters in case GrainMarker + compressed data > one cluster */
buf_bytes = cluster_bytes * 2;
cluster_buf = g_malloc(buf_bytes);
uncomp_buf = g_malloc(cluster_bytes);
ret = bdrv_pread(extent->file,
cluster_offset,
cluster_buf, buf_bytes);
if (ret < 0) {
goto out;
}
compressed_data = cluster_buf;
buf_len = cluster_bytes;
data_len = cluster_bytes;
if (extent->has_marker) {
marker = (VmdkGrainMarker *)cluster_buf;
compressed_data = marker->data;
data_len = le32_to_cpu(marker->size);
}
if (!data_len || data_len > buf_bytes) {
ret = -EINVAL;
goto out;
}
ret = uncompress(uncomp_buf, &buf_len, compressed_data, data_len);
if (ret != Z_OK) {
ret = -EINVAL;
goto out;
}
if (offset_in_cluster < 0 ||
offset_in_cluster + nb_sectors * 512 > buf_len) {
ret = -EINVAL;
goto out;
}
memcpy(buf, uncomp_buf + offset_in_cluster, nb_sectors * 512);
ret = 0;
out:
g_free(uncomp_buf);
g_free(cluster_buf);
return ret;
}
static int vmdk_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
@@ -1088,10 +834,10 @@ static int vmdk_read(BlockDriverState *bs, int64_t sector_num,
memset(buf, 0, 512 * n);
}
} else {
ret = vmdk_read_extent(extent,
cluster_offset, index_in_cluster * 512,
buf, n);
if (ret) {
ret = bdrv_pread(extent->file,
cluster_offset + index_in_cluster * 512,
buf, n * 512);
if (ret < 0) {
return ret;
}
}
@@ -1102,17 +848,6 @@ static int vmdk_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int vmdk_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVmdkState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vmdk_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
@@ -1140,25 +875,8 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
bs,
extent,
&m_data,
sector_num << 9, !extent->compressed,
sector_num << 9, 1,
&cluster_offset);
if (extent->compressed) {
if (ret == 0) {
/* Refuse write to allocated cluster for streamOptimized */
fprintf(stderr,
"VMDK: can't write to allocated cluster"
" for streamOptimized\n");
return -EIO;
} else {
/* allocate */
ret = get_cluster_offset(
bs,
extent,
&m_data,
sector_num << 9, 1,
&cluster_offset);
}
}
if (ret) {
return -EINVAL;
}
@@ -1168,10 +886,11 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
n = nb_sectors;
}
ret = vmdk_write_extent(extent,
cluster_offset, index_in_cluster * 512,
buf, n, sector_num);
if (ret) {
ret = bdrv_pwrite(extent->file,
cluster_offset + index_in_cluster * 512,
buf,
n * 512);
if (ret < 0) {
return ret;
}
if (m_data.valid) {
@@ -1187,30 +906,15 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
/* update CID on the first write every time the virtual disk is
* opened */
if (!s->cid_updated) {
ret = vmdk_write_cid(bs, time(NULL));
if (ret < 0) {
return ret;
}
vmdk_write_cid(bs, time(NULL));
s->cid_updated = true;
}
}
return 0;
}
static coroutine_fn int vmdk_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVmdkState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vmdk_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int vmdk_create_extent(const char *filename, int64_t filesize,
bool flat, bool compress)
static int vmdk_create_extent(const char *filename, int64_t filesize, bool flat)
{
int ret, i;
int fd = 0;
@@ -1234,9 +938,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
magic = cpu_to_be32(VMDK4_MAGIC);
memset(&header, 0, sizeof(header));
header.version = 1;
header.flags =
3 | (compress ? VMDK4_FLAG_COMPRESS | VMDK4_FLAG_MARKER : 0);
header.compressAlgorithm = compress ? VMDK4_COMPRESSION_DEFLATE : 0;
header.flags = 3; /* ?? */
header.capacity = filesize / 512;
header.granularity = 128;
header.num_gtes_per_gte = 512;
@@ -1266,7 +968,6 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
header.rgd_offset = cpu_to_le64(header.rgd_offset);
header.gd_offset = cpu_to_le64(header.gd_offset);
header.grain_offset = cpu_to_le64(header.grain_offset);
header.compressAlgorithm = cpu_to_le16(header.compressAlgorithm);
header.check_bytes[0] = 0xa;
header.check_bytes[1] = 0x20;
@@ -1408,7 +1109,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
const char *fmt = NULL;
int flags = 0;
int ret = 0;
bool flat, split, compress;
bool flat, split;
char ext_desc_lines[BUF_SIZE] = "";
char path[PATH_MAX], prefix[PATH_MAX], postfix[PATH_MAX];
const int64_t split_size = 0x80000000; /* VMDK has constant split size */
@@ -1457,8 +1158,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
} else if (strcmp(fmt, "monolithicFlat") &&
strcmp(fmt, "monolithicSparse") &&
strcmp(fmt, "twoGbMaxExtentSparse") &&
strcmp(fmt, "twoGbMaxExtentFlat") &&
strcmp(fmt, "streamOptimized")) {
strcmp(fmt, "twoGbMaxExtentFlat")) {
fprintf(stderr, "VMDK: Unknown subformat: %s\n", fmt);
return -EINVAL;
}
@@ -1466,7 +1166,6 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
strcmp(fmt, "twoGbMaxExtentSparse"));
flat = !(strcmp(fmt, "monolithicFlat") &&
strcmp(fmt, "twoGbMaxExtentFlat"));
compress = !strcmp(fmt, "streamOptimized");
if (flat) {
desc_extent_line = "RW %lld FLAT \"%s\" 0\n";
} else {
@@ -1488,6 +1187,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
bdrv_delete(bs);
return -EINVAL;
}
filesize = bdrv_getlength(bs);
parent_cid = vmdk_read_cid(bs, 0);
bdrv_delete(bs);
relative_path(parent_filename, sizeof(parent_filename),
@@ -1520,7 +1220,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
snprintf(ext_filename, sizeof(ext_filename), "%s%s",
path, desc_filename);
if (vmdk_create_extent(ext_filename, size, flat, compress)) {
if (vmdk_create_extent(ext_filename, size, flat)) {
return -EINVAL;
}
filesize -= size;
@@ -1571,22 +1271,17 @@ exit:
static void vmdk_close(BlockDriverState *bs)
{
BDRVVmdkState *s = bs->opaque;
vmdk_free_extents(bs);
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
static coroutine_fn int vmdk_co_flush(BlockDriverState *bs)
static int vmdk_flush(BlockDriverState *bs)
{
int i, ret, err;
BDRVVmdkState *s = bs->opaque;
int i, err;
int ret = 0;
ret = bdrv_flush(bs->file);
for (i = 0; i < s->num_extents; i++) {
err = bdrv_co_flush(s->extents[i].file);
err = bdrv_flush(s->extents[i].file);
if (err < 0) {
ret = err;
}
@@ -1639,7 +1334,7 @@ static QEMUOptionParameter vmdk_create_options[] = {
.type = OPT_STRING,
.help =
"VMDK flat extent format, can be one of "
"{monolithicSparse (default) | monolithicFlat | twoGbMaxExtentSparse | twoGbMaxExtentFlat | streamOptimized} "
"{monolithicSparse (default) | monolithicFlat | twoGbMaxExtentSparse | twoGbMaxExtentFlat} "
},
{ NULL }
};
@@ -1649,12 +1344,12 @@ static BlockDriver bdrv_vmdk = {
.instance_size = sizeof(BDRVVmdkState),
.bdrv_probe = vmdk_probe,
.bdrv_open = vmdk_open,
.bdrv_read = vmdk_co_read,
.bdrv_write = vmdk_co_write,
.bdrv_read = vmdk_read,
.bdrv_write = vmdk_write,
.bdrv_close = vmdk_close,
.bdrv_create = vmdk_create,
.bdrv_co_flush_to_disk = vmdk_co_flush,
.bdrv_co_is_allocated = vmdk_co_is_allocated,
.bdrv_flush = vmdk_flush,
.bdrv_is_allocated = vmdk_is_allocated,
.bdrv_get_allocated_file_size = vmdk_get_allocated_file_size,
.create_options = vmdk_create_options,


@@ -25,7 +25,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
/**************************************************************/
@@ -111,7 +110,6 @@ struct vhd_dyndisk_header {
};
typedef struct BDRVVPCState {
CoMutex lock;
uint8_t footer_buf[HEADER_SIZE];
uint64_t free_data_block_offset;
int max_table_entries;
@@ -129,8 +127,6 @@ typedef struct BDRVVPCState {
uint64_t last_bitmap;
#endif
Error *migration_blocker;
} BDRVVPCState;
static uint32_t vpc_checksum(uint8_t* buf, size_t size)
@@ -160,28 +156,13 @@ static int vpc_open(BlockDriverState *bs, int flags)
struct vhd_dyndisk_header* dyndisk_header;
uint8_t buf[HEADER_SIZE];
uint32_t checksum;
int err = -1;
int disk_type = VHD_DYNAMIC;
if (bdrv_pread(bs->file, 0, s->footer_buf, HEADER_SIZE) != HEADER_SIZE)
goto fail;
footer = (struct vhd_footer*) s->footer_buf;
if (strncmp(footer->creator, "conectix", 8)) {
int64_t offset = bdrv_getlength(bs->file);
if (offset < HEADER_SIZE) {
goto fail;
}
/* If a fixed disk, the footer is found only at the end of the file */
if (bdrv_pread(bs->file, offset-HEADER_SIZE, s->footer_buf, HEADER_SIZE)
!= HEADER_SIZE) {
goto fail;
}
if (strncmp(footer->creator, "conectix", 8)) {
goto fail;
}
disk_type = VHD_FIXED;
}
if (strncmp(footer->creator, "conectix", 8))
goto fail;
checksum = be32_to_cpu(footer->checksum);
footer->checksum = 0;
@@ -189,80 +170,59 @@ static int vpc_open(BlockDriverState *bs, int flags)
fprintf(stderr, "block-vpc: The header checksum of '%s' is "
"incorrect.\n", bs->filename);
/* Write 'checksum' back to footer, or else will leave it with zero. */
footer->checksum = be32_to_cpu(checksum);
// The visible size of an image in Virtual PC depends on the geometry
// rather than on the size stored in the footer (the size in the footer
// is too large usually)
bs->total_sectors = (int64_t)
be16_to_cpu(footer->cyls) * footer->heads * footer->secs_per_cyl;
if (bs->total_sectors >= 65535 * 16 * 255) {
err = -EFBIG;
if (bdrv_pread(bs->file, be64_to_cpu(footer->data_offset), buf, HEADER_SIZE)
!= HEADER_SIZE)
goto fail;
dyndisk_header = (struct vhd_dyndisk_header*) buf;
if (strncmp(dyndisk_header->magic, "cxsparse", 8))
goto fail;
s->block_size = be32_to_cpu(dyndisk_header->block_size);
s->bitmap_size = ((s->block_size / (8 * 512)) + 511) & ~511;
s->max_table_entries = be32_to_cpu(dyndisk_header->max_table_entries);
s->pagetable = qemu_malloc(s->max_table_entries * 4);
s->bat_offset = be64_to_cpu(dyndisk_header->table_offset);
if (bdrv_pread(bs->file, s->bat_offset, s->pagetable,
s->max_table_entries * 4) != s->max_table_entries * 4)
goto fail;
s->free_data_block_offset =
(s->bat_offset + (s->max_table_entries * 4) + 511) & ~511;
for (i = 0; i < s->max_table_entries; i++) {
be32_to_cpus(&s->pagetable[i]);
if (s->pagetable[i] != 0xFFFFFFFF) {
int64_t next = (512 * (int64_t) s->pagetable[i]) +
s->bitmap_size + s->block_size;
if (next> s->free_data_block_offset)
s->free_data_block_offset = next;
}
}
if (disk_type == VHD_DYNAMIC) {
if (bdrv_pread(bs->file, be64_to_cpu(footer->data_offset), buf,
HEADER_SIZE) != HEADER_SIZE) {
goto fail;
}
dyndisk_header = (struct vhd_dyndisk_header *) buf;
if (strncmp(dyndisk_header->magic, "cxsparse", 8)) {
goto fail;
}
s->block_size = be32_to_cpu(dyndisk_header->block_size);
s->bitmap_size = ((s->block_size / (8 * 512)) + 511) & ~511;
s->max_table_entries = be32_to_cpu(dyndisk_header->max_table_entries);
s->pagetable = g_malloc(s->max_table_entries * 4);
s->bat_offset = be64_to_cpu(dyndisk_header->table_offset);
if (bdrv_pread(bs->file, s->bat_offset, s->pagetable,
s->max_table_entries * 4) != s->max_table_entries * 4) {
goto fail;
}
s->free_data_block_offset =
(s->bat_offset + (s->max_table_entries * 4) + 511) & ~511;
for (i = 0; i < s->max_table_entries; i++) {
be32_to_cpus(&s->pagetable[i]);
if (s->pagetable[i] != 0xFFFFFFFF) {
int64_t next = (512 * (int64_t) s->pagetable[i]) +
s->bitmap_size + s->block_size;
if (next > s->free_data_block_offset) {
s->free_data_block_offset = next;
}
}
}
s->last_bitmap_offset = (int64_t) -1;
s->last_bitmap_offset = (int64_t) -1;
#ifdef CACHE
s->pageentry_u8 = g_malloc(512);
s->pageentry_u32 = s->pageentry_u8;
s->pageentry_u16 = s->pageentry_u8;
s->last_pagetable = -1;
s->pageentry_u8 = qemu_malloc(512);
s->pageentry_u32 = s->pageentry_u8;
s->pageentry_u16 = s->pageentry_u8;
s->last_pagetable = -1;
#endif
}
qemu_co_mutex_init(&s->lock);
/* Disable migration when VHD images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vpc", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail:
return err;
return -1;
}
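The checksum check and size computation above can be reproduced outside the driver. The sketch below is based on two assumptions, since neither the body of vpc_checksum() nor the struct vhd_footer layout appears in this hunk: the checksum is the one's complement of the byte sum of the 512-byte footer with its 4-byte checksum field (at offset 64) treated as zero, and the visible size is cyls * heads * secs_per_cyl sectors, accepted only below 65535 * 16 * 255 sectors (about 127 GiB).

/* Standalone sketch (not the driver code): recompute a VHD footer checksum
 * and derive the visible size from the CHS geometry, as vpc_open() does.
 * Assumptions: checksum = ~(byte sum of the footer with the checksum field
 * zeroed); the checksum field sits at offset 64 of the 512-byte footer. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FOOTER_SIZE      512
#define CHECKSUM_OFFSET  64

static uint32_t footer_checksum(const uint8_t *footer)
{
    uint32_t sum = 0;
    for (int i = 0; i < FOOTER_SIZE; i++) {
        if (i >= CHECKSUM_OFFSET && i < CHECKSUM_OFFSET + 4) {
            continue;                  /* checksum bytes count as zero */
        }
        sum += footer[i];
    }
    return ~sum;
}

int main(void)
{
    /* The largest geometry vpc_open() would accept is just below this. */
    uint16_t cyls = 65535;
    uint8_t heads = 16, secs_per_cyl = 255;
    int64_t total_sectors = (int64_t)cyls * heads * secs_per_cyl;

    printf("CHS size: %lld sectors (%.1f GiB); >= this is rejected with -EFBIG\n",
           (long long)total_sectors,
           total_sectors * 512.0 / (1024.0 * 1024.0 * 1024.0));

    uint8_t footer[FOOTER_SIZE] = { 0 };
    memcpy(footer, "conectix", 8);
    printf("checksum over this footer: 0x%08x\n", (unsigned)footer_checksum(footer));
    return 0;
}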
/*
@@ -384,11 +344,8 @@ static int64_t alloc_block(BlockDriverState* bs, int64_t sector_num)
// Initialize the block's bitmap
memset(bitmap, 0xff, s->bitmap_size);
ret = bdrv_pwrite_sync(bs->file, s->free_data_block_offset, bitmap,
bdrv_pwrite_sync(bs->file, s->free_data_block_offset, bitmap,
s->bitmap_size);
if (ret < 0) {
return ret;
}
// Write new footer (the old one will be overwritten)
s->free_data_block_offset += s->block_size + s->bitmap_size;
@@ -417,11 +374,7 @@ static int vpc_read(BlockDriverState *bs, int64_t sector_num,
int ret;
int64_t offset;
int64_t sectors, sectors_per_block;
struct vhd_footer *footer = (struct vhd_footer *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_read(bs->file, sector_num, buf, nb_sectors);
}
while (nb_sectors > 0) {
offset = get_sector_offset(bs, sector_num, 0);
@@ -448,17 +401,6 @@ static int vpc_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int vpc_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVPCState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vpc_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int vpc_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
@@ -466,11 +408,7 @@ static int vpc_write(BlockDriverState *bs, int64_t sector_num,
int64_t offset;
int64_t sectors, sectors_per_block;
int ret;
struct vhd_footer *footer = (struct vhd_footer *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_write(bs->file, sector_num, buf, nb_sectors);
}
while (nb_sectors > 0) {
offset = get_sector_offset(bs, sector_num, 1);
@@ -499,15 +437,9 @@ static int vpc_write(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int vpc_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
static int vpc_flush(BlockDriverState *bs)
{
int ret;
BDRVVPCState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vpc_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
return bdrv_flush(bs->file);
}
/*
@@ -558,14 +490,70 @@ static int calculate_geometry(int64_t total_sectors, uint16_t* cyls,
return 0;
}
static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
static int vpc_create(const char *filename, QEMUOptionParameter *options)
{
uint8_t buf[1024];
struct vhd_footer* footer = (struct vhd_footer*) buf;
struct vhd_dyndisk_header* dyndisk_header =
(struct vhd_dyndisk_header*) buf;
int fd, i;
uint16_t cyls = 0;
uint8_t heads = 0;
uint8_t secs_per_cyl = 0;
size_t block_size, num_bat_entries;
int i;
int64_t total_sectors = 0;
int ret = -EIO;
// Read out options
total_sectors = get_option_parameter(options, BLOCK_OPT_SIZE)->value.n /
BDRV_SECTOR_SIZE;
// Create the file
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0)
return -EIO;
/* Calculate matching total_size and geometry. Increase the number of
sectors requested until we get enough (or fail). */
for (i = 0; total_sectors > (int64_t)cyls * heads * secs_per_cyl; i++) {
if (calculate_geometry(total_sectors + i,
&cyls, &heads, &secs_per_cyl)) {
ret = -EFBIG;
goto fail;
}
}
total_sectors = (int64_t) cyls * heads * secs_per_cyl;
// Prepare the Hard Disk Footer
memset(buf, 0, 1024);
memcpy(footer->creator, "conectix", 8);
// TODO Check if "qemu" creator_app is ok for VPC
memcpy(footer->creator_app, "qemu", 4);
memcpy(footer->creator_os, "Wi2k", 4);
footer->features = be32_to_cpu(0x02);
footer->version = be32_to_cpu(0x00010000);
footer->data_offset = be64_to_cpu(HEADER_SIZE);
footer->timestamp = be32_to_cpu(time(NULL) - VHD_TIMESTAMP_BASE);
// Version of Virtual PC 2007
footer->major = be16_to_cpu(0x0005);
footer->minor = be16_to_cpu(0x0003);
footer->orig_size = be64_to_cpu(total_sectors * 512);
footer->size = be64_to_cpu(total_sectors * 512);
footer->cyls = be16_to_cpu(cyls);
footer->heads = heads;
footer->secs_per_cyl = secs_per_cyl;
footer->type = be32_to_cpu(VHD_DYNAMIC);
// TODO uuid is missing
footer->checksum = be32_to_cpu(vpc_checksum(buf, HEADER_SIZE));
// Write the footer (twice: at the beginning and at the end)
block_size = 0x200000;
num_bat_entries = (total_sectors + block_size / 512) / (block_size / 512);
@@ -593,16 +581,13 @@ static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
}
}
// Prepare the Dynamic Disk Header
memset(buf, 0, 1024);
memcpy(dyndisk_header->magic, "cxsparse", 8);
/*
* Note: The spec is actually wrong here for data_offset, it says
* 0xFFFFFFFF, but MS tools expect all 64 bits to be set.
*/
dyndisk_header->data_offset = be64_to_cpu(0xFFFFFFFFFFFFFFFFULL);
dyndisk_header->data_offset = be64_to_cpu(0xFFFFFFFF);
dyndisk_header->table_offset = be64_to_cpu(3 * 512);
dyndisk_header->version = be32_to_cpu(0x00010000);
dyndisk_header->block_size = be32_to_cpu(block_size);
@@ -620,129 +605,6 @@ static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
}
ret = 0;
fail:
return ret;
}
static int create_fixed_disk(int fd, uint8_t *buf, int64_t total_size)
{
int ret = -EIO;
/* Add footer to total size */
total_size += 512;
if (ftruncate(fd, total_size) != 0) {
ret = -errno;
goto fail;
}
if (lseek(fd, -512, SEEK_END) < 0) {
goto fail;
}
if (write(fd, buf, HEADER_SIZE) != HEADER_SIZE) {
goto fail;
}
ret = 0;
fail:
return ret;
}
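create_fixed_disk() makes the on-disk layout of a fixed VHD explicit: the raw guest data followed by a single 512-byte footer, which is also why vpc_open() looks for the "conectix" cookie at end-of-file minus 512 when it is missing at offset 0. A minimal size check as a standalone sketch:

/* Standalone sketch: a fixed VHD is the guest data plus one trailing
 * 512-byte footer (cf. create_fixed_disk(): ftruncate(total_size + 512),
 * then the footer is written at SEEK_END - 512). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t guest_bytes = 100 * 1024 * 1024;   /* e.g. a 100 MiB image */
    int64_t file_bytes  = guest_bytes + 512;   /* plus the footer */

    printf("fixed VHD file size: %lld bytes\n", (long long)file_bytes);
    printf("footer lives at offset %lld\n", (long long)(file_bytes - 512));
    return 0;
}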
static int vpc_create(const char *filename, QEMUOptionParameter *options)
{
uint8_t buf[1024];
struct vhd_footer *footer = (struct vhd_footer *) buf;
QEMUOptionParameter *disk_type_param;
int fd, i;
uint16_t cyls = 0;
uint8_t heads = 0;
uint8_t secs_per_cyl = 0;
int64_t total_sectors;
int64_t total_size;
int disk_type;
int ret = -EIO;
/* Read out options */
total_size = get_option_parameter(options, BLOCK_OPT_SIZE)->value.n;
disk_type_param = get_option_parameter(options, BLOCK_OPT_SUBFMT);
if (disk_type_param && disk_type_param->value.s) {
if (!strcmp(disk_type_param->value.s, "dynamic")) {
disk_type = VHD_DYNAMIC;
} else if (!strcmp(disk_type_param->value.s, "fixed")) {
disk_type = VHD_FIXED;
} else {
return -EINVAL;
}
} else {
disk_type = VHD_DYNAMIC;
}
/* Create the file */
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0) {
return -EIO;
}
/*
* Calculate matching total_size and geometry. Increase the number of
* sectors requested until we get enough (or fail). This ensures that
* qemu-img convert doesn't truncate images, but rather rounds up.
*/
total_sectors = total_size / BDRV_SECTOR_SIZE;
for (i = 0; total_sectors > (int64_t)cyls * heads * secs_per_cyl; i++) {
if (calculate_geometry(total_sectors + i, &cyls, &heads,
&secs_per_cyl))
{
ret = -EFBIG;
goto fail;
}
}
total_sectors = (int64_t) cyls * heads * secs_per_cyl;
/* Prepare the Hard Disk Footer */
memset(buf, 0, 1024);
memcpy(footer->creator, "conectix", 8);
/* TODO Check if "qemu" creator_app is ok for VPC */
memcpy(footer->creator_app, "qemu", 4);
memcpy(footer->creator_os, "Wi2k", 4);
footer->features = be32_to_cpu(0x02);
footer->version = be32_to_cpu(0x00010000);
if (disk_type == VHD_DYNAMIC) {
footer->data_offset = be64_to_cpu(HEADER_SIZE);
} else {
footer->data_offset = be64_to_cpu(0xFFFFFFFFFFFFFFFFULL);
}
footer->timestamp = be32_to_cpu(time(NULL) - VHD_TIMESTAMP_BASE);
/* Version of Virtual PC 2007 */
footer->major = be16_to_cpu(0x0005);
footer->minor = be16_to_cpu(0x0003);
if (disk_type == VHD_DYNAMIC) {
footer->orig_size = be64_to_cpu(total_sectors * 512);
footer->size = be64_to_cpu(total_sectors * 512);
} else {
footer->orig_size = be64_to_cpu(total_size);
footer->size = be64_to_cpu(total_size);
}
footer->cyls = be16_to_cpu(cyls);
footer->heads = heads;
footer->secs_per_cyl = secs_per_cyl;
footer->type = be32_to_cpu(disk_type);
/* TODO uuid is missing */
footer->checksum = be32_to_cpu(vpc_checksum(buf, HEADER_SIZE));
if (disk_type == VHD_DYNAMIC) {
ret = create_dynamic_disk(fd, buf, total_sectors);
} else {
ret = create_fixed_disk(fd, buf, total_size);
}
fail:
close(fd);
return ret;
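The sizing of a dynamic image in vpc_create() can also be reproduced in isolation: the block size is fixed at 0x200000 (2 MiB), each data block needs one 4-byte BAT entry, and num_bat_entries follows the same expression used above. A standalone sketch for a 10 GiB request:

/* Standalone sketch: BAT sizing for a dynamic VHD as done in vpc_create().
 * block_size is fixed at 2 MiB and every block gets one 4-byte BAT entry;
 * num_bat_entries uses the same expression as the driver code above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t total_sectors = (int64_t)10 * 1024 * 1024 * 1024 / 512; /* 10 GiB */
    size_t block_size = 0x200000;                                   /* 2 MiB  */
    size_t sectors_per_block = block_size / 512;                    /* 4096   */

    size_t num_bat_entries =
        (total_sectors + sectors_per_block) / sectors_per_block;

    printf("blocks needed : %lld\n",
           (long long)((total_sectors + sectors_per_block - 1) / sectors_per_block));
    printf("BAT entries   : %zu (%zu bytes on disk)\n",
           num_bat_entries, num_bat_entries * 4);
    return 0;
}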
@@ -751,13 +613,10 @@ static int vpc_create(const char *filename, QEMUOptionParameter *options)
static void vpc_close(BlockDriverState *bs)
{
BDRVVPCState *s = bs->opaque;
g_free(s->pagetable);
qemu_free(s->pagetable);
#ifdef CACHE
g_free(s->pageentry_u8);
qemu_free(s->pageentry_u8);
#endif
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
static QEMUOptionParameter vpc_create_options[] = {
@@ -766,28 +625,20 @@ static QEMUOptionParameter vpc_create_options[] = {
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_SUBFMT,
.type = OPT_STRING,
.help =
"Type of virtual hard disk format. Supported formats are "
"{dynamic (default) | fixed} "
},
{ NULL }
};
static BlockDriver bdrv_vpc = {
.format_name = "vpc",
.instance_size = sizeof(BDRVVPCState),
.bdrv_probe = vpc_probe,
.bdrv_open = vpc_open,
.bdrv_read = vpc_read,
.bdrv_write = vpc_write,
.bdrv_flush = vpc_flush,
.bdrv_close = vpc_close,
.bdrv_create = vpc_create,
.bdrv_read = vpc_co_read,
.bdrv_write = vpc_co_write,
.create_options = vpc_create_options,
};

View File

@@ -27,7 +27,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
#ifndef S_IWGRP
#define S_IWGRP 0
@@ -87,7 +86,8 @@ static inline void array_init(array_t* array,unsigned int item_size)
static inline void array_free(array_t* array)
{
g_free(array->pointer);
if(array->pointer)
free(array->pointer);
array->size=array->next=0;
}
@@ -101,7 +101,7 @@ static inline int array_ensure_allocated(array_t* array, int index)
{
if((index + 1) * array->item_size > array->size) {
int new_size = (index + 32) * array->item_size;
array->pointer = g_realloc(array->pointer, new_size);
array->pointer = qemu_realloc(array->pointer, new_size);
if (!array->pointer)
return -1;
array->size = new_size;
@@ -127,7 +127,7 @@ static inline void* array_get_next(array_t* array) {
static inline void* array_insert(array_t* array,unsigned int index,unsigned int count) {
if((array->next+count)*array->item_size>array->size) {
int increment=count*array->item_size;
array->pointer=g_realloc(array->pointer,array->size+increment);
array->pointer=qemu_realloc(array->pointer,array->size+increment);
if(!array->pointer)
return NULL;
array->size+=increment;
@@ -159,7 +159,7 @@ static inline int array_roll(array_t* array,int index_to,int index_from,int coun
is=array->item_size;
from=array->pointer+index_from*is;
to=array->pointer+index_to*is;
buf=g_malloc(is*count);
buf=qemu_malloc(is*count);
memcpy(buf,from,is*count);
if(index_to<index_from)
@@ -169,7 +169,7 @@ static inline int array_roll(array_t* array,int index_to,int index_from,int coun
memcpy(to,buf,is*count);
g_free(buf);
free(buf);
return 0;
}
@@ -200,7 +200,7 @@ static int array_index(array_t* array, void* pointer)
}
/* These structures are used to fake a disk and the VFAT filesystem.
* For this reason we need to use QEMU_PACKED. */
* For this reason we need to use __attribute__((packed)). */
typedef struct bootsector_t {
uint8_t jump[3];
@@ -224,7 +224,7 @@ typedef struct bootsector_t {
uint8_t signature;
uint32_t id;
uint8_t volume_label[11];
} QEMU_PACKED fat16;
} __attribute__((packed)) fat16;
struct {
uint32_t sectors_per_fat;
uint16_t flags;
@@ -233,12 +233,12 @@ typedef struct bootsector_t {
uint16_t info_sector;
uint16_t backup_boot_sector;
uint16_t ignored;
} QEMU_PACKED fat32;
} __attribute__((packed)) fat32;
} u;
uint8_t fat_type[8];
uint8_t ignored[0x1c0];
uint8_t magic[2];
} QEMU_PACKED bootsector_t;
} __attribute__((packed)) bootsector_t;
typedef struct {
uint8_t head;
@@ -253,7 +253,7 @@ typedef struct partition_t {
mbr_chs_t end_CHS;
uint32_t start_sector_long;
uint32_t length_sector_long;
} QEMU_PACKED partition_t;
} __attribute__((packed)) partition_t;
typedef struct mbr_t {
uint8_t ignored[0x1b8];
@@ -261,7 +261,7 @@ typedef struct mbr_t {
uint8_t ignored2[2];
partition_t partition[4];
uint8_t magic[2];
} QEMU_PACKED mbr_t;
} __attribute__((packed)) mbr_t;
typedef struct direntry_t {
uint8_t name[8];
@@ -276,7 +276,7 @@ typedef struct direntry_t {
uint16_t mdate;
uint16_t begin;
uint32_t size;
} QEMU_PACKED direntry_t;
} __attribute__((packed)) direntry_t;
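All of these on-disk structures depend on packing (QEMU_PACKED in the newer code, a bare __attribute__((packed)) in 0.15) so that their in-memory layout matches the byte layout of the MBR and FAT data on the virtual disk. The illustration below uses a made-up struct, not one of the structures above:

/* Standalone sketch: why the on-disk structs above must be packed.
 * The struct here is an illustration only, not one of the FAT structures. */
#include <stdint.h>
#include <stdio.h>

struct unpacked_example {
    uint8_t  tag;
    uint32_t value;   /* the compiler may insert 3 bytes of padding before this */
};

struct packed_example {
    uint8_t  tag;
    uint32_t value;
} __attribute__((packed));

int main(void)
{
    printf("unpacked: %zu bytes, packed: %zu bytes\n",
           sizeof(struct unpacked_example), sizeof(struct packed_example));
    /* Typically prints "unpacked: 8 bytes, packed: 5 bytes" on common ABIs;
     * only the packed layout matches an on-disk format byte for byte. */
    return 0;
}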
/* this structure is used to transparently access the files */
@@ -318,7 +318,6 @@ static void print_mapping(const struct mapping_t* mapping);
/* here begins the real VVFAT driver */
typedef struct BDRVVVFATState {
CoMutex lock;
BlockDriverState* bs; /* pointer to parent */
unsigned int first_sectors_number; /* 1 for a single partition, 0x40 for a disk with partition table */
unsigned char first_sectors[0x40*0x200];
@@ -351,8 +350,6 @@ typedef struct BDRVVVFATState {
array_t commits;
const char* path;
int downcase_short_names;
Error *migration_blocker;
} BDRVVVFATState;
/* take the sector position spos and convert it to Cylinder/Head/Sector position
@@ -731,11 +728,11 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
if(first_cluster == 0 && (is_dotdot || is_dot))
continue;
buffer=(char*)g_malloc(length);
buffer=(char*)qemu_malloc(length);
snprintf(buffer,length,"%s/%s",dirname,entry->d_name);
if(stat(buffer,&st)<0) {
g_free(buffer);
free(buffer);
continue;
}
@@ -758,7 +755,7 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
direntry->begin=0; /* do that later */
if (st.st_size > 0x7fffffff) {
fprintf(stderr, "File %s is larger than 2GB\n", buffer);
g_free(buffer);
free(buffer);
closedir(dir);
return -2;
}
@@ -802,7 +799,6 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
/* root directory */
int cur = s->directory.next;
array_ensure_allocated(&(s->directory), ROOT_ENTRIES - 1);
s->directory.next = ROOT_ENTRIES;
memset(array_get(&(s->directory), cur), 0,
(ROOT_ENTRIES - cur) * sizeof(direntry_t));
}
@@ -829,6 +825,20 @@ static inline off_t cluster2sector(BDRVVVFATState* s, uint32_t cluster_num)
return s->faked_sectors + s->sectors_per_cluster * cluster_num;
}
static inline uint32_t sector_offset_in_cluster(BDRVVVFATState* s,off_t sector_num)
{
return (sector_num-s->first_sectors_number-2*s->sectors_per_fat)%s->sectors_per_cluster;
}
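A worked example of the cluster-to-sector mapping above: the data area starts right after the faked sectors (boot sector/MBR, the FATs and the root directory), and each cluster is sectors_per_cluster sectors long. The values below are illustrative, not taken from a real image:

/* Standalone sketch of cluster2sector(): sector = faked_sectors +
 * sectors_per_cluster * cluster_num. Example values only. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t  faked_sectors       = 2129;  /* illustrative */
    uint32_t sectors_per_cluster = 0x10;  /* the 504 MB disk case */
    uint32_t cluster_num         = 5;

    int64_t sector = faked_sectors + (int64_t)sectors_per_cluster * cluster_num;
    printf("cluster %u -> sector %lld\n", (unsigned)cluster_num, (long long)sector);
    return 0;
}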
#ifdef DBG
static direntry_t* get_direntry_for_mapping(BDRVVVFATState* s,mapping_t* mapping)
{
if(mapping->mode==MODE_UNDEFINED)
return 0;
return (direntry_t*)(s->directory.pointer+sizeof(direntry_t)*mapping->dir_index);
}
#endif
static int init_directories(BDRVVVFATState* s,
const char* dirname)
{
@@ -840,7 +850,7 @@ static int init_directories(BDRVVVFATState* s,
memset(&(s->first_sectors[0]),0,0x40*0x200);
s->cluster_size=s->sectors_per_cluster*0x200;
s->cluster_buffer=g_malloc(s->cluster_size);
s->cluster_buffer=qemu_malloc(s->cluster_size);
/*
* The formula: sc = spf+1+spf*spc*(512*8/fat_type),
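Solving the relation in that comment for spf (sectors per FAT) gives spf = (sc - 1) / (spc * 512 * 8 / fat_type + 1); how the driver actually rounds is outside this hunk, so the sketch below simply rounds up:

/* Standalone sketch: solve the relation above,
 *   sc = spf + 1 + spf * spc * (512 * 8 / fat_type),
 * for spf (sectors per FAT). Rounding up is an assumption here; the exact
 * rounding used by init_directories() is not shown in this hunk. */
#include <stdio.h>

int main(void)
{
    unsigned sc = 1024 * 16 * 63;  /* sector count of the 504 MB disk */
    unsigned spc = 0x10;           /* sectors per cluster for that disk */
    unsigned fat_type = 16;

    unsigned covered = spc * (512 * 8 / fat_type) + 1; /* sectors accounted for
                                                          by one FAT sector */
    unsigned spf = (sc - 1 + covered - 1) / covered;   /* round up */
    printf("sectors per FAT: %u\n", spf);
    return 0;
}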
@@ -874,7 +884,7 @@ static int init_directories(BDRVVVFATState* s,
mapping->dir_index = 0;
mapping->info.dir.parent_mapping_index = -1;
mapping->first_mapping_index = -1;
mapping->path = g_strdup(dirname);
mapping->path = qemu_strdup(dirname);
i = strlen(mapping->path);
if (i > 0 && mapping->path[i - 1] == '/')
mapping->path[i - 1] = '\0';
@@ -919,8 +929,11 @@ static int init_directories(BDRVVVFATState* s,
cluster = mapping->end;
if(cluster > s->cluster_count) {
fprintf(stderr,"Directory does not fit in FAT%d (capacity %.2f MB)\n",
s->fat_type, s->sector_count / 2000.0);
fprintf(stderr,"Directory does not fit in FAT%d (capacity %s)\n",
s->fat_type,
s->fat_type == 12 ? s->sector_count == 2880 ? "1.44 MB"
: "2.88 MB"
: "504MB");
return -EINVAL;
}
@@ -954,7 +967,7 @@ static int init_directories(BDRVVVFATState* s,
bootsector->number_of_fats=0x2; /* number of FATs */
bootsector->root_entries=cpu_to_le16(s->sectors_of_root_directory*0x10);
bootsector->total_sectors16=s->sector_count>0xffff?0:cpu_to_le16(s->sector_count);
bootsector->media_type=(s->first_sectors_number>1?0xf8:0xf0); /* media descriptor (f8=hd, f0=3.5 fd)*/
bootsector->media_type=(s->fat_type!=12?0xf8:s->sector_count==5760?0xf9:0xf8); /* media descriptor */
s->fat.pointer[0] = bootsector->media_type;
bootsector->sectors_per_fat=cpu_to_le16(s->sectors_per_fat);
bootsector->sectors_per_track=cpu_to_le16(s->bs->secs);
@@ -963,7 +976,7 @@ static int init_directories(BDRVVVFATState* s,
bootsector->total_sectors=cpu_to_le32(s->sector_count>0xffff?s->sector_count:0);
/* LATER TODO: if FAT32, this is wrong */
bootsector->u.fat16.drive_number=s->first_sectors_number==1?0:0x80; /* fda=0, hda=0x80 */
bootsector->u.fat16.drive_number=s->fat_type==12?0:0x80; /* assume this is hda (TODO) */
bootsector->u.fat16.current_head=0;
bootsector->u.fat16.signature=0x29;
bootsector->u.fat16.id=cpu_to_le32(0xfabe1afd);
@@ -982,15 +995,10 @@ static BDRVVVFATState *vvv = NULL;
static int enable_write_target(BDRVVVFATState *s);
static int is_consistent(BDRVVVFATState *s);
static void vvfat_rebind(BlockDriverState *bs)
{
BDRVVVFATState *s = bs->opaque;
s->bs = bs;
}
static int vvfat_open(BlockDriverState *bs, const char* dirname, int flags)
{
BDRVVVFATState *s = bs->opaque;
int floppy = 0;
int i;
#ifdef DEBUG
@@ -1004,8 +1012,11 @@ DLOG(if (stderr == NULL) {
s->bs = bs;
s->fat_type=16;
/* LATER TODO: if FAT32, adjust */
s->sectors_per_cluster=0x10;
/* 504MB disk*/
bs->cyls=1024; bs->heads=16; bs->secs=63;
s->current_cluster=0xffffffff;
@@ -1020,6 +1031,16 @@ DLOG(if (stderr == NULL) {
if (!strstart(dirname, "fat:", NULL))
return -1;
if (strstr(dirname, ":floppy:")) {
floppy = 1;
s->fat_type = 12;
s->first_sectors_number = 1;
s->sectors_per_cluster=2;
bs->cyls = 80; bs->heads = 2; bs->secs = 36;
}
s->sector_count=bs->cyls*bs->heads*bs->secs;
if (strstr(dirname, ":32:")) {
fprintf(stderr, "Big fat greek warning: FAT32 has not been tested. You are welcome to do so!\n");
s->fat_type = 32;
@@ -1027,31 +1048,9 @@ DLOG(if (stderr == NULL) {
s->fat_type = 16;
} else if (strstr(dirname, ":12:")) {
s->fat_type = 12;
s->sector_count=2880;
}
if (strstr(dirname, ":floppy:")) {
/* 1.44MB or 2.88MB floppy. 2.88MB can be FAT12 (default) or FAT16. */
if (!s->fat_type) {
s->fat_type = 12;
bs->secs = 36;
s->sectors_per_cluster=2;
} else {
bs->secs=(s->fat_type == 12 ? 18 : 36);
s->sectors_per_cluster=1;
}
s->first_sectors_number = 1;
bs->cyls=80; bs->heads=2;
} else {
/* 32MB or 504MB disk*/
if (!s->fat_type) {
s->fat_type = 16;
}
bs->cyls=(s->fat_type == 12 ? 64 : 1024);
bs->heads=16; bs->secs=63;
}
s->sector_count=bs->cyls*bs->heads*bs->secs-(s->first_sectors_number-1);
if (strstr(dirname, ":rw:")) {
if (enable_write_target(s))
return -1;
@@ -1075,22 +1074,12 @@ DLOG(if (stderr == NULL) {
if(s->first_sectors_number==0x40)
init_mbr(s);
else {
/* MS-DOS does not like to know about CHS (?). */
/* for some reason or other, MS-DOS does not like to know about CHS... */
if (floppy)
bs->heads = bs->cyls = bs->secs = 0;
}
// assert(is_consistent(s));
qemu_co_mutex_init(&s->lock);
/* Disable migration when vvfat is used rw */
if (s->qcow) {
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vvfat (rw)", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
}
return 0;
}
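The geometries selected above correspond to the capacities printed by the error message in init_directories(): 80 x 2 x 18 sectors of 512 bytes is the "1.44 MB" floppy, 80 x 2 x 36 the "2.88 MB" floppy, and 1024 x 16 x 63 the 504 MiB disk. A quick arithmetic check:

/* Standalone sketch: disk capacities implied by the CHS values chosen in
 * vvfat_open() (floppies vs. the 504 MB hard disk). */
#include <stdio.h>

static double mib(long cyls, long heads, long secs)
{
    return cyls * heads * secs * 512.0 / (1024.0 * 1024.0);
}

int main(void)
{
    printf("80/2/18    -> %.2f MiB (the \"1.44 MB\" floppy)\n", mib(80, 2, 18));
    printf("80/2/36    -> %.2f MiB (the \"2.88 MB\" floppy)\n", mib(80, 2, 36));
    printf("1024/16/63 -> %.0f MiB (the 504 MB disk)\n", mib(1024, 16, 63));
    return 0;
}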
@@ -1149,6 +1138,25 @@ static inline mapping_t* find_mapping_for_cluster(BDRVVVFATState* s,int cluster_
return mapping;
}
/*
* This function simply compares path == mapping->path. Since the mappings
* are sorted by cluster, this is expensive: O(n).
*/
static inline mapping_t* find_mapping_for_path(BDRVVVFATState* s,
const char* path)
{
int i;
for (i = 0; i < s->mapping.next; i++) {
mapping_t* mapping = array_get(&(s->mapping), i);
if (mapping->first_mapping_index < 0 &&
!strcmp(path, mapping->path))
return mapping;
}
return NULL;
}
static int open_file(BDRVVVFATState* s,mapping_t* mapping)
{
if(!mapping)
@@ -1215,6 +1223,23 @@ read_cluster_directory:
}
#ifdef DEBUG
static void hexdump(const void* address, uint32_t len)
{
const unsigned char* p = address;
int i, j;
for (i = 0; i < len; i += 16) {
for (j = 0; j < 16 && i + j < len; j++)
fprintf(stderr, "%02x ", p[i + j]);
for (; j < 16; j++)
fprintf(stderr, " ");
fprintf(stderr, " ");
for (j = 0; j < 16 && i + j < len; j++)
fprintf(stderr, "%c", (p[i + j] < ' ' || p[i + j] > 0x7f) ? '.' : p[i + j]);
fprintf(stderr, "\n");
}
}
static void print_direntry(const direntry_t* direntry)
{
int j = 0;
@@ -1268,19 +1293,19 @@ static int vvfat_read(BlockDriverState *bs, int64_t sector_num,
int i;
for(i=0;i<nb_sectors;i++,sector_num++) {
if (sector_num >= bs->total_sectors)
if (sector_num >= s->sector_count)
return -1;
if (s->qcow) {
int n;
if (bdrv_is_allocated(s->qcow, sector_num, nb_sectors-i, &n)) {
if (s->qcow->drv->bdrv_is_allocated(s->qcow,
sector_num, nb_sectors-i, &n)) {
DLOG(fprintf(stderr, "sectors %d+%d allocated\n", (int)sector_num, n));
if (bdrv_read(s->qcow, sector_num, buf + i*0x200, n)) {
return -1;
}
i += n - 1;
sector_num += n - 1;
continue;
}
if (s->qcow->drv->bdrv_read(s->qcow, sector_num, buf+i*0x200, n))
return -1;
i += n - 1;
sector_num += n - 1;
continue;
}
DLOG(fprintf(stderr, "sector %d not allocated\n", (int)sector_num));
}
if(sector_num<s->faked_sectors) {
@@ -1294,7 +1319,7 @@ DLOG(fprintf(stderr, "sector %d not allocated\n", (int)sector_num));
uint32_t sector=sector_num-s->faked_sectors,
sector_offset_in_cluster=(sector%s->sectors_per_cluster),
cluster_num=sector/s->sectors_per_cluster;
if(cluster_num > s->cluster_count || read_cluster(s, cluster_num) != 0) {
if(read_cluster(s, cluster_num) != 0) {
/* LATER TODO: strict: return -1; */
memset(buf+i*0x200,0,0x200);
continue;
@@ -1305,17 +1330,6 @@ DLOG(fprintf(stderr, "sector %d not allocated\n", (int)sector_num));
return 0;
}
static coroutine_fn int vvfat_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVVFATState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vvfat_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
/* LATER TODO: statify all functions */
/*
@@ -1361,7 +1375,7 @@ DLOG(fprintf(stderr, "clear_commits (%d commits)\n", s->commits.next));
assert(commit->path || commit->action == ACTION_WRITEOUT);
if (commit->action != ACTION_WRITEOUT) {
assert(commit->path);
g_free(commit->path);
free(commit->path);
} else
assert(commit->path == NULL);
}
@@ -1534,7 +1548,7 @@ static inline int cluster_was_modified(BDRVVVFATState* s, uint32_t cluster_num)
return 0;
for (i = 0; !was_modified && i < s->sectors_per_cluster; i++)
was_modified = bdrv_is_allocated(s->qcow,
was_modified = s->qcow->drv->bdrv_is_allocated(s->qcow,
cluster2sector(s, cluster_num) + i, 1, &dummy);
return was_modified;
@@ -1624,10 +1638,10 @@ static uint32_t get_cluster_count_for_direntry(BDRVVVFATState* s,
/* rename */
if (strcmp(basename, basename2))
schedule_rename(s, cluster_num, g_strdup(path));
schedule_rename(s, cluster_num, qemu_strdup(path));
} else if (is_file(direntry))
/* new file */
schedule_new_file(s, g_strdup(path), cluster_num);
schedule_new_file(s, qemu_strdup(path), cluster_num);
else {
abort();
return 0;
@@ -1683,16 +1697,16 @@ static uint32_t get_cluster_count_for_direntry(BDRVVVFATState* s,
int64_t offset = cluster2sector(s, cluster_num);
vvfat_close_current_file(s);
for (i = 0; i < s->sectors_per_cluster; i++) {
if (!bdrv_is_allocated(s->qcow, offset + i, 1, &dummy)) {
if (vvfat_read(s->bs, offset, s->cluster_buffer, 1)) {
return -1;
}
if (bdrv_write(s->qcow, offset, s->cluster_buffer, 1)) {
return -2;
}
}
}
for (i = 0; i < s->sectors_per_cluster; i++)
if (!s->qcow->drv->bdrv_is_allocated(s->qcow,
offset + i, 1, &dummy)) {
if (vvfat_read(s->bs,
offset, s->cluster_buffer, 1))
return -1;
if (s->qcow->drv->bdrv_write(s->qcow,
offset, s->cluster_buffer, 1))
return -2;
}
}
}
@@ -1721,13 +1735,13 @@ static int check_directory_consistency(BDRVVVFATState *s,
int cluster_num, const char* path)
{
int ret = 0;
unsigned char* cluster = g_malloc(s->cluster_size);
unsigned char* cluster = qemu_malloc(s->cluster_size);
direntry_t* direntries = (direntry_t*)cluster;
mapping_t* mapping = find_mapping_for_cluster(s, cluster_num);
long_file_name lfn;
int path_len = strlen(path);
char path2[PATH_MAX + 1];
char path2[PATH_MAX];
assert(path_len < PATH_MAX); /* len was tested before! */
pstrcpy(path2, sizeof(path2), path);
@@ -1744,10 +1758,10 @@ static int check_directory_consistency(BDRVVVFATState *s,
mapping->mode &= ~MODE_DELETED;
if (strcmp(basename, basename2))
schedule_rename(s, cluster_num, g_strdup(path));
schedule_rename(s, cluster_num, qemu_strdup(path));
} else
/* new directory */
schedule_mkdir(s, cluster_num, g_strdup(path));
schedule_mkdir(s, cluster_num, qemu_strdup(path));
lfn_init(&lfn);
do {
@@ -1768,14 +1782,14 @@ DLOG(fprintf(stderr, "read cluster %d (sector %d)\n", (int)cluster_num, (int)clu
if (subret) {
fprintf(stderr, "Error fetching direntries\n");
fail:
g_free(cluster);
free(cluster);
return 0;
}
for (i = 0; i < 0x10 * s->sectors_per_cluster; i++) {
int cluster_count = 0;
DLOG(fprintf(stderr, "check direntry %d:\n", i); print_direntry(direntries + i));
DLOG(fprintf(stderr, "check direntry %d: \n", i); print_direntry(direntries + i));
if (is_volume_label(direntries + i) || is_dot(direntries + i) ||
is_free(direntries + i))
continue;
@@ -1836,7 +1850,7 @@ DLOG(fprintf(stderr, "check direntry %d:\n", i); print_direntry(direntries + i))
cluster_num = modified_fat_get(s, cluster_num);
} while(!fat_eof(s, cluster_num));
g_free(cluster);
free(cluster);
return ret;
}
@@ -1862,7 +1876,7 @@ DLOG(checkpoint());
*/
if (s->fat2 == NULL) {
int size = 0x200 * s->sectors_per_fat;
s->fat2 = g_malloc(size);
s->fat2 = qemu_malloc(size);
memcpy(s->fat2, s->fat.pointer, size);
}
check = vvfat_read(s->bs,
@@ -1981,9 +1995,8 @@ static int remove_mapping(BDRVVVFATState* s, int mapping_index)
mapping_t* first_mapping = array_get(&(s->mapping), 0);
/* free mapping */
if (mapping->first_mapping_index < 0) {
g_free(mapping->path);
}
if (mapping->first_mapping_index < 0)
free(mapping->path);
/* remove from s->mapping */
array_remove(&(s->mapping), mapping_index);
@@ -2205,7 +2218,7 @@ static int commit_one_file(BDRVVVFATState* s,
uint32_t first_cluster = c;
mapping_t* mapping = find_mapping_for_cluster(s, c);
uint32_t size = filesize_of_direntry(direntry);
char* cluster = g_malloc(s->cluster_size);
char* cluster = qemu_malloc(s->cluster_size);
uint32_t i;
int fd = 0;
@@ -2219,16 +2232,11 @@ static int commit_one_file(BDRVVVFATState* s,
if (fd < 0) {
fprintf(stderr, "Could not open %s... (%s, %d)\n", mapping->path,
strerror(errno), errno);
g_free(cluster);
return fd;
}
if (offset > 0) {
if (lseek(fd, offset, SEEK_SET) != offset) {
close(fd);
g_free(cluster);
return -3;
}
}
if (offset > 0)
if (lseek(fd, offset, SEEK_SET) != offset)
return -3;
while (offset < size) {
uint32_t c1;
@@ -2244,17 +2252,11 @@ static int commit_one_file(BDRVVVFATState* s,
ret = vvfat_read(s->bs, cluster2sector(s, c),
(uint8_t*)cluster, (rest_size + 0x1ff) / 0x200);
if (ret < 0) {
close(fd);
g_free(cluster);
return ret;
}
if (ret < 0)
return ret;
if (write(fd, cluster, rest_size) < 0) {
close(fd);
g_free(cluster);
return -2;
}
if (write(fd, cluster, rest_size) < 0)
return -2;
offset += rest_size;
c = c1;
@@ -2263,11 +2265,9 @@ static int commit_one_file(BDRVVVFATState* s,
if (ftruncate(fd, size)) {
perror("ftruncate()");
close(fd);
g_free(cluster);
return -4;
}
close(fd);
g_free(cluster);
return commit_mappings(s, first_cluster, dir_index);
}
@@ -2383,7 +2383,7 @@ static int handle_renames_and_mkdirs(BDRVVVFATState* s)
mapping_t* m = find_mapping_for_cluster(s,
begin_of_direntry(d));
int l = strlen(m->path);
char* new_path = g_malloc(l + diff + 1);
char* new_path = qemu_malloc(l + diff + 1);
assert(!strncmp(m->path, mapping->path, l2));
@@ -2399,7 +2399,7 @@ static int handle_renames_and_mkdirs(BDRVVVFATState* s)
}
}
g_free(old_path);
free(old_path);
array_remove(&(s->commits), i);
continue;
} else if (commit->action == ACTION_MKDIR) {
@@ -2640,9 +2640,7 @@ static int do_commit(BDRVVVFATState* s)
return ret;
}
if (s->qcow->drv->bdrv_make_empty) {
s->qcow->drv->bdrv_make_empty(s->qcow);
}
s->qcow->drv->bdrv_make_empty(s->qcow);
memset(s->used_clusters, 0, sector2cluster(s, s->sector_count));
@@ -2737,7 +2735,7 @@ DLOG(checkpoint());
* Use qcow backend. Commit later.
*/
DLOG(fprintf(stderr, "Write to qcow backend: %d + %d\n", (int)sector_num, nb_sectors));
ret = bdrv_write(s->qcow, sector_num, buf, nb_sectors);
ret = s->qcow->drv->bdrv_write(s->qcow, sector_num, buf, nb_sectors);
if (ret < 0) {
fprintf(stderr, "Error writing to qcow backend\n");
return ret;
@@ -2756,18 +2754,7 @@ DLOG(checkpoint());
return 0;
}
static coroutine_fn int vvfat_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVVFATState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vvfat_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int coroutine_fn vvfat_co_is_allocated(BlockDriverState *bs,
static int vvfat_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int* n)
{
BDRVVVFATState* s = bs->opaque;
@@ -2788,7 +2775,7 @@ static int write_target_commit(BlockDriverState *bs, int64_t sector_num,
static void write_target_close(BlockDriverState *bs) {
BDRVVVFATState* s = *((BDRVVVFATState**) bs->opaque);
bdrv_delete(s->qcow);
g_free(s->qcow_filename);
free(s->qcow_filename);
}
static BlockDriver vvfat_write_target = {
@@ -2807,13 +2794,8 @@ static int enable_write_target(BDRVVVFATState *s)
array_init(&(s->commits), sizeof(commit_t));
s->qcow_filename = g_malloc(1024);
ret = get_tmp_filename(s->qcow_filename, 1024);
if (ret < 0) {
g_free(s->qcow_filename);
s->qcow_filename = NULL;
return ret;
}
s->qcow_filename = qemu_malloc(1024);
get_tmp_filename(s->qcow_filename, 1024);
bdrv_qcow = bdrv_find_format("qcow");
options = parse_option_parameters("", bdrv_qcow->create_options, NULL);
@@ -2840,7 +2822,7 @@ static int enable_write_target(BDRVVVFATState *s)
s->bs->backing_hd = calloc(sizeof(BlockDriverState), 1);
s->bs->backing_hd->drv = &vvfat_write_target;
s->bs->backing_hd->opaque = g_malloc(sizeof(void*));
s->bs->backing_hd->opaque = qemu_malloc(sizeof(void*));
*(void**)s->bs->backing_hd->opaque = s;
return 0;
@@ -2854,23 +2836,18 @@ static void vvfat_close(BlockDriverState *bs)
array_free(&(s->fat));
array_free(&(s->directory));
array_free(&(s->mapping));
g_free(s->cluster_buffer);
if (s->qcow) {
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
if(s->cluster_buffer)
free(s->cluster_buffer);
}
static BlockDriver bdrv_vvfat = {
.format_name = "vvfat",
.instance_size = sizeof(BDRVVVFATState),
.bdrv_file_open = vvfat_open,
.bdrv_rebind = vvfat_rebind,
.bdrv_read = vvfat_co_read,
.bdrv_write = vvfat_co_write,
.bdrv_read = vvfat_read,
.bdrv_write = vvfat_write,
.bdrv_close = vvfat_close,
.bdrv_co_is_allocated = vvfat_co_is_allocated,
.bdrv_is_allocated = vvfat_is_allocated,
.protocol_name = "fat",
};
@@ -2901,5 +2878,11 @@ static void checkpoint(void) {
direntry = array_get(&(vvv->directory), mapping->dir_index);
assert(!memcmp(direntry->name, "USB H ", 11) || direntry->name[0]==0);
#endif
return;
/* avoid compiler warnings: */
hexdump(NULL, 100);
remove_mapping(vvv, 0);
print_mapping(NULL);
print_direntry(NULL);
}
#endif

View File

@@ -27,20 +27,10 @@
#include "block.h"
#include "qemu-option.h"
#include "qemu-queue.h"
#include "qemu-coroutine.h"
#include "qemu-timer.h"
#include "qapi-types.h"
#define BLOCK_FLAG_ENCRYPT 1
#define BLOCK_FLAG_COMPAT6 4
#define BLOCK_IO_LIMIT_READ 0
#define BLOCK_IO_LIMIT_WRITE 1
#define BLOCK_IO_LIMIT_TOTAL 2
#define BLOCK_IO_SLICE_TIME 100000000
#define NANOSECONDS_PER_SECOND 1000000000.0
#define BLOCK_OPT_SIZE "size"
#define BLOCK_OPT_ENCRYPT "encryption"
#define BLOCK_OPT_COMPAT6 "compat6"
@@ -50,86 +40,12 @@
#define BLOCK_OPT_TABLE_SIZE "table_size"
#define BLOCK_OPT_PREALLOC "preallocation"
#define BLOCK_OPT_SUBFMT "subformat"
#define BLOCK_OPT_COMPAT_LEVEL "compat"
typedef struct BdrvTrackedRequest BdrvTrackedRequest;
typedef struct BlockIOLimit {
int64_t bps[3];
int64_t iops[3];
} BlockIOLimit;
typedef struct BlockIOBaseValue {
uint64_t bytes[2];
uint64_t ios[2];
} BlockIOBaseValue;
typedef struct BlockJob BlockJob;
/**
* BlockJobType:
*
* A class type for block job objects.
*/
typedef struct BlockJobType {
/** Derived BlockJob struct size */
size_t instance_size;
/** String describing the operation, part of query-block-jobs QMP API */
const char *job_type;
/** Optional callback for job types that support setting a speed limit */
void (*set_speed)(BlockJob *job, int64_t speed, Error **errp);
} BlockJobType;
/**
* BlockJob:
*
* Long-running operation on a BlockDriverState.
*/
struct BlockJob {
/** The job type, including the job vtable. */
const BlockJobType *job_type;
/** The block device on which the job is operating. */
BlockDriverState *bs;
/**
* The coroutine that executes the job. If not NULL, it is
* reentered when busy is false and the job is cancelled.
*/
Coroutine *co;
/**
* Set to true if the job should cancel itself. The flag must
* always be tested just before toggling the busy flag from false
* to true. After a job has been cancelled, it should only yield
* if #qemu_aio_wait will ("sooner or later") reenter the coroutine.
*/
bool cancelled;
/**
* Set to false by the job while it is in a quiescent state, where
* no I/O is pending and the job has yielded on any condition
* that is not detected by #qemu_aio_wait, such as a timer.
*/
bool busy;
/** Offset that is published by the query-block-jobs QMP API */
int64_t offset;
/** Length that is published by the query-block-jobs QMP API */
int64_t len;
/** Speed that was set with @block_job_set_speed. */
int64_t speed;
/** The completion function that will be called when the job completes. */
BlockDriverCompletionFunc *cb;
/** The opaque value that is passed to the completion function. */
void *opaque;
};
typedef struct AIOPool {
void (*cancel)(BlockDriverAIOCB *acb);
int aiocb_size;
BlockDriverAIOCB *free_aiocb;
} AIOPool;
struct BlockDriver {
const char *format_name;
@@ -143,8 +59,10 @@ struct BlockDriver {
int (*bdrv_write)(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
void (*bdrv_close)(BlockDriverState *bs);
void (*bdrv_rebind)(BlockDriverState *bs);
int (*bdrv_create)(const char *filename, QEMUOptionParameter *options);
int (*bdrv_flush)(BlockDriverState *bs);
int (*bdrv_is_allocated)(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
int (*bdrv_set_key)(BlockDriverState *bs, const char *key);
int (*bdrv_make_empty)(BlockDriverState *bs);
/* aio */
@@ -156,44 +74,14 @@ struct BlockDriver {
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *(*bdrv_aio_flush)(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *(*bdrv_aio_discard)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque);
int (*bdrv_discard)(BlockDriverState *bs, int64_t sector_num,
int nb_sectors);
int coroutine_fn (*bdrv_co_readv)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
int coroutine_fn (*bdrv_co_writev)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
/*
* Efficiently zero a region of the disk image. Typically an image format
* would use a compact metadata representation to implement this. This
* function pointer may be NULL and .bdrv_co_writev() will be called
* instead.
*/
int coroutine_fn (*bdrv_co_write_zeroes)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors);
int coroutine_fn (*bdrv_co_discard)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors);
int coroutine_fn (*bdrv_co_is_allocated)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum);
int (*bdrv_aio_multiwrite)(BlockDriverState *bs, BlockRequest *reqs,
int num_reqs);
int (*bdrv_merge_requests)(BlockDriverState *bs, BlockRequest* a,
BlockRequest *b);
/*
* Invalidate any cached meta-data.
*/
void (*bdrv_invalidate_cache)(BlockDriverState *bs);
/*
* Flushes all data that was already written to the OS all the way down to
* the disk (for example raw-posix calls fsync()).
*/
int coroutine_fn (*bdrv_co_flush_to_disk)(BlockDriverState *bs);
/*
* Flushes all internal caches to the OS. The data may still sit in a
* writeback cache of the host OS, but it will survive a crash of the qemu
* process.
*/
int coroutine_fn (*bdrv_co_flush_to_os)(BlockDriverState *bs);
const char *protocol_name;
int (*bdrv_truncate)(BlockDriverState *bs, int64_t offset);
@@ -224,8 +112,8 @@ struct BlockDriver {
/* removable device specific */
int (*bdrv_is_inserted)(BlockDriverState *bs);
int (*bdrv_media_changed)(BlockDriverState *bs);
void (*bdrv_eject)(BlockDriverState *bs, bool eject_flag);
void (*bdrv_lock_medium)(BlockDriverState *bs, bool locked);
int (*bdrv_eject)(BlockDriverState *bs, int eject_flag);
int (*bdrv_set_locked)(BlockDriverState *bs, int locked);
/* to control generic scsi devices */
int (*bdrv_ioctl)(BlockDriverState *bs, unsigned long int req, void *buf);
@@ -254,58 +142,46 @@ struct BlockDriver {
QLIST_ENTRY(BlockDriver) list;
};
/*
* Note: the function bdrv_append() copies and swaps contents of
* BlockDriverStates, so if you add new fields to this struct, please
* inspect bdrv_append() to determine if the new fields need to be
* copied as well.
*/
struct BlockDriverState {
int64_t total_sectors; /* if we are reading a disk image, give its
size in sectors */
int read_only; /* if true, the media is read only */
int keep_read_only; /* if true, the media was requested to stay read only */
int open_flags; /* flags used to open the file, re-used for re-open */
int removable; /* if true, the media can be removed */
int locked; /* if true, the media cannot temporarily be ejected */
int tray_open; /* if true, the virtual tray is open */
int encrypted; /* if true, the media is encrypted */
int valid_key; /* if true, a valid encryption key has been set */
int sg; /* if true, the device is a /dev/sg* */
int copy_on_read; /* if true, copy read backing sectors into image
note this is a reference count */
/* event callback when inserting/removing */
void (*change_cb)(void *opaque, int reason);
void *change_opaque;
BlockDriver *drv; /* NULL means no media */
void *opaque;
void *dev; /* attached device model, if any */
/* TODO change to DeviceState when all users are qdevified */
const BlockDevOps *dev_ops;
void *dev_opaque;
DeviceState *peer;
char filename[1024];
char backing_file[1024]; /* if non zero, the image is a diff of
this file image */
char backing_format[16]; /* if non-zero and backing_file exists */
int is_temporary;
int media_changed;
BlockDriverState *backing_hd;
BlockDriverState *file;
/* number of in-flight copy-on-read requests */
unsigned int copy_on_read_in_flight;
/* async read/write emulation */
/* the time for latest disk I/O */
int64_t slice_time;
int64_t slice_start;
int64_t slice_end;
BlockIOLimit io_limits;
BlockIOBaseValue io_base;
CoQueue throttled_reqs;
QEMUTimer *block_timer;
bool io_limits_enabled;
void *sync_aiocb;
/* I/O stats (display with "info blockstats"). */
uint64_t nr_bytes[BDRV_MAX_IOTYPE];
uint64_t nr_ops[BDRV_MAX_IOTYPE];
uint64_t total_time_ns[BDRV_MAX_IOTYPE];
uint64_t rd_bytes;
uint64_t wr_bytes;
uint64_t rd_ops;
uint64_t wr_ops;
uint64_t wr_highest_sector;
/* Whether the disk can expand beyond total_sectors */
@@ -321,136 +197,70 @@ struct BlockDriverState {
drivers. They are not used by the block driver */
int cyls, heads, secs, translation;
BlockErrorAction on_read_error, on_write_error;
bool iostatus_enabled;
BlockDeviceIoStatus iostatus;
char device_name[32];
unsigned long *dirty_bitmap;
int64_t dirty_count;
int in_use; /* users other than guest access, eg. block migration */
QTAILQ_ENTRY(BlockDriverState) list;
QLIST_HEAD(, BdrvTrackedRequest) tracked_requests;
/* long-running background operation */
BlockJob *job;
void *private;
};
int get_tmp_filename(char *filename, int size);
#define CHANGE_MEDIA 0x01
#define CHANGE_SIZE 0x02
void bdrv_set_io_limits(BlockDriverState *bs,
BlockIOLimit *io_limits);
struct BlockDriverAIOCB {
AIOPool *pool;
BlockDriverState *bs;
BlockDriverCompletionFunc *cb;
void *opaque;
BlockDriverAIOCB *next;
};
void get_tmp_filename(char *filename, int size);
void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
void qemu_aio_release(void *p);
void *qemu_blockalign(BlockDriverState *bs, size_t size);
#ifdef _WIN32
int is_windows_drive(const char *filename);
#endif
/**
* block_job_create:
* @job_type: The class object for the newly-created job.
* @bs: The block device on which the job will operate.
* @speed: The maximum speed, in bytes per second, or 0 for unlimited.
* @cb: Completion function for the job.
* @opaque: Opaque pointer value passed to @cb.
* @errp: Error object.
*
* Create a new long-running block device job and return it. The job
* will call @cb asynchronously when the job completes. Note that
* @bs may have been closed by the time @cb is called. If
* this is the case, the job may be reported as either cancelled or
* completed.
*
* This function is not part of the public job interface; it should be
* called from a wrapper that is specific to the job type.
*/
void *block_job_create(const BlockJobType *job_type, BlockDriverState *bs,
int64_t speed, BlockDriverCompletionFunc *cb,
void *opaque, Error **errp);
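A sketch of how a job-specific wrapper might use block_job_create(), based only on the declarations in this header; the CopyJob type and names are hypothetical, the coroutine body is omitted, and the snippet assumes this header is included:

/* Sketch only (assumes this header is included): a hypothetical "copy" job
 * type built on the declarations above. Invented names; error handling and
 * how the job coroutine is entered are omitted. */
typedef struct CopyJob {
    BlockJob common;   /* embeds the generic job; placed first so the
                        * pointers can alias (assumption) */
    /* ... job-specific state ... */
} CopyJob;

static BlockJobType copy_job_type = {
    .instance_size = sizeof(CopyJob),
    .job_type      = "copy",
    /* .set_speed is optional and left unset here */
};

void copy_start(BlockDriverState *bs, int64_t speed,
                BlockDriverCompletionFunc *cb, void *opaque, Error **errp)
{
    CopyJob *job = block_job_create(&copy_job_type, bs, speed, cb, opaque, errp);
    if (!job) {
        return;        /* creation failed; assume errp has been set */
    }
    /* ... start a coroutine that does the work and finally calls
     *     block_job_complete(&job->common, ret); ... */
}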
typedef struct BlockConf {
BlockDriverState *bs;
uint16_t physical_block_size;
uint16_t logical_block_size;
uint16_t min_io_size;
uint32_t opt_io_size;
int32_t bootindex;
uint32_t discard_granularity;
} BlockConf;
/**
* block_job_sleep_ns:
* @job: The job that calls the function.
* @clock: The clock to sleep on.
* @ns: How many nanoseconds to stop for.
*
* Put the job to sleep (assuming that it wasn't canceled) for @ns
* nanoseconds. Canceling the job will interrupt the wait immediately.
*/
void block_job_sleep_ns(BlockJob *job, QEMUClock *clock, int64_t ns);
static inline unsigned int get_physical_block_exp(BlockConf *conf)
{
unsigned int exp = 0, size;
/**
* block_job_complete:
* @job: The job being completed.
* @ret: The status code.
*
* Call the completion function that was registered at creation time, and
* free @job.
*/
void block_job_complete(BlockJob *job, int ret);
for (size = conf->physical_block_size;
size > conf->logical_block_size;
size >>= 1) {
exp++;
}
/**
* block_job_set_speed:
* @job: The job to set the speed for.
* @speed: The new value
* @errp: Error object.
*
* Set a rate-limiting parameter for the job; the actual meaning may
* vary depending on the job type.
*/
void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp);
return exp;
}
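For example, with a 4 KiB physical and 512-byte logical block size the loop halves 4096 three times before reaching 512, so the function returns 3 (physical = logical << 3). The same computation as a standalone sketch:

/* Standalone sketch of the computation done by get_physical_block_exp(). */
#include <stdio.h>

static unsigned int physical_block_exp(unsigned int physical, unsigned int logical)
{
    unsigned int exp = 0, size;

    for (size = physical; size > logical; size >>= 1) {
        exp++;
    }
    return exp;
}

int main(void)
{
    /* 4 KiB physical sectors behind a 512-byte logical interface -> exponent 3 */
    printf("exp = %u\n", physical_block_exp(4096, 512));
    return 0;
}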
/**
* block_job_cancel:
* @job: The job to be canceled.
*
* Asynchronously cancel the specified job.
*/
void block_job_cancel(BlockJob *job);
/**
* block_job_is_cancelled:
* @job: The job being queried.
*
* Returns whether the job is scheduled for cancellation.
*/
bool block_job_is_cancelled(BlockJob *job);
/**
* block_job_cancel_sync:
* @job: The job to be canceled.
*
* Asynchronously cancel the job and wait for it to reach a quiescent
* state. Note that the completion callback will still be called
* asynchronously, hence it is *not* valid to call #bdrv_delete
* immediately after #block_job_cancel_sync. Users of block jobs
* will usually protect the BlockDriverState objects with a reference
* count, should this be a concern.
*
* Returns the return value from the job if the job actually completed
* during the call, or -ECANCELED if it was canceled.
*/
int block_job_cancel_sync(BlockJob *job);
/**
* stream_start:
* @bs: Block device to operate on.
* @base: Block device that will become the new base, or %NULL to
* flatten the whole backing file chain onto @bs.
* @base_id: The file name that will be written to @bs as the new
* backing file if the job completes. Ignored if @base is %NULL.
* @speed: The maximum speed, in bytes per second, or 0 for unlimited.
* @cb: Completion function for the job.
* @opaque: Opaque pointer value passed to @cb.
* @errp: Error object.
*
* Start a streaming operation on @bs. Clusters that are unallocated
* in @bs, but allocated in any image between @base and @bs (both
* exclusive) will be written to @bs. At the end of a successful
* streaming job, the backing file of @bs will be changed to
* @base_id in the written image and to @base in the live BlockDriverState.
*/
void stream_start(BlockDriverState *bs, BlockDriverState *base,
const char *base_id, int64_t speed,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp);
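A sketch of a caller, using only the stream_start() declaration above and assuming the usual BlockDriverCompletionFunc signature of (void *opaque, int ret); the function names below are illustrative:

/* Sketch only (assumes this header is included): start streaming the whole
 * backing chain of bs into it. stream_done() is illustrative; only the
 * stream_start() declaration above is taken from the header, and the
 * (void *opaque, int ret) callback signature is assumed. */
static void stream_done(void *opaque, int ret)
{
    /* ret holds the job's result; negative values indicate an error */
}

static void start_flatten(BlockDriverState *bs, Error **errp)
{
    /* base = NULL flattens the entire backing file chain onto bs;
     * base_id is ignored in that case; speed = 0 means no rate limit. */
    stream_start(bs, NULL, NULL, 0, stream_done, NULL, errp);
}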
#define DEFINE_BLOCK_PROPERTIES(_state, _conf) \
DEFINE_PROP_DRIVE("drive", _state, _conf.bs), \
DEFINE_PROP_UINT16("logical_block_size", _state, \
_conf.logical_block_size, 512), \
DEFINE_PROP_UINT16("physical_block_size", _state, \
_conf.physical_block_size, 512), \
DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0), \
DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0), \
DEFINE_PROP_INT32("bootindex", _state, _conf.bootindex, -1), \
DEFINE_PROP_UINT32("discard_granularity", _state, \
_conf.discard_granularity, 0)
#endif /* BLOCK_INT_H */

View File

@@ -13,12 +13,11 @@
#include "qerror.h"
#include "qemu-option.h"
#include "qemu-config.h"
#include "qemu-objects.h"
#include "sysemu.h"
#include "hw/qdev.h"
#include "block_int.h"
#include "qmp-commands.h"
#include "trace.h"
#include "arch_init.h"
DriveInfo *extboot_drive = NULL;
static QTAILQ_HEAD(drivelist, DriveInfo) drives = QTAILQ_HEAD_INITIALIZER(drives);
@@ -64,9 +63,6 @@ void blockdev_mark_auto_del(BlockDriverState *bs)
{
DriveInfo *dinfo = drive_get_by_blockdev(bs);
if (bs->job) {
block_job_cancel(bs->job);
}
if (dinfo) {
dinfo->auto_del = 1;
}
@@ -188,9 +184,9 @@ static void drive_uninit(DriveInfo *dinfo)
{
qemu_opts_del(dinfo->opts);
bdrv_delete(dinfo->bdrv);
g_free(dinfo->id);
qemu_free(dinfo->id);
QTAILQ_REMOVE(&drives, dinfo, next);
g_free(dinfo);
qemu_free(dinfo);
}
void drive_put_ref(DriveInfo *dinfo)
@@ -206,37 +202,6 @@ void drive_get_ref(DriveInfo *dinfo)
dinfo->refcount++;
}
typedef struct {
QEMUBH *bh;
DriveInfo *dinfo;
} DrivePutRefBH;
static void drive_put_ref_bh(void *opaque)
{
DrivePutRefBH *s = opaque;
drive_put_ref(s->dinfo);
qemu_bh_delete(s->bh);
g_free(s);
}
/*
* Release a drive reference in a BH
*
* It is not possible to use drive_put_ref() from a callback function when the
* callers still need the drive. In such cases we schedule a BH to release the
* reference.
*/
static void drive_put_ref_bh_schedule(DriveInfo *dinfo)
{
DrivePutRefBH *s;
s = g_new(DrivePutRefBH, 1);
s->bh = qemu_bh_new(drive_put_ref_bh, s);
s->dinfo = dinfo;
qemu_bh_schedule(s->bh);
}
static int parse_block_error_action(const char *buf, int is_read)
{
if (!strcmp(buf, "ignore")) {
@@ -254,26 +219,6 @@ static int parse_block_error_action(const char *buf, int is_read)
}
}
static bool do_check_io_limits(BlockIOLimit *io_limits)
{
bool bps_flag;
bool iops_flag;
assert(io_limits);
bps_flag = (io_limits->bps[BLOCK_IO_LIMIT_TOTAL] != 0)
&& ((io_limits->bps[BLOCK_IO_LIMIT_READ] != 0)
|| (io_limits->bps[BLOCK_IO_LIMIT_WRITE] != 0));
iops_flag = (io_limits->iops[BLOCK_IO_LIMIT_TOTAL] != 0)
&& ((io_limits->iops[BLOCK_IO_LIMIT_READ] != 0)
|| (io_limits->iops[BLOCK_IO_LIMIT_WRITE] != 0));
if (bps_flag || iops_flag) {
return false;
}
return true;
}
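In other words, a total bps (or iops) limit cannot be combined with the corresponding per-direction read/write limits, which is what the error message in drive_init() reports further down. The same check as a standalone sketch:

/* Standalone sketch of the rule enforced by do_check_io_limits(): a total
 * limit and a read/write split limit of the same kind must not be combined. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { LIMIT_READ, LIMIT_WRITE, LIMIT_TOTAL };

struct limits { int64_t bps[3], iops[3]; };

static bool limits_ok(const struct limits *l)
{
    bool bps_conflict  = l->bps[LIMIT_TOTAL]  &&
                         (l->bps[LIMIT_READ]  || l->bps[LIMIT_WRITE]);
    bool iops_conflict = l->iops[LIMIT_TOTAL] &&
                         (l->iops[LIMIT_READ] || l->iops[LIMIT_WRITE]);
    return !(bps_conflict || iops_conflict);
}

int main(void)
{
    struct limits ok  = { .bps  = { [LIMIT_TOTAL] = 10 * 1024 * 1024 } };
    struct limits bad = { .iops = { [LIMIT_TOTAL] = 100, [LIMIT_READ] = 50 } };

    printf("total bps only       -> %s\n", limits_ok(&ok)  ? "accepted" : "rejected");
    printf("total iops + iops_rd -> %s\n", limits_ok(&bad) ? "accepted" : "rejected");
    return 0;
}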
DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
{
const char *buf;
@@ -293,9 +238,8 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
int on_read_error, on_write_error;
const char *devaddr;
DriveInfo *dinfo;
BlockIOLimit io_limits;
int is_extboot = 0;
int snapshot = 0;
bool copy_on_read;
int ret;
translation = BIOS_ATA_TRANSLATION_AUTO;
@@ -312,7 +256,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
snapshot = qemu_opt_get_bool(opts, "snapshot", 0);
ro = qemu_opt_get_bool(opts, "readonly", 0);
copy_on_read = qemu_opt_get_bool(opts, "copy-on-read", false);
file = qemu_opt_get(opts, "file");
serial = qemu_opt_get(opts, "serial");
@@ -381,9 +324,18 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
}
if ((buf = qemu_opt_get(opts, "cache")) != NULL) {
if (bdrv_parse_cache_flags(buf, &bdrv_flags) != 0) {
error_report("invalid cache option");
return NULL;
if (!strcmp(buf, "off") || !strcmp(buf, "none")) {
bdrv_flags |= BDRV_O_NOCACHE | BDRV_O_CACHE_WB;
} else if (!strcmp(buf, "writeback")) {
bdrv_flags |= BDRV_O_CACHE_WB;
} else if (!strcmp(buf, "unsafe")) {
bdrv_flags |= BDRV_O_CACHE_WB;
bdrv_flags |= BDRV_O_NO_FLUSH;
} else if (!strcmp(buf, "writethrough")) {
/* this is the default */
} else {
error_report("invalid cache option");
return NULL;
}
}
@@ -414,23 +366,9 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
}
}
/* disk I/O throttling */
io_limits.bps[BLOCK_IO_LIMIT_TOTAL] =
qemu_opt_get_number(opts, "bps", 0);
io_limits.bps[BLOCK_IO_LIMIT_READ] =
qemu_opt_get_number(opts, "bps_rd", 0);
io_limits.bps[BLOCK_IO_LIMIT_WRITE] =
qemu_opt_get_number(opts, "bps_wr", 0);
io_limits.iops[BLOCK_IO_LIMIT_TOTAL] =
qemu_opt_get_number(opts, "iops", 0);
io_limits.iops[BLOCK_IO_LIMIT_READ] =
qemu_opt_get_number(opts, "iops_rd", 0);
io_limits.iops[BLOCK_IO_LIMIT_WRITE] =
qemu_opt_get_number(opts, "iops_wr", 0);
if (!do_check_io_limits(&io_limits)) {
error_report("bps(iops) and bps_rd/bps_wr(iops_rd/iops_wr) "
"cannot be used at the same time");
is_extboot = qemu_opt_get_bool(opts, "boot", 0);
if (is_extboot && extboot_drive) {
fprintf(stderr, "qemu: two bootable drives specified\n");
return NULL;
}
@@ -513,12 +451,12 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
/* init */
dinfo = g_malloc0(sizeof(*dinfo));
dinfo = qemu_mallocz(sizeof(*dinfo));
if ((buf = qemu_opts_id(opts)) != NULL) {
dinfo->id = g_strdup(buf);
dinfo->id = qemu_strdup(buf);
} else {
/* no id supplied -> create one */
dinfo->id = g_malloc0(32);
dinfo->id = qemu_mallocz(32);
if (type == IF_IDE || type == IF_SCSI)
mediastr = (media == MEDIA_CDROM) ? "-cd" : "-hd";
if (max_devs)
@@ -535,15 +473,15 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
dinfo->unit = unit_id;
dinfo->opts = opts;
dinfo->refcount = 1;
if (serial) {
pstrcpy(dinfo->serial, sizeof(dinfo->serial), serial);
}
if (serial)
strncpy(dinfo->serial, serial, sizeof(dinfo->serial) - 1);
QTAILQ_INSERT_TAIL(&drives, dinfo, next);
bdrv_set_on_error(dinfo->bdrv, on_read_error, on_write_error);
if (is_extboot) {
extboot_drive = dinfo;
}
/* disk I/O throttling */
bdrv_set_io_limits(dinfo->bdrv, &io_limits);
bdrv_set_on_error(dinfo->bdrv, on_read_error, on_write_error);
switch(type) {
case IF_IDE:
@@ -558,23 +496,24 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
}
break;
case MEDIA_CDROM:
bdrv_set_removable(dinfo->bdrv, 1);
dinfo->media_cd = 1;
break;
}
break;
case IF_SD:
/* FIXME: This isn't really a floppy, but it's a reasonable
approximation. */
case IF_FLOPPY:
bdrv_set_removable(dinfo->bdrv, 1);
break;
case IF_PFLASH:
case IF_MTD:
break;
case IF_VIRTIO:
/* add virtio block device */
opts = qemu_opts_create(qemu_find_opts("device"), NULL, 0);
if (arch_type == QEMU_ARCH_S390X) {
qemu_opt_set(opts, "driver", "virtio-blk-s390");
} else {
qemu_opt_set(opts, "driver", "virtio-blk-pci");
}
qemu_opt_set(opts, "driver", "virtio-blk");
qemu_opt_set(opts, "drive", dinfo->id);
if (devaddr)
qemu_opt_set(opts, "addr", devaddr);
@@ -591,20 +530,11 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
bdrv_flags |= (BDRV_O_SNAPSHOT|BDRV_O_CACHE_WB|BDRV_O_NO_FLUSH);
}
if (copy_on_read) {
bdrv_flags |= BDRV_O_COPY_ON_READ;
}
if (runstate_check(RUN_STATE_INMIGRATE)) {
bdrv_flags |= BDRV_O_INCOMING;
}
if (media == MEDIA_CDROM) {
/* CDROM is fine for any interface, don't check. */
ro = 1;
} else if (ro == 1) {
if (type != IF_SCSI && type != IF_VIRTIO && type != IF_FLOPPY &&
type != IF_NONE && type != IF_PFLASH) {
if (type != IF_SCSI && type != IF_VIRTIO && type != IF_FLOPPY && type != IF_NONE) {
error_report("readonly not supported by this bus type");
goto err;
}
@@ -625,9 +555,9 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
err:
bdrv_delete(dinfo->bdrv);
g_free(dinfo->id);
qemu_free(dinfo->id);
QTAILQ_REMOVE(&drives, dinfo, next);
g_free(dinfo);
qemu_free(dinfo);
return NULL;
}
@@ -635,354 +565,182 @@ void do_commit(Monitor *mon, const QDict *qdict)
{
const char *device = qdict_get_str(qdict, "device");
BlockDriverState *bs;
int ret;
if (!strcmp(device, "all")) {
ret = bdrv_commit_all();
if (ret == -EBUSY) {
qerror_report(QERR_DEVICE_IN_USE, device);
return;
}
bdrv_commit_all();
} else {
bs = bdrv_find(device);
if (!bs) {
qerror_report(QERR_DEVICE_NOT_FOUND, device);
return;
}
ret = bdrv_commit(bs);
if (ret == -EBUSY) {
qerror_report(QERR_DEVICE_IN_USE, device);
return;
}
bdrv_commit(bs);
}
}
static void blockdev_do_action(int kind, void *data, Error **errp)
{
BlockdevAction action;
BlockdevActionList list;
action.kind = kind;
action.data = data;
list.value = &action;
list.next = NULL;
qmp_transaction(&list, errp);
}
void qmp_blockdev_snapshot_sync(const char *device, const char *snapshot_file,
bool has_format, const char *format,
bool has_mode, enum NewImageMode mode,
Error **errp)
{
BlockdevSnapshot snapshot = {
.device = (char *) device,
.snapshot_file = (char *) snapshot_file,
.has_format = has_format,
.format = (char *) format,
.has_mode = has_mode,
.mode = mode,
};
blockdev_do_action(BLOCKDEV_ACTION_KIND_BLOCKDEV_SNAPSHOT_SYNC, &snapshot,
errp);
}
/* New and old BlockDriverState structs for group snapshots */
typedef struct BlkTransactionStates {
BlockDriverState *old_bs;
BlockDriverState *new_bs;
QSIMPLEQ_ENTRY(BlkTransactionStates) entry;
} BlkTransactionStates;
/*
* 'Atomic' group snapshots. The snapshots are taken as a set, and if any fail
* then we do not pivot any of the devices in the group, and abandon the
* snapshots
*/
void qmp_transaction(BlockdevActionList *dev_list, Error **errp)
{
int ret = 0;
BlockdevActionList *dev_entry = dev_list;
BlkTransactionStates *states, *next;
QSIMPLEQ_HEAD(snap_bdrv_states, BlkTransactionStates) snap_bdrv_states;
QSIMPLEQ_INIT(&snap_bdrv_states);
/* drain all i/o before any snapshots */
bdrv_drain_all();
/* We don't do anything in this loop that commits us to the snapshot */
while (NULL != dev_entry) {
BlockdevAction *dev_info = NULL;
BlockDriver *proto_drv;
BlockDriver *drv;
int flags;
enum NewImageMode mode;
const char *new_image_file;
const char *device;
const char *format = "qcow2";
dev_info = dev_entry->value;
dev_entry = dev_entry->next;
states = g_malloc0(sizeof(BlkTransactionStates));
QSIMPLEQ_INSERT_TAIL(&snap_bdrv_states, states, entry);
switch (dev_info->kind) {
case BLOCKDEV_ACTION_KIND_BLOCKDEV_SNAPSHOT_SYNC:
device = dev_info->blockdev_snapshot_sync->device;
if (!dev_info->blockdev_snapshot_sync->has_mode) {
dev_info->blockdev_snapshot_sync->mode = NEW_IMAGE_MODE_ABSOLUTE_PATHS;
}
new_image_file = dev_info->blockdev_snapshot_sync->snapshot_file;
if (dev_info->blockdev_snapshot_sync->has_format) {
format = dev_info->blockdev_snapshot_sync->format;
}
mode = dev_info->blockdev_snapshot_sync->mode;
break;
default:
abort();
}
drv = bdrv_find_format(format);
if (!drv) {
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
goto delete_and_fail;
}
states->old_bs = bdrv_find(device);
if (!states->old_bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
goto delete_and_fail;
}
if (!bdrv_is_inserted(states->old_bs)) {
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
goto delete_and_fail;
}
if (bdrv_in_use(states->old_bs)) {
error_set(errp, QERR_DEVICE_IN_USE, device);
goto delete_and_fail;
}
if (!bdrv_is_read_only(states->old_bs)) {
if (bdrv_flush(states->old_bs)) {
error_set(errp, QERR_IO_ERROR);
goto delete_and_fail;
}
}
flags = states->old_bs->open_flags;
proto_drv = bdrv_find_protocol(new_image_file);
if (!proto_drv) {
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
goto delete_and_fail;
}
/* create new image w/backing file */
if (mode != NEW_IMAGE_MODE_EXISTING) {
ret = bdrv_img_create(new_image_file, format,
states->old_bs->filename,
states->old_bs->drv->format_name,
NULL, -1, flags);
if (ret) {
error_set(errp, QERR_OPEN_FILE_FAILED, new_image_file);
goto delete_and_fail;
}
}
/* We will manually add the backing_hd field to the bs later */
states->new_bs = bdrv_new("");
ret = bdrv_open(states->new_bs, new_image_file,
flags | BDRV_O_NO_BACKING, drv);
if (ret != 0) {
error_set(errp, QERR_OPEN_FILE_FAILED, new_image_file);
goto delete_and_fail;
}
}
/* Now we are going to do the actual pivot. Everything up to this point
* is reversible, but we are committed at this point */
QSIMPLEQ_FOREACH(states, &snap_bdrv_states, entry) {
/* This removes our old bs from the bdrv_states, and adds the new bs */
bdrv_append(states->new_bs, states->old_bs);
}
/* success */
goto exit;
delete_and_fail:
/*
* failure, and it is all-or-none; abandon each new bs, and keep using
* the original bs for all images
*/
QSIMPLEQ_FOREACH(states, &snap_bdrv_states, entry) {
if (states->new_bs) {
bdrv_delete(states->new_bs);
}
}
exit:
QSIMPLEQ_FOREACH_SAFE(states, &snap_bdrv_states, entry, next) {
g_free(states);
}
return;
}
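The all-or-none behaviour implemented above follows a generic two-phase shape: queue per-device state while every step is still reversible, pivot only once the whole group has prepared, and otherwise abandon whatever was created. A minimal standalone sketch of that shape, using hypothetical prepare/commit/rollback helpers rather than the QEMU block-layer calls:

    #include <stdio.h>
    #include <stdlib.h>

    struct state {
        int resource;                 /* stands in for the freshly opened new_bs */
        struct state *next;
    };

    /* Hypothetical per-item steps; -1 signals failure. */
    static int prepare(struct state *s, int id) { s->resource = id; return id == 3 ? -1 : 0; }
    static void commit(struct state *s) { printf("pivot %d\n", s->resource); }
    static void rollback(struct state *s) { printf("abandon %d\n", s->resource); }

    static int transaction(int nitems)
    {
        struct state *head = NULL, *s, *next;
        int ret = 0, i;

        /* Phase 1: reversible; any failure abandons everything prepared so far. */
        for (i = 0; i < nitems; i++) {
            s = calloc(1, sizeof(*s));
            s->next = head;
            head = s;
            if (prepare(s, i) < 0) {
                ret = -1;
                goto fail;
            }
        }
        /* Phase 2: the point of no return; apply every prepared item. */
        for (s = head; s; s = s->next) {
            commit(s);
        }
        goto out;
    fail:
        for (s = head; s; s = s->next) {
            rollback(s);
        }
    out:
        for (s = head; s; s = next) {
            next = s->next;
            free(s);
        }
        return ret;
    }

    int main(void)
    {
        return transaction(5) == -1 ? 0 : 1;  /* item 3 fails, so the group rolls back */
    }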
static void eject_device(BlockDriverState *bs, int force, Error **errp)
{
if (bdrv_in_use(bs)) {
error_set(errp, QERR_DEVICE_IN_USE, bdrv_get_device_name(bs));
return;
}
if (!bdrv_dev_has_removable_media(bs)) {
error_set(errp, QERR_DEVICE_NOT_REMOVABLE, bdrv_get_device_name(bs));
return;
}
if (bdrv_dev_is_medium_locked(bs) && !bdrv_dev_is_tray_open(bs)) {
bdrv_dev_eject_request(bs, force);
if (!force) {
error_set(errp, QERR_DEVICE_LOCKED, bdrv_get_device_name(bs));
return;
}
}
bdrv_close(bs);
}
void qmp_eject(const char *device, bool has_force, bool force, Error **errp)
int do_snapshot_blkdev(Monitor *mon, const QDict *qdict, QObject **ret_data)
{
const char *device = qdict_get_str(qdict, "device");
const char *filename = qdict_get_try_str(qdict, "snapshot-file");
const char *format = qdict_get_try_str(qdict, "format");
BlockDriverState *bs;
BlockDriver *drv, *old_drv, *proto_drv;
int ret = 0;
int flags;
char old_filename[1024];
if (!filename) {
qerror_report(QERR_MISSING_PARAMETER, "snapshot-file");
ret = -1;
goto out;
}
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
qerror_report(QERR_DEVICE_NOT_FOUND, device);
ret = -1;
goto out;
}
eject_device(bs, force, errp);
pstrcpy(old_filename, sizeof(old_filename), bs->filename);
old_drv = bs->drv;
flags = bs->open_flags;
if (!format) {
format = "qcow2";
}
drv = bdrv_find_format(format);
if (!drv) {
qerror_report(QERR_INVALID_BLOCK_FORMAT, format);
ret = -1;
goto out;
}
proto_drv = bdrv_find_protocol(filename);
if (!proto_drv) {
qerror_report(QERR_INVALID_BLOCK_FORMAT, format);
ret = -1;
goto out;
}
ret = bdrv_img_create(filename, format, bs->filename,
bs->drv->format_name, NULL, -1, flags);
if (ret) {
goto out;
}
qemu_aio_flush();
bdrv_flush(bs);
bdrv_close(bs);
ret = bdrv_open(bs, filename, flags, drv);
/*
* If reopening the image file we just created fails, fall back
* and try to re-open the original image. If that fails too, we
* are in serious trouble.
*/
if (ret != 0) {
ret = bdrv_open(bs, old_filename, flags, old_drv);
if (ret != 0) {
qerror_report(QERR_OPEN_FILE_FAILED, old_filename);
} else {
qerror_report(QERR_OPEN_FILE_FAILED, filename);
}
}
out:
if (ret) {
ret = -1;
}
return ret;
}
void qmp_block_passwd(const char *device, const char *password, Error **errp)
static int eject_device(Monitor *mon, BlockDriverState *bs, int force)
{
if (!force) {
if (!bdrv_is_removable(bs)) {
qerror_report(QERR_DEVICE_NOT_REMOVABLE,
bdrv_get_device_name(bs));
return -1;
}
if (bdrv_is_locked(bs)) {
qerror_report(QERR_DEVICE_LOCKED, bdrv_get_device_name(bs));
return -1;
}
}
bdrv_close(bs);
return 0;
}
int do_eject(Monitor *mon, const QDict *qdict, QObject **ret_data)
{
BlockDriverState *bs;
int force = qdict_get_try_bool(qdict, "force", 0);
const char *filename = qdict_get_str(qdict, "device");
bs = bdrv_find(filename);
if (!bs) {
qerror_report(QERR_DEVICE_NOT_FOUND, filename);
return -1;
}
return eject_device(mon, bs, force);
}
int do_block_set_passwd(Monitor *mon, const QDict *qdict,
QObject **ret_data)
{
BlockDriverState *bs;
int err;
bs = bdrv_find(device);
bs = bdrv_find(qdict_get_str(qdict, "device"));
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
qerror_report(QERR_DEVICE_NOT_FOUND, qdict_get_str(qdict, "device"));
return -1;
}
err = bdrv_set_key(bs, password);
err = bdrv_set_key(bs, qdict_get_str(qdict, "password"));
if (err == -EINVAL) {
error_set(errp, QERR_DEVICE_NOT_ENCRYPTED, bdrv_get_device_name(bs));
return;
qerror_report(QERR_DEVICE_NOT_ENCRYPTED, bdrv_get_device_name(bs));
return -1;
} else if (err < 0) {
error_set(errp, QERR_INVALID_PASSWORD);
return;
qerror_report(QERR_INVALID_PASSWORD);
return -1;
}
return 0;
}
static void qmp_bdrv_open_encrypted(BlockDriverState *bs, const char *filename,
int bdrv_flags, BlockDriver *drv,
const char *password, Error **errp)
{
if (bdrv_open(bs, filename, bdrv_flags, drv) < 0) {
error_set(errp, QERR_OPEN_FILE_FAILED, filename);
return;
}
if (bdrv_key_required(bs)) {
if (password) {
if (bdrv_set_key(bs, password) < 0) {
error_set(errp, QERR_INVALID_PASSWORD);
}
} else {
error_set(errp, QERR_DEVICE_ENCRYPTED, bdrv_get_device_name(bs),
bdrv_get_encrypted_filename(bs));
}
} else if (password) {
error_set(errp, QERR_DEVICE_NOT_ENCRYPTED, bdrv_get_device_name(bs));
}
}
void qmp_change_blockdev(const char *device, const char *filename,
bool has_format, const char *format, Error **errp)
int do_change_block(Monitor *mon, const char *device,
const char *filename, const char *fmt)
{
BlockDriverState *bs;
BlockDriver *drv = NULL;
int bdrv_flags;
Error *err = NULL;
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
qerror_report(QERR_DEVICE_NOT_FOUND, device);
return -1;
}
if (format) {
drv = bdrv_find_whitelisted_format(format);
if (fmt) {
drv = bdrv_find_whitelisted_format(fmt);
if (!drv) {
error_set(errp, QERR_INVALID_BLOCK_FORMAT, format);
return;
qerror_report(QERR_INVALID_BLOCK_FORMAT, fmt);
return -1;
}
}
eject_device(bs, 0, &err);
if (error_is_set(&err)) {
error_propagate(errp, err);
return;
if (eject_device(mon, bs, 0) < 0) {
return -1;
}
bdrv_flags = bdrv_is_read_only(bs) ? 0 : BDRV_O_RDWR;
bdrv_flags |= bdrv_is_snapshot(bs) ? BDRV_O_SNAPSHOT : 0;
qmp_bdrv_open_encrypted(bs, filename, bdrv_flags, drv, NULL, errp);
}
/* throttling disk I/O limits */
void qmp_block_set_io_throttle(const char *device, int64_t bps, int64_t bps_rd,
int64_t bps_wr, int64_t iops, int64_t iops_rd,
int64_t iops_wr, Error **errp)
{
BlockIOLimit io_limits;
BlockDriverState *bs;
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
io_limits.bps[BLOCK_IO_LIMIT_TOTAL] = bps;
io_limits.bps[BLOCK_IO_LIMIT_READ] = bps_rd;
io_limits.bps[BLOCK_IO_LIMIT_WRITE] = bps_wr;
io_limits.iops[BLOCK_IO_LIMIT_TOTAL] = iops;
io_limits.iops[BLOCK_IO_LIMIT_READ] = iops_rd;
io_limits.iops[BLOCK_IO_LIMIT_WRITE] = iops_wr;
if (!do_check_io_limits(&io_limits)) {
error_set(errp, QERR_INVALID_PARAMETER_COMBINATION);
return;
}
bs->io_limits = io_limits;
bs->slice_time = BLOCK_IO_SLICE_TIME;
if (!bs->io_limits_enabled && bdrv_io_limits_enabled(bs)) {
bdrv_io_limits_enable(bs);
} else if (bs->io_limits_enabled && !bdrv_io_limits_enabled(bs)) {
bdrv_io_limits_disable(bs);
} else {
if (bs->block_timer) {
qemu_mod_timer(bs->block_timer, qemu_get_clock_ns(vm_clock));
}
if (bdrv_open(bs, filename, bdrv_flags, drv) < 0) {
qerror_report(QERR_OPEN_FILE_FAILED, filename);
return -1;
}
return monitor_read_bdrv_key_start(mon, bs, NULL, NULL);
}
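The limits stored by qmp_block_set_io_throttle above are consumed elsewhere by a slice-based limiter: a request only proceeds while the bytes (or ops) already granted in the current slice stay under limit times slice length. A rough self-contained illustration of that accounting, with invented names and none of the timer or request-queueing machinery:

    #include <stdbool.h>
    #include <stdint.h>

    struct throttle {
        double bps_limit;        /* bytes per second allowed, 0 means unlimited */
        double slice_secs;       /* length of the accounting window */
        double slice_start;      /* when the current window began */
        uint64_t bytes_in_slice; /* bytes already granted in this window */
    };

    /* Return true if a request of 'bytes' may be dispatched at time 'now'. */
    bool throttle_allow(struct throttle *t, uint64_t bytes, double now)
    {
        if (t->bps_limit <= 0) {
            return true;                          /* throttling disabled */
        }
        if (now - t->slice_start >= t->slice_secs) {
            t->slice_start = now;                 /* new slice, reset the budget */
            t->bytes_in_slice = 0;
        }
        if (t->bytes_in_slice + bytes > t->bps_limit * t->slice_secs) {
            return false;                         /* would exceed the slice budget */
        }
        t->bytes_in_slice += bytes;
        return true;
    }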
int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data)
@@ -1001,16 +759,16 @@ int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data)
}
/* quiesce block driver; prevent further io */
bdrv_drain_all();
qemu_aio_flush();
bdrv_flush(bs);
bdrv_close(bs);
/* if we have a device attached to this BlockDriverState
/* if we have a device associated with this BlockDriverState (bs->peer)
* then we need to make the drive anonymous until the device
* can be removed. If this is a drive with no device backing
* then we can just get rid of the block driver state right here.
*/
if (bdrv_get_attached_dev(bs)) {
if (bs->peer) {
bdrv_make_anon(bs);
} else {
drive_uninit(drive_get_by_blockdev(bs));
@@ -1019,182 +777,32 @@ int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data)
return 0;
}
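The bs->peer branch above encodes a deferred-teardown rule: state that still has a consumer is only detached from its name, while unreferenced state is freed immediately. A tiny sketch of that rule in isolation, with invented types:

    #include <stdlib.h>

    struct drive {
        char name[32];
        void *consumer;          /* non-NULL while a guest device still uses it */
    };

    /* Hide the drive if something still references it, free it otherwise. */
    void drive_del_sketch(struct drive *d)
    {
        if (d->consumer) {
            d->name[0] = '\0';   /* make it anonymous; final teardown happens later */
        } else {
            free(d);
        }
    }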
void qmp_block_resize(const char *device, int64_t size, Error **errp)
/*
* XXX: replace the QERR_UNDEFINED_ERROR errors with real values once the
* existing QERR_ macro mess is cleaned up. A good example for better
* error reports can be found in the qemu-img resize code.
*/
int do_block_resize(Monitor *mon, const QDict *qdict, QObject **ret_data)
{
const char *device = qdict_get_str(qdict, "device");
int64_t size = qdict_get_int(qdict, "size");
BlockDriverState *bs;
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
qerror_report(QERR_DEVICE_NOT_FOUND, device);
return -1;
}
if (size < 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "size", "a >0 size");
return;
qerror_report(QERR_UNDEFINED_ERROR);
return -1;
}
switch (bdrv_truncate(bs, size)) {
case 0:
break;
case -ENOMEDIUM:
error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device);
break;
case -ENOTSUP:
error_set(errp, QERR_UNSUPPORTED);
break;
case -EACCES:
error_set(errp, QERR_DEVICE_IS_READ_ONLY, device);
break;
case -EBUSY:
error_set(errp, QERR_DEVICE_IN_USE, device);
break;
default:
error_set(errp, QERR_UNDEFINED_ERROR);
break;
if (bdrv_truncate(bs, size)) {
qerror_report(QERR_UNDEFINED_ERROR);
return -1;
}
}
static QObject *qobject_from_block_job(BlockJob *job)
{
return qobject_from_jsonf("{ 'type': %s,"
"'device': %s,"
"'len': %" PRId64 ","
"'offset': %" PRId64 ","
"'speed': %" PRId64 " }",
job->job_type->job_type,
bdrv_get_device_name(job->bs),
job->len,
job->offset,
job->speed);
}
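For reference, the format string above flattens into a small JSON object; with invented values for a streaming job the emitted event data would look roughly like:

    { "type": "stream", "device": "virtio0",
      "len": 1073741824, "offset": 536870912, "speed": 0 }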
static void block_stream_cb(void *opaque, int ret)
{
BlockDriverState *bs = opaque;
QObject *obj;
trace_block_stream_cb(bs, bs->job, ret);
assert(bs->job);
obj = qobject_from_block_job(bs->job);
if (ret < 0) {
QDict *dict = qobject_to_qdict(obj);
qdict_put(dict, "error", qstring_from_str(strerror(-ret)));
}
if (block_job_is_cancelled(bs->job)) {
monitor_protocol_event(QEVENT_BLOCK_JOB_CANCELLED, obj);
} else {
monitor_protocol_event(QEVENT_BLOCK_JOB_COMPLETED, obj);
}
qobject_decref(obj);
drive_put_ref_bh_schedule(drive_get_by_blockdev(bs));
}
void qmp_block_stream(const char *device, bool has_base,
const char *base, bool has_speed,
int64_t speed, Error **errp)
{
BlockDriverState *bs;
BlockDriverState *base_bs = NULL;
Error *local_err = NULL;
bs = bdrv_find(device);
if (!bs) {
error_set(errp, QERR_DEVICE_NOT_FOUND, device);
return;
}
if (base) {
base_bs = bdrv_find_backing_image(bs, base);
if (base_bs == NULL) {
error_set(errp, QERR_BASE_NOT_FOUND, base);
return;
}
}
stream_start(bs, base_bs, base, has_speed ? speed : 0,
block_stream_cb, bs, &local_err);
if (error_is_set(&local_err)) {
error_propagate(errp, local_err);
return;
}
/* Grab a reference so hotplug does not delete the BlockDriverState from
* underneath us.
*/
drive_get_ref(drive_get_by_blockdev(bs));
trace_qmp_block_stream(bs, bs->job);
}
static BlockJob *find_block_job(const char *device)
{
BlockDriverState *bs;
bs = bdrv_find(device);
if (!bs || !bs->job) {
return NULL;
}
return bs->job;
}
void qmp_block_job_set_speed(const char *device, int64_t speed, Error **errp)
{
BlockJob *job = find_block_job(device);
if (!job) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, device);
return;
}
block_job_set_speed(job, speed, errp);
}
void qmp_block_job_cancel(const char *device, Error **errp)
{
BlockJob *job = find_block_job(device);
if (!job) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, device);
return;
}
trace_qmp_block_job_cancel(job);
block_job_cancel(job);
}
static void do_qmp_query_block_jobs_one(void *opaque, BlockDriverState *bs)
{
BlockJobInfoList **prev = opaque;
BlockJob *job = bs->job;
if (job) {
BlockJobInfoList *elem;
BlockJobInfo *info = g_new(BlockJobInfo, 1);
*info = (BlockJobInfo){
.type = g_strdup(job->job_type->job_type),
.device = g_strdup(bdrv_get_device_name(bs)),
.len = job->len,
.offset = job->offset,
.speed = job->speed,
};
elem = g_new0(BlockJobInfoList, 1);
elem->value = info;
(*prev)->next = elem;
*prev = elem;
}
}
BlockJobInfoList *qmp_query_block_jobs(Error **errp)
{
/* Dummy is a fake list element for holding the head pointer */
BlockJobInfoList dummy = {};
BlockJobInfoList *prev = &dummy;
bdrv_iterate(do_qmp_query_block_jobs_one, &prev);
return dummy.next;
return 0;
}
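The dummy element above is the fake-head idiom: appending always goes through prev->next, so an empty result needs no special case and the real head falls out as dummy.next. A generic, self-contained version of the same idiom (hypothetical node type):

    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* Append while iterating without special-casing the empty list. */
    struct node *collect(const int *vals, int n)
    {
        struct node dummy = { 0, NULL };     /* fake element holding the head */
        struct node *prev = &dummy;
        int i;

        for (i = 0; i < n; i++) {
            struct node *elem = calloc(1, sizeof(*elem));
            elem->value = vals[i];
            prev->next = elem;               /* works even for the first element */
            prev = elem;
        }
        return dummy.next;                   /* real head, or NULL if n == 0 */
    }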


@@ -11,7 +11,6 @@
#define BLOCKDEV_H
#include "block.h"
#include "error.h"
#include "qemu-queue.h"
void blockdev_mark_auto_del(BlockDriverState *bs);
@@ -58,8 +57,15 @@ DriveInfo *drive_init(QemuOpts *arg, int default_to_scsi);
DriveInfo *add_init_drive(const char *opts);
void qmp_change_blockdev(const char *device, const char *filename,
bool has_format, const char *format, Error **errp);
void do_commit(Monitor *mon, const QDict *qdict);
int do_eject(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_block_set_passwd(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_change_block(Monitor *mon, const char *device,
const char *filename, const char *fmt);
int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_snapshot_blkdev(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_block_resize(Monitor *mon, const QDict *qdict, QObject **ret_data);
extern DriveInfo *extboot_drive;
#endif


@@ -196,7 +196,7 @@ int loader_exec(const char * filename, char ** argv, char ** envp,
/* Something went wrong, return the inode and free the argument pages*/
for (i=0 ; i<MAX_ARG_PAGES ; i++) {
g_free(bprm.page[i]);
free(bprm.page[i]);
}
return(retval);
}


@@ -641,7 +641,8 @@ static abi_ulong copy_elf_strings(int argc,char ** argv, void **page,
offset = p % TARGET_PAGE_SIZE;
pag = (char *)page[p/TARGET_PAGE_SIZE];
if (!pag) {
pag = g_try_malloc0(TARGET_PAGE_SIZE);
pag = (char *)malloc(TARGET_PAGE_SIZE);
memset(pag, 0, TARGET_PAGE_SIZE);
page[p/TARGET_PAGE_SIZE] = pag;
if (!pag)
return 0;
@@ -695,7 +696,7 @@ static abi_ulong setup_arg_pages(abi_ulong p, struct linux_binprm *bprm,
info->rss++;
/* FIXME - check return value of memcpy_to_target() for failure */
memcpy_to_target(stack_base, bprm->page[i], TARGET_PAGE_SIZE);
g_free(bprm->page[i]);
free(bprm->page[i]);
}
stack_base += TARGET_PAGE_SIZE;
}
@@ -993,12 +994,12 @@ static abi_ulong load_elf_interp(struct elfhdr * interp_elf_ex,
static int symfind(const void *s0, const void *s1)
{
target_ulong addr = *(target_ulong *)s0;
struct elf_sym *key = (struct elf_sym *)s0;
struct elf_sym *sym = (struct elf_sym *)s1;
int result = 0;
if (addr < sym->st_value) {
if (key->st_value < sym->st_value) {
result = -1;
} else if (addr >= sym->st_value + sym->st_size) {
} else if (key->st_value > sym->st_value + sym->st_size) {
result = 1;
}
return result;
@@ -1013,9 +1014,12 @@ static const char *lookup_symbolxx(struct syminfo *s, target_ulong orig_addr)
#endif
// binary search
struct elf_sym key;
struct elf_sym *sym;
sym = bsearch(&orig_addr, syms, s->disas_num_syms, sizeof(*syms), symfind);
key.st_value = orig_addr;
sym = bsearch(&key, syms, s->disas_num_syms, sizeof(*syms), symfind);
if (sym != NULL) {
return s->disas_strtab + sym->st_name;
}
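The change above makes the bsearch(3) key a struct elf_sym so that the comparator sees the same type on both sides. A self-contained sketch of the corrected pattern, searching a sorted table for the symbol whose [st_value, st_value + st_size) range contains an address; the table contents are invented:

    #include <stdio.h>
    #include <stdlib.h>

    struct sym { unsigned long st_value; unsigned long st_size; const char *name; };

    /* bsearch hands the comparator the key and an element; both are struct sym. */
    static int symfind(const void *a, const void *b)
    {
        const struct sym *key = a, *sym = b;
        if (key->st_value < sym->st_value) {
            return -1;
        }
        if (key->st_value >= sym->st_value + sym->st_size) {
            return 1;
        }
        return 0;                            /* address falls inside this symbol */
    }

    int main(void)
    {
        struct sym syms[] = {                /* must be sorted by st_value */
            { 0x1000, 0x100, "init" },
            { 0x2000, 0x300, "main" },
            { 0x4000, 0x080, "exit" },
        };
        struct sym key = { 0x2010, 0, NULL };
        struct sym *hit = bsearch(&key, syms, 3, sizeof(syms[0]), symfind);
        printf("%s\n", hit ? hit->name : "?");   /* prints "main" */
        return 0;
    }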


@@ -41,7 +41,6 @@ int singlestep;
unsigned long mmap_min_addr;
unsigned long guest_base;
int have_guest_base;
unsigned long reserved_va;
#endif
static const char *interp_prefix = CONFIG_QEMU_INTERP_PREFIX;
@@ -64,18 +63,18 @@ void gemu_log(const char *fmt, ...)
}
#if defined(TARGET_I386)
int cpu_get_pic_interrupt(CPUX86State *env)
int cpu_get_pic_interrupt(CPUState *env)
{
return -1;
}
#endif
/* These are no-ops because we are not threadsafe. */
static inline void cpu_exec_start(CPUArchState *env)
static inline void cpu_exec_start(CPUState *env)
{
}
static inline void cpu_exec_end(CPUArchState *env)
static inline void cpu_exec_end(CPUState *env)
{
}
@@ -110,7 +109,7 @@ void cpu_list_unlock(void)
/***********************************************************/
/* CPUX86 core interface */
void cpu_smm_update(CPUX86State *env)
void cpu_smm_update(CPUState *env)
{
}
@@ -714,7 +713,7 @@ static void usage(void)
exit(1);
}
THREAD CPUArchState *thread_env;
THREAD CPUState *thread_env;
/* Assumes contents are already zeroed. */
void init_task_state(TaskState *ts)
@@ -738,7 +737,7 @@ int main(int argc, char **argv)
struct target_pt_regs regs1, *regs = &regs1;
struct image_info info1, *info = &info1;
TaskState ts1, *ts = &ts1;
CPUArchState *env;
CPUState *env;
int optind;
const char *r;
int gdbstub_port = 0;
@@ -749,8 +748,6 @@ int main(int argc, char **argv)
if (argc <= 1)
usage();
module_call_init(MODULE_INIT_QOM);
if ((envlist = envlist_create()) == NULL) {
(void) fprintf(stderr, "Unable to allocate envlist\n");
exit(1);
@@ -908,8 +905,7 @@ int main(int argc, char **argv)
cpu_model = "any";
#endif
}
tcg_exec_init(0);
cpu_exec_init_all();
cpu_exec_init_all(0);
/* NOTE: we need to init the CPU at this stage to get
qemu_host_page_size */
env = cpu_init(cpu_model);
@@ -918,7 +914,7 @@ int main(int argc, char **argv)
exit(1);
}
#if defined(TARGET_I386) || defined(TARGET_SPARC) || defined(TARGET_PPC)
cpu_state_reset(env);
cpu_reset(env);
#endif
thread_env = env;


@@ -94,7 +94,7 @@ void *qemu_vmalloc(size_t size)
return p;
}
void *g_malloc(size_t size)
void *qemu_malloc(size_t size)
{
char * p;
size += 16;
@@ -104,12 +104,12 @@ void *g_malloc(size_t size)
}
/* We use map, which is always zero initialized. */
void * g_malloc0(size_t size)
void * qemu_mallocz(size_t size)
{
return g_malloc(size);
return qemu_malloc(size);
}
void g_free(void *ptr)
void qemu_free(void *ptr)
{
/* FIXME: We should unmark the reserved pages here. However this gets
complicated when one target page spans multiple host pages, so we
@@ -119,18 +119,18 @@ void g_free(void *ptr)
munmap(p, *p);
}
void *g_realloc(void *ptr, size_t size)
void *qemu_realloc(void *ptr, size_t size)
{
size_t old_size, copy;
void *new_ptr;
if (!ptr)
return g_malloc(size);
return qemu_malloc(size);
old_size = *(size_t *)((char *)ptr - 16);
copy = old_size < size ? old_size : size;
new_ptr = g_malloc(size);
new_ptr = qemu_malloc(size);
memcpy(new_ptr, ptr, copy);
g_free(ptr);
qemu_free(ptr);
return new_ptr;
}
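The bsd-user allocator shown here keeps the usable size in a 16-byte header in front of the pointer it hands out, which is what the *(size_t *)((char *)ptr - 16) lookup in the realloc path relies on. A hedged sketch of the same header trick on top of plain malloc, without the mmap backing used above:

    #include <stdlib.h>
    #include <string.h>

    #define HDR 16   /* keep the payload 16-byte aligned, like the code above */

    void *sized_alloc(size_t size)
    {
        char *p = malloc(size + HDR);
        if (!p) {
            return NULL;
        }
        *(size_t *)p = size;            /* remember the payload size in the header */
        return p + HDR;
    }

    void sized_free(void *ptr)
    {
        if (ptr) {
            free((char *)ptr - HDR);
        }
    }

    void *sized_realloc(void *ptr, size_t size)
    {
        size_t old_size, copy;
        void *new_ptr;

        if (!ptr) {
            return sized_alloc(size);
        }
        old_size = *(size_t *)((char *)ptr - HDR);
        copy = old_size < size ? old_size : size;
        new_ptr = sized_alloc(size);
        if (!new_ptr) {
            return NULL;                /* like realloc(3), leave the old block alone */
        }
        memcpy(new_ptr, ptr, copy);
        sized_free(ptr);
        return new_ptr;
    }

    int main(void)
    {
        void *p = sized_alloc(4);
        p = sized_realloc(p, 64);       /* grows, preserving the first 4 bytes */
        sized_free(p);
        return 0;
    }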


@@ -139,8 +139,8 @@ abi_long do_openbsd_syscall(void *cpu_env, int num, abi_long arg1,
abi_long arg2, abi_long arg3, abi_long arg4,
abi_long arg5, abi_long arg6);
void gemu_log(const char *fmt, ...) GCC_FMT_ATTR(1, 2);
extern THREAD CPUArchState *thread_env;
void cpu_loop(CPUArchState *env);
extern THREAD CPUState *thread_env;
void cpu_loop(CPUState *env);
char *target_strerror(int err);
int get_osversion(void);
void fork_start(void);
@@ -167,13 +167,13 @@ void print_openbsd_syscall_ret(int num, abi_long ret);
extern int do_strace;
/* signal.c */
void process_pending_signals(CPUArchState *cpu_env);
void process_pending_signals(CPUState *cpu_env);
void signal_init(void);
//int queue_signal(CPUArchState *env, int sig, target_siginfo_t *info);
//int queue_signal(CPUState *env, int sig, target_siginfo_t *info);
//void host_to_target_siginfo(target_siginfo_t *tinfo, const siginfo_t *info);
//void target_to_host_siginfo(siginfo_t *info, const target_siginfo_t *tinfo);
long do_sigreturn(CPUArchState *env);
long do_rt_sigreturn(CPUArchState *env);
long do_sigreturn(CPUState *env);
long do_rt_sigreturn(CPUState *env);
abi_long do_sigaltstack(abi_ulong uss_addr, abi_ulong uoss_addr, abi_ulong sp);
/* mmap.c */


@@ -33,6 +33,6 @@ void signal_init(void)
{
}
void process_pending_signals(CPUArchState *cpu_env)
void process_pending_signals(CPUState *cpu_env)
{
}


@@ -231,7 +231,7 @@ static abi_long do_freebsd_sysctl(abi_ulong namep, int32_t namelen, abi_ulong ol
void *hnamep, *holdp, *hnewp = NULL;
size_t holdlen;
abi_ulong oldlen = 0;
int32_t *snamep = g_malloc(sizeof(int32_t) * namelen), *p, *q, i;
int32_t *snamep = qemu_malloc(sizeof(int32_t) * namelen), *p, *q, i;
uint32_t kind = 0;
if (oldlenp)
@@ -255,7 +255,7 @@ static abi_long do_freebsd_sysctl(abi_ulong namep, int32_t namelen, abi_ulong ol
unlock_user(holdp, oldp, holdlen);
if (hnewp)
unlock_user(hnewp, newp, 0);
g_free(snamep);
qemu_free(snamep);
return ret;
}
#endif


@@ -8,7 +8,7 @@ struct target_pt_regs {
abi_ulong r12;
abi_ulong rbp;
abi_ulong rbx;
/* arguments: non interrupts/non tracing syscalls only save up to here */
/* arguments: non interrupts/non tracing syscalls only save upto here*/
abi_ulong r11;
abi_ulong r10;
abi_ulong r9;

bswap.h

@@ -4,7 +4,6 @@
#include "config-host.h"
#include <inttypes.h>
#include "softfloat.h"
#ifdef CONFIG_MACHINE_BSWAP_H
#include <sys/endian.h>
@@ -238,476 +237,4 @@ static inline uint32_t qemu_bswap_len(uint32_t value, int len)
return bswap32(value) >> (32 - 8 * len);
}
typedef union {
float32 f;
uint32_t l;
} CPU_FloatU;
typedef union {
float64 d;
#if defined(HOST_WORDS_BIGENDIAN)
struct {
uint32_t upper;
uint32_t lower;
} l;
#else
struct {
uint32_t lower;
uint32_t upper;
} l;
#endif
uint64_t ll;
} CPU_DoubleU;
typedef union {
floatx80 d;
struct {
uint64_t lower;
uint16_t upper;
} l;
} CPU_LDoubleU;
typedef union {
float128 q;
#if defined(HOST_WORDS_BIGENDIAN)
struct {
uint32_t upmost;
uint32_t upper;
uint32_t lower;
uint32_t lowest;
} l;
struct {
uint64_t upper;
uint64_t lower;
} ll;
#else
struct {
uint32_t lowest;
uint32_t lower;
uint32_t upper;
uint32_t upmost;
} l;
struct {
uint64_t lower;
uint64_t upper;
} ll;
#endif
} CPU_QuadU;
/* unaligned/endian-independent pointer access */
/*
* the generic syntax is:
*
* load: ld{type}{sign}{size}{endian}_p(ptr)
*
* store: st{type}{size}{endian}_p(ptr, val)
*
* Note there are small differences with the softmmu access API!
*
* type is:
* (empty): integer access
* f : float access
*
* sign is:
* (empty): for floats or 32 bit size
* u : unsigned
* s : signed
*
* size is:
* b: 8 bits
* w: 16 bits
* l: 32 bits
* q: 64 bits
*
* endian is:
* (empty): 8 bit access
* be : big endian
* le : little endian
*/
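/* Illustrative usage only, not part of this header: read a 16-bit
 * little-endian field and store a 32-bit big-endian one, portably:
 *
 *     uint8_t buf[6] = { 0x34, 0x12 };      // 0x1234, stored little endian
 *     int id = lduw_le_p(buf);              // id == 0x1234 on any host
 *     stl_be_p(buf + 2, 0x11223344);        // buf[2..5] = 11 22 33 44
 */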
static inline int ldub_p(const void *ptr)
{
return *(uint8_t *)ptr;
}
static inline int ldsb_p(const void *ptr)
{
return *(int8_t *)ptr;
}
static inline void stb_p(void *ptr, int v)
{
*(uint8_t *)ptr = v;
}
/* NOTE: on arm, putting 2 in /proc/sys/debug/alignment so that the
kernel handles unaligned load/stores may give better results, but
it is a system wide setting : bad */
#if defined(HOST_WORDS_BIGENDIAN) || defined(WORDS_ALIGNED)
/* conservative code for little endian unaligned accesses */
static inline int lduw_le_p(const void *ptr)
{
#ifdef _ARCH_PPC
int val;
__asm__ __volatile__ ("lhbrx %0,0,%1" : "=r" (val) : "r" (ptr));
return val;
#else
const uint8_t *p = ptr;
return p[0] | (p[1] << 8);
#endif
}
static inline int ldsw_le_p(const void *ptr)
{
#ifdef _ARCH_PPC
int val;
__asm__ __volatile__ ("lhbrx %0,0,%1" : "=r" (val) : "r" (ptr));
return (int16_t)val;
#else
const uint8_t *p = ptr;
return (int16_t)(p[0] | (p[1] << 8));
#endif
}
static inline int ldl_le_p(const void *ptr)
{
#ifdef _ARCH_PPC
int val;
__asm__ __volatile__ ("lwbrx %0,0,%1" : "=r" (val) : "r" (ptr));
return val;
#else
const uint8_t *p = ptr;
return p[0] | (p[1] << 8) | (p[2] << 16) | (p[3] << 24);
#endif
}
static inline uint64_t ldq_le_p(const void *ptr)
{
const uint8_t *p = ptr;
uint32_t v1, v2;
v1 = ldl_le_p(p);
v2 = ldl_le_p(p + 4);
return v1 | ((uint64_t)v2 << 32);
}
static inline void stw_le_p(void *ptr, int v)
{
#ifdef _ARCH_PPC
__asm__ __volatile__ ("sthbrx %1,0,%2" : "=m" (*(uint16_t *)ptr) : "r" (v), "r" (ptr));
#else
uint8_t *p = ptr;
p[0] = v;
p[1] = v >> 8;
#endif
}
static inline void stl_le_p(void *ptr, int v)
{
#ifdef _ARCH_PPC
__asm__ __volatile__ ("stwbrx %1,0,%2" : "=m" (*(uint32_t *)ptr) : "r" (v), "r" (ptr));
#else
uint8_t *p = ptr;
p[0] = v;
p[1] = v >> 8;
p[2] = v >> 16;
p[3] = v >> 24;
#endif
}
static inline void stq_le_p(void *ptr, uint64_t v)
{
uint8_t *p = ptr;
stl_le_p(p, (uint32_t)v);
stl_le_p(p + 4, v >> 32);
}
/* float access */
static inline float32 ldfl_le_p(const void *ptr)
{
union {
float32 f;
uint32_t i;
} u;
u.i = ldl_le_p(ptr);
return u.f;
}
static inline void stfl_le_p(void *ptr, float32 v)
{
union {
float32 f;
uint32_t i;
} u;
u.f = v;
stl_le_p(ptr, u.i);
}
static inline float64 ldfq_le_p(const void *ptr)
{
CPU_DoubleU u;
u.l.lower = ldl_le_p(ptr);
u.l.upper = ldl_le_p(ptr + 4);
return u.d;
}
static inline void stfq_le_p(void *ptr, float64 v)
{
CPU_DoubleU u;
u.d = v;
stl_le_p(ptr, u.l.lower);
stl_le_p(ptr + 4, u.l.upper);
}
#else
static inline int lduw_le_p(const void *ptr)
{
return *(uint16_t *)ptr;
}
static inline int ldsw_le_p(const void *ptr)
{
return *(int16_t *)ptr;
}
static inline int ldl_le_p(const void *ptr)
{
return *(uint32_t *)ptr;
}
static inline uint64_t ldq_le_p(const void *ptr)
{
return *(uint64_t *)ptr;
}
static inline void stw_le_p(void *ptr, int v)
{
*(uint16_t *)ptr = v;
}
static inline void stl_le_p(void *ptr, int v)
{
*(uint32_t *)ptr = v;
}
static inline void stq_le_p(void *ptr, uint64_t v)
{
*(uint64_t *)ptr = v;
}
/* float access */
static inline float32 ldfl_le_p(const void *ptr)
{
return *(float32 *)ptr;
}
static inline float64 ldfq_le_p(const void *ptr)
{
return *(float64 *)ptr;
}
static inline void stfl_le_p(void *ptr, float32 v)
{
*(float32 *)ptr = v;
}
static inline void stfq_le_p(void *ptr, float64 v)
{
*(float64 *)ptr = v;
}
#endif
#if !defined(HOST_WORDS_BIGENDIAN) || defined(WORDS_ALIGNED)
static inline int lduw_be_p(const void *ptr)
{
#if defined(__i386__)
int val;
asm volatile ("movzwl %1, %0\n"
"xchgb %b0, %h0\n"
: "=q" (val)
: "m" (*(uint16_t *)ptr));
return val;
#else
const uint8_t *b = ptr;
return ((b[0] << 8) | b[1]);
#endif
}
static inline int ldsw_be_p(const void *ptr)
{
#if defined(__i386__)
int val;
asm volatile ("movzwl %1, %0\n"
"xchgb %b0, %h0\n"
: "=q" (val)
: "m" (*(uint16_t *)ptr));
return (int16_t)val;
#else
const uint8_t *b = ptr;
return (int16_t)((b[0] << 8) | b[1]);
#endif
}
static inline int ldl_be_p(const void *ptr)
{
#if defined(__i386__) || defined(__x86_64__)
int val;
asm volatile ("movl %1, %0\n"
"bswap %0\n"
: "=r" (val)
: "m" (*(uint32_t *)ptr));
return val;
#else
const uint8_t *b = ptr;
return (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
#endif
}
static inline uint64_t ldq_be_p(const void *ptr)
{
uint32_t a,b;
a = ldl_be_p(ptr);
b = ldl_be_p((uint8_t *)ptr + 4);
return (((uint64_t)a<<32)|b);
}
static inline void stw_be_p(void *ptr, int v)
{
#if defined(__i386__)
asm volatile ("xchgb %b0, %h0\n"
"movw %w0, %1\n"
: "=q" (v)
: "m" (*(uint16_t *)ptr), "0" (v));
#else
uint8_t *d = (uint8_t *) ptr;
d[0] = v >> 8;
d[1] = v;
#endif
}
static inline void stl_be_p(void *ptr, int v)
{
#if defined(__i386__) || defined(__x86_64__)
asm volatile ("bswap %0\n"
"movl %0, %1\n"
: "=r" (v)
: "m" (*(uint32_t *)ptr), "0" (v));
#else
uint8_t *d = (uint8_t *) ptr;
d[0] = v >> 24;
d[1] = v >> 16;
d[2] = v >> 8;
d[3] = v;
#endif
}
static inline void stq_be_p(void *ptr, uint64_t v)
{
stl_be_p(ptr, v >> 32);
stl_be_p((uint8_t *)ptr + 4, v);
}
/* float access */
static inline float32 ldfl_be_p(const void *ptr)
{
union {
float32 f;
uint32_t i;
} u;
u.i = ldl_be_p(ptr);
return u.f;
}
static inline void stfl_be_p(void *ptr, float32 v)
{
union {
float32 f;
uint32_t i;
} u;
u.f = v;
stl_be_p(ptr, u.i);
}
static inline float64 ldfq_be_p(const void *ptr)
{
CPU_DoubleU u;
u.l.upper = ldl_be_p(ptr);
u.l.lower = ldl_be_p((uint8_t *)ptr + 4);
return u.d;
}
static inline void stfq_be_p(void *ptr, float64 v)
{
CPU_DoubleU u;
u.d = v;
stl_be_p(ptr, u.l.upper);
stl_be_p((uint8_t *)ptr + 4, u.l.lower);
}
#else
static inline int lduw_be_p(const void *ptr)
{
return *(uint16_t *)ptr;
}
static inline int ldsw_be_p(const void *ptr)
{
return *(int16_t *)ptr;
}
static inline int ldl_be_p(const void *ptr)
{
return *(uint32_t *)ptr;
}
static inline uint64_t ldq_be_p(const void *ptr)
{
return *(uint64_t *)ptr;
}
static inline void stw_be_p(void *ptr, int v)
{
*(uint16_t *)ptr = v;
}
static inline void stl_be_p(void *ptr, int v)
{
*(uint32_t *)ptr = v;
}
static inline void stq_be_p(void *ptr, uint64_t v)
{
*(uint64_t *)ptr = v;
}
/* float access */
static inline float32 ldfl_be_p(const void *ptr)
{
return *(float32 *)ptr;
}
static inline float64 ldfq_be_p(const void *ptr)
{
return *(float64 *)ptr;
}
static inline void stfl_be_p(void *ptr, float32 v)
{
*(float32 *)ptr = v;
}
static inline void stfq_be_p(void *ptr, float64 v)
{
*(float64 *)ptr = v;
}
#endif
#endif /* BSWAP_H */


@@ -130,7 +130,6 @@ static void bt_host_read(void *opaque)
pktlen = MIN(pkt[2] + 3, s->len);
s->len -= pktlen;
pkt += pktlen;
break;
default:
bad_pkt:
@@ -178,7 +177,7 @@ struct HCIInfo *bt_host_hci(const char *id)
}
# endif
s = g_malloc0(sizeof(struct bt_host_hci_s));
s = qemu_mallocz(sizeof(struct bt_host_hci_s));
s->fd = fd;
s->hci.cmd_send = bt_host_cmd;
s->hci.sco_send = bt_host_sco;


@@ -156,7 +156,7 @@ void bt_vhci_init(struct HCIInfo *info)
exit(-1);
}
s = g_malloc0(sizeof(struct bt_vhci_s));
s = qemu_mallocz(sizeof(struct bt_vhci_s));
s->fd = fd;
s->info = info ?: qemu_next_hci();
s->info->opaque = s;

Some files were not shown because too many files have changed in this diff.