Compare commits

..

28 Commits

Author SHA1 Message Date
Anthony Liguori
76e4e1d237 Update version to 0.15.0
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-08 13:27:32 -05:00
Kevin Wolf
4fbe5233fd qcow2: Fix L1 table size after bdrv_snapshot_goto
When loading an internal snapshot whose L1 table is smaller than the current L1
table, the size of the current L1 would be shrunk to the snapshot's L1 size in
memory, but not on disk. This led to incorrect refcount updates and eventually
to image corruption.

Instead of writing the new L1 size to disk, this patch simply retains the larger
L1 size that is currently in use and makes sure that the unused part is zeroed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Philipp Hahn <hahn@univention.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 35d7ace74b)
2011-08-05 07:25:45 -05:00
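
A minimal sketch of the approach described above, with hypothetical names rather
than the actual qcow2 code: keep the larger in-memory L1 table and zero the
entries beyond the snapshot's smaller size, instead of shrinking the table only
in memory.

  /* Hypothetical illustration; not the actual qcow2 patch. */
  #include <stdint.h>
  #include <string.h>

  /* Keep the current (larger) L1 table; zero the tail that the snapshot
   * does not use, so stale entries cannot cause bogus refcount updates. */
  static void activate_snapshot_l1(uint64_t *l1_table, int cur_l1_size,
                                   const uint64_t *snap_l1, int snap_l1_size)
  {
      memcpy(l1_table, snap_l1, snap_l1_size * sizeof(uint64_t));
      if (snap_l1_size < cur_l1_size) {
          memset(l1_table + snap_l1_size, 0,
                 (cur_l1_size - snap_l1_size) * sizeof(uint64_t));
      }
      /* cur_l1_size stays unchanged both in memory and on disk. */
  }
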
Anthony Liguori
e2f775205a Revert "floppy: save and restore DIR register"
This reverts commit 7d905f716b.

The use of subsections by this commit is broken because of a fundamental
limitation of subsections in the current protocol.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-04 16:19:04 -05:00
Richard Henderson
51dd7a94c7 alpha-softmmu: Disable for the 0.15 release branch.
The system emulation code was not merged before the branch.
Let's leave that work for the next release.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-04 16:19:04 -05:00
Wolfgang Mauerer
9096de69ff vhost build fix for i386
vhost.c uses __sync_fetch_and_and(), which is only
available for -march=i486 and above (see
https://bugzilla.redhat.com/show_bug.cgi?id=624279).

Signed-off-by: Wolfgang Mauerer <wolfgang.mauerer@siemens.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit 023367e6cd)
2011-08-04 16:19:04 -05:00
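
For reference, __sync_fetch_and_and() is a GCC atomic builtin; a standalone
example is below. Building such code for plain i386 (e.g. gcc -m32 -march=i386)
can fail on older GCC because the builtin has no i386 implementation, which is
the failure the commit works around.

  #include <stdio.h>

  int main(void)
  {
      unsigned flags = 0xff;
      /* Atomically AND-in a mask and return the previous value. */
      unsigned old = __sync_fetch_and_and(&flags, 0x0f);
      printf("old=%#x new=%#x\n", old, flags);   /* old=0xff new=0xf */
      return 0;
  }
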
Michael Roth
09afeef1ab guest agent: add --enable-guest-agent config option
QAPI will require glib/python, but for now the guest agent is the only
user. For now, make these explicit guest agent dependencies, and give
users the option to disable the agent if need be.

Once QAPI is adopted in core QEMU code, we would basically revert this
patch.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-04 16:19:04 -05:00
Peter Maydell
01825a8ddf user: Restore debug usage message for '-d ?' in user mode emulation
The code which prints the debug usage message on '-d ?' for *-user
has to come before the check for "not enough arguments", so that
"qemu-foo -d ?" prints the list of possible debug log items rather than
the generic usage message. (This was inadvertently broken in commit
c235d73.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-08-04 16:19:04 -05:00
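
The ordering issue is generic option handling: the help case for an option has
to be dealt with before the "too few arguments" bail-out. A simplified,
self-contained sketch (hypothetical helpers, not the actual *-user main.c code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static void print_debug_items(void) { puts("cpu,exec,int,..."); }  /* stub */
  static void usage(void) { puts("usage: qemu-foo [-d item] prog"); exit(1); }

  int main(int argc, char **argv)
  {
      /* The "-d ?" help case must be handled before the
       * "not enough arguments" check, or it never triggers. */
      for (int i = 1; i + 1 < argc; i++) {
          if (!strcmp(argv[i], "-d") && !strcmp(argv[i + 1], "?")) {
              print_debug_items();
              return 0;
          }
      }
      if (argc < 2) {
          usage();
      }
      /* ... parse the remaining options and run the program ... */
      return 0;
  }
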
Michael Walle
ae2dd33693 lm32: softusb: claim to support full speed
The QEMU keyboard and mouse report themselves as full-speed devices,
though they are actually low-speed devices. Until this is fixed, claim that
we support full-speed devices.

Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2011-08-04 01:25:39 +02:00
Michael Roth
88ca9f047b Makefile: add missing deps on $(GENERATED_HEADERS)
This fixes a build issue with make -j6+ due to qapi-generated files
being built before $(GENERATED_HEADERS) have been created.

Tested-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-07-31 15:56:52 -05:00
Anthony Liguori
898517b0bc Update version to 0.15.0-rc2 2011-07-31 15:38:11 -05:00
Anthony Liguori
9dc9f2b820 Bump version to 0.15.0-rc1
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-07-29 17:14:11 -05:00
Justin M. Forbes
ef942b795a Merge branch 'for-upstream-0.15' of git://git.linaro.org/people/pmaydell/qemu-arm 2011-07-29 10:14:01 -05:00
Amit Shah
868aa386b8 virtio-balloon: Unregister savevm section on device unplug
Migrating after unplugging a virtio-balloon device resulted in an error
message on the destination:

Unknown savevm section or instance '0000:00:04.0/virtio-balloon' 0
load of migration failed

Fix this by unregistering the section on device unplug.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:39 +05:30
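
The general pattern behind the fix, sketched below with made-up stand-ins
rather than the real savevm API: whatever is registered with the migration
layer at device init must be unregistered in the unplug path, otherwise the
destination sees a section with no matching device.

  #include <stdio.h>
  #include <stddef.h>

  /* Tiny stand-ins for the savevm registry (hypothetical, for illustration). */
  static const char *registered_section;
  static void savevm_register(const char *name) { registered_section = name; }
  static void savevm_unregister(void)           { registered_section = NULL; }

  static void balloon_plug(void)   { savevm_register("virtio-balloon"); }

  static void balloon_unplug(void)
  {
      /* Without this, a later migration still sends a "virtio-balloon"
       * section that the destination has no device to load it into. */
      savevm_unregister();
  }

  int main(void)
  {
      balloon_plug();
      balloon_unplug();
      printf("stale section after unplug: %s\n",
             registered_section ? registered_section : "none");
      return 0;
  }
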
Amit Shah
7e10be8c74 virtio-balloon: Add exit handler, fix memleaks
Add an exit handler that will free up RAM after a virtio-balloon device
is unplugged.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:33 +05:30
Amit Shah
9843621e3b balloon: Reject negative balloon values
Negative balloon values don't make sense; reject them and raise a qerror
with QERR_INVALID_PARAMETER_VALUE.

Reported-by: Mike Cao <bcao@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:27 +05:30
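
A sketch of the kind of guard described above, using hypothetical names rather
than the actual monitor code (the real patch reports the error via qerror):

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical validation guard for a balloon target in bytes. */
  static int set_balloon_target(int64_t target)
  {
      if (target < 0) {
          /* The real patch reports QERR_INVALID_PARAMETER_VALUE here. */
          fprintf(stderr, "balloon: invalid target %lld\n", (long long)target);
          return -EINVAL;
      }
      /* ... request the new target from the device ... */
      return 0;
  }

  int main(void)
  {
      return set_balloon_target(-1) == -EINVAL ? 0 : 1;
  }
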
Amit Shah
ab640dbfc0 virtio-balloon: Check if balloon registration failed
Multiple balloon registrations are not allowed; check whether the
registration with the QEMU balloon API succeeded.  If not, fail the
device init.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:10:19 +05:30
Amit Shah
eaa8b2778c balloon: Don't allow multiple balloon handler registrations
Multiple balloon devices don't make sense; disallow more than one
attempt to register balloon handlers.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-28 15:09:49 +05:30
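
The two commits above pair up: the registration function refuses a second
registration, and the device init checks the return value and fails instead of
silently continuing. A compact sketch with hypothetical names (not the actual
QEMU balloon API):

  #include <errno.h>
  #include <stdio.h>

  typedef void (*BalloonHandler)(long target);

  static BalloonHandler balloon_handler;   /* only one device may register */

  static int balloon_register_handler(BalloonHandler h)
  {
      if (balloon_handler) {
          return -EBUSY;        /* a balloon device is already registered */
      }
      balloon_handler = h;
      return 0;
  }

  static void my_handler(long target) { (void)target; }

  static int balloon_device_init(void)
  {
      if (balloon_register_handler(my_handler) < 0) {
          return -1;            /* fail device init instead of ignoring it */
      }
      return 0;
  }

  int main(void)
  {
      printf("first: %d, second: %d\n",
             balloon_device_init(), balloon_device_init());
      return 0;
  }
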
Peter Maydell
7ec7f28019 target-arm: UNDEF on a VCVTT/VCVTB UNPREDICTABLE to avoid TCG assert
VCVTT/VCVTB with bit 8 set is UNPREDICTABLE; we choose to UNDEF.
This avoids a TCG assert later when the VCVTT/VCVTB code tries to
use a source register that wasn't ever set up.

We pull the check for the presence of the half-precision extension
up into this common code as well.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-27 09:29:22 +00:00
Peter Maydell
31b1308046 target-arm: Handle UNDEF and UNPREDICTABLE cases for VLDM, VSTM
Handle the UNDEF and UNPREDICTABLE cases for VLDM and VSTM. In
particular, we now generate an undef exception for overlarge imm8
values rather than generating 1000+ TCG ops and hitting an assertion.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-27 09:29:22 +00:00
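
The decoder pattern at work in these two ARM fixes, sketched with hypothetical
names rather than the actual target-arm translate code: reject a reserved or
unpredictable encoding up front by signalling an undefined-instruction
exception, instead of letting the translator emit an unbounded number of ops
and trip an assertion later.

  /* Hypothetical decoder fragment, not the actual target-arm code. */
  enum { MAX_REGS = 32 };

  /* Returns 0 on success, 1 to take the UNDEF path. */
  static int decode_vldm_vstm(unsigned imm8, unsigned base_reg)
  {
      /* Overlarge register counts are UNPREDICTABLE: choose to UNDEF
       * rather than emitting thousands of ops and asserting. */
      if (imm8 == 0 || base_reg + imm8 > MAX_REGS) {
          return 1;   /* caller generates an undefined-instruction exception */
      }
      /* ... emit the loads/stores for imm8 registers ... */
      return 0;
  }
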
Peter Maydell
4ec648dd6e target-arm: Support v6 barriers in linux-user mode
ARMv6 implemented various operations as special cases of cp15 accesses
which are true instructions in v7; this includes barriers (DMB, DSB, ISB).
Catch this special case at translate time, so that it works in linux-user
mode (which doesn't provide a functional get_cp15 helper) as well as
system mode.

Includes minor cleanup of the existing cases (single switch statement,
and doing the "OK in user mode?" test explicitly rather than hiding it in
cp15_user_ok()).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-27 09:29:22 +00:00
Peter Maydell
e961d129e1 target-arm: Mark 1136r1 as a v6K core
The 1136r1 is actually a v6K core (unlike the 1136r0); mark it as such,
thus enabling the TLS registers, NOP hints, CLREX, and half- and byte-wide
exclusive loads/stores.

The VA-to-PA translation registers are not present on 1136r1, so
introduce a new feature flag for them, which is enabled on
11MPCore and all v7 cores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2011-07-26 14:58:43 +00:00
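
The feature handling referred to above follows a per-CPU feature bitmask
pattern; a generic sketch (flag and model names are placeholders, not QEMU's
actual enum or CPU names):

  #include <stdio.h>
  #include <string.h>

  enum {
      FEAT_V6   = 1u << 0,
      FEAT_V6K  = 1u << 1,   /* TLS registers, NOP hints, CLREX, ... */
      FEAT_VAPA = 1u << 2    /* VA-to-PA translation registers */
  };

  static unsigned cpu_features(const char *model)
  {
      /* 1136r1 gains v6K but not the VA-to-PA registers;
       * 11MPCore (and v7 cores) get both. */
      if (strcmp(model, "1136r1") == 0)   return FEAT_V6 | FEAT_V6K;
      if (strcmp(model, "11mpcore") == 0) return FEAT_V6 | FEAT_V6K | FEAT_VAPA;
      return FEAT_V6;
  }

  int main(void)
  {
      unsigned f = cpu_features("1136r1");
      printf("v6K: %d  VA-to-PA: %d\n", !!(f & FEAT_V6K), !!(f & FEAT_VAPA));
      return 0;
  }
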
Amit Shah
8959459386 virtio-balloon: Fix header comment; add Copyright
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2011-07-26 11:21:15 +05:30
Amit Shah
e2b40e003a balloon: Fix header comment; add Copyright
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2011-07-26 11:21:14 +05:30
Amit Shah
1a39b0fcff balloon: Separate out stat and balloon handling
Passing '0' as the ballooning target to indicate retrieval of stats is
bad API.  It also makes 'balloon 0' in the monitor cause a segfault.
Have two different functions handle the two operations instead.

Detailed explanation from Markus's review:

1. do_info_balloon() is an info_async() method.  It receives a callback
   with argument, to be called exactly once (callback frees the
   argument).  It passes the callback via qemu_balloon_status() and
   indirectly through qemu_balloon_event to virtio_balloon_to_target().

   virtio_balloon_to_target() executes its balloon stats half.  It
   stores the callback in the device state.

   If it can't send a stats request, it resets stats and calls the
   callback right away.

   Else, it sends a stats request.  The device model runs the callback
   when it receives the answer.

   Works.

2. do_balloon() is a cmd_async() method.  It receives a callback with
   argument, to be called when the command completes.  do_balloon()
   calls it right before it succeeds.  Odd, but should work.

   Nevertheless, it passes the callback on via qemu_balloon() and
   indirectly through qemu_balloon_event to virtio_balloon_to_target().

   a. If the argument is non-zero, virtio_balloon_to_target() executes
      its balloon half, which doesn't use the callback in any way.

      Odd, but works.

   b. If the argument is zero, virtio_balloon_to_target() executes its
      balloon stats half, just like in 1.  It either calls the callback
      right away, or arranges for it to be called later.

      Thus, the callback runs twice: use after free and double free.

Test case: start with -S -device virtio-balloon, execute "balloon 0" in
human monitor.  Runs the callback first from virtio_balloon_to_target(),
then again from do_balloon().

Reported-by: Mike Cao <bcao@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:14 +05:30
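
The fix amounts to splitting one overloaded entry point into two, so the stats
path and the resize path each manage their own callback lifetime. A compact
sketch with hypothetical names (not the virtio-balloon code itself):

  #include <stdio.h>

  typedef void (*CompletionCB)(void *opaque);

  /* Stats path: runs the caller's callback exactly once, either when the
   * asynchronous answer arrives or immediately if stats are unavailable. */
  static void balloon_get_stats(CompletionCB cb, void *opaque)
  {
      /* ... send a stats request, or fall back to ... */
      cb(opaque);               /* called exactly once */
  }

  /* Resize path: never touches the caller's callback, so a target of 0
   * can no longer trigger the stats machinery by accident. */
  static void balloon_set_target(long long target)
  {
      printf("new target: %lld bytes\n", target);
  }

  static void done(void *opaque) { (void)opaque; puts("stats ready"); }

  int main(void)
  {
      balloon_set_target(0);        /* previously this also ran the stats path */
      balloon_get_stats(done, NULL);
      return 0;
  }
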
Amit Shah
4a97e18b87 virtio-balloon: Separate status handling into separate function
Separate out the code to retrieve balloon info from the code that sets
balloon values.

This will be used to separate the two callbacks from balloon.c and help
cope with 'balloon 0' on the monitor.  Currently, 'balloon 0' causes a
segfault in monitor_resume().

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:13 +05:30
Amit Shah
f1ee0a0ebd balloon: Simplify code flow
Replace:
  if (foo) {
    ...
  } else {
    return 0;
  }

by

  if (!foo) {
    return 0;
  }
  ...

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:13 +05:30
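
A complete before/after example of the early-return style described above
(generic code, not the balloon.c hunk itself):

  static void (*balloon_handler)(long target);

  /* Before: the interesting work is nested inside the condition. */
  static int set_target_nested(long target)
  {
      if (balloon_handler) {
          balloon_handler(target);
          return 1;
      } else {
          return 0;
      }
  }

  /* After: bail out early, keep the main path unindented. */
  static int set_target_guard(long target)
  {
      if (!balloon_handler) {
          return 0;
      }
      balloon_handler(target);
      return 1;
  }
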
Amit Shah
3583bc031e balloon: Add braces around if statements
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:12 +05:30
Amit Shah
2798b5e174 balloon: Make functions, local vars static
balloon.h had function declarations for a couple of functions that are
local to balloon.c.  Make them static.

Drop the 'qemu_' prefix for balloon.c-local variables, and make them
static.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2011-07-26 11:21:12 +05:30
1945 changed files with 107795 additions and 395185 deletions

.gitignore

@@ -15,12 +15,7 @@ libdis*
libhw32
libhw64
libuser
linux-headers/asm
qapi-generated
qapi-types.[ch]
qapi-visit.[ch]
qmp-commands.h
qmp-marshal.c
qemu-doc.html
qemu-tech.html
qemu-doc.info
@@ -39,18 +34,8 @@ qemu-img-cmds.texi
qemu-img-cmds.h
qemu-io
qemu-ga
qemu-bridge-helper
qemu-monitor.texi
vscclient
QMP/qmp-commands.txt
test-coroutine
test-qmp-input-visitor
test-qmp-output-visitor
test-string-input-visitor
test-string-output-visitor
test-visitor-serialization
fsdev/virtfs-proxy-helper.1
fsdev/virtfs-proxy-helper.pod
.gdbinit
*.a
*.aux
@@ -71,10 +56,6 @@ fsdev/virtfs-proxy-helper.pod
*.vr
*.d
*.o
*.lo
*.la
*.pc
.libs
*.swp
*.orig
.pc
@@ -82,14 +63,8 @@ patches
pc-bios/bios-pq/status
pc-bios/vgabios-pq/status
pc-bios/optionrom/linuxboot.bin
pc-bios/optionrom/linuxboot.raw
pc-bios/optionrom/linuxboot.img
pc-bios/optionrom/multiboot.bin
pc-bios/optionrom/multiboot.raw
pc-bios/optionrom/multiboot.img
pc-bios/optionrom/kvmvapic.bin
pc-bios/optionrom/kvmvapic.raw
pc-bios/optionrom/kvmvapic.img
.stgit-*
cscope.*
tags

.gitmodules

@@ -10,12 +10,3 @@
[submodule "roms/ipxe"]
path = roms/ipxe
url = git://git.qemu.org/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
url = git://git.qemu.org/openbios.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = git://repo.or.cz/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = git://git.qemu.org/sgabios.git

.mailmap

@@ -1,16 +0,0 @@
# This mailmap just translates the weird addresses from the original import into git
# into proper addresses so that they are counted properly in git shortlog output.
#
Andrzej Zaborowski <balrogg@gmail.com> balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <aliguori@us.ibm.com> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162>
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>
Fabrice Bellard <fabrice@bellard.org> bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162>
Jocelyn Mayer <l_indien@magic.fr> j_mayer <j_mayer@c046a42c-6fe2-441c-8c8c-71466251a162>
Paul Brook <paul@codesourcery.com> pbrook <pbrook@c046a42c-6fe2-441c-8c8c-71466251a162>
Thiemo Seufer <ths@networkno.de> ths <ths@c046a42c-6fe2-441c-8c8c-71466251a162>
malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.

CODING_STYLE

@@ -1,4 +1,4 @@
QEMU Coding Style
Qemu Coding Style
=================
Please use the script checkpatch.pl in the scripts directory to check
@@ -44,8 +44,7 @@ Rationale:
3. Naming
Variables are lower_case_with_underscores; easy to type and read. Structured
type names are in CamelCase; harder to type but standing out. Enum type
names and function type names should also be in CamelCase. Scalar type
type names are in CamelCase; harder to type but standing out. Scalar type
names are lower_case_with_underscores_ending_with_a_t, like the POSIX
uint64_t and family. Note that this last convention contradicts POSIX
and is therefore likely to be changed.
@@ -69,10 +68,6 @@ keyword. Example:
printf("a was something else entirely.\n");
}
Note that 'else if' is considered a single statement; otherwise a long if/
else if/else if/.../else sequence would need an indent for every else
statement.
An exception is the opening brace for a function; for reasons of tradition
and clarity it comes on a line by itself:

Changelog

@@ -78,7 +78,7 @@ version 0.10.2:
- fix savevm/loadvm (Anthony Liguori)
- live migration: fix dirty tracking windows (Glauber Costa)
- live migration: improve error propagation (Glauber Costa)
- live migration: improve error propogation (Glauber Costa)
- qcow2: fix image creation for > ~2TB images (Chris Wright)
- hotplug: fix error handling for if= parameter (Eduardo Habkost)
- qcow2: fix data corruption (Nolan Leake)
@@ -386,7 +386,7 @@ version 0.5.3:
- support of CD-ROM change
- multiple network interface support
- initial x86-64 host support (Gwenole Beauchesne)
- lret to outer privilege fix (OS/2 install fix)
- lret to outer priviledge fix (OS/2 install fix)
- task switch fixes (SkyOS boot)
- VM save/restore commands
- new timer API
@@ -447,7 +447,7 @@ version 0.5.0:
- multi-target build
- fixed: no error code in hardware interrupts
- fixed: pop ss, mov ss, x and sti disable hardware irqs for the next insn
- correct single stepping through string operations
- correct single stepping thru string operations
- preliminary SPARC target support (Thomas M. Ogrisegg)
- tun-fd option (Rusty Russell)
- automatic IDE geometry detection

HACKING

@@ -77,13 +77,11 @@ avoided.
Use of the malloc/free/realloc/calloc/valloc/memalign/posix_memalign
APIs is not allowed in the QEMU codebase. Instead of these routines,
use the GLib memory allocation routines g_malloc/g_malloc0/g_new/
g_new0/g_realloc/g_free or QEMU's qemu_vmalloc/qemu_memalign/qemu_vfree
APIs.
use the replacement qemu_malloc/qemu_mallocz/qemu_realloc/qemu_free or
qemu_vmalloc/qemu_memalign/qemu_vfree APIs.
Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Please note that NULL check for the qemu_malloc result is redundant and
that qemu_malloc() call with zero size is not allowed.
Memory allocated by qemu_vmalloc or qemu_memalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32 and user
@@ -110,7 +108,7 @@ int qemu_strnlen(const char *s, int max_len)
There are also replacement character processing macros for isxyz and toxyz,
so instead of e.g. isalnum you should use qemu_isalnum.
Because of the memory management rules, you must use g_strdup/g_strndup
Because of the memory management rules, you must use qemu_strdup/qemu_strndup
instead of plain strdup/strndup.
5. Printf-style functions

LICENSE

@@ -6,7 +6,9 @@ The following points clarify the QEMU license:
GNU General Public License. Hence each source file contains its own
licensing information.
Many hardware device emulation sources are released under the BSD license.
In particular, the QEMU virtual CPU core library (libqemu.a) is
released under the GNU Lesser General Public License. Many hardware
device emulation sources are released under the BSD license.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).

MAINTAINERS

@@ -20,7 +20,7 @@ Descriptions of section entries:
Supported: Someone is actually paid to look after this.
Maintained: Someone actually looks after it.
Odd Fixes: It has a maintainer but they don't have time to do
much other than throw the odd patch in. See below.
much other than throw the odd patch in. See below..
Orphan: No current maintainer [but maybe you could take the
role as you write your new code].
Obsolete: Old code. Something tagged obsolete generally means
@@ -62,7 +62,6 @@ F: target-alpha/
ARM
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: target-arm/
@@ -78,7 +77,7 @@ F: target-lm32/
M68K
M: Paul Brook <paul@codesourcery.com>
S: Odd Fixes
S: Maintained
F: target-m68k/
MicroBlaze
@@ -88,12 +87,11 @@ F: target-microblaze/
MIPS
M: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
S: Maintained
F: target-mips/
PowerPC
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: target-ppc/
@@ -104,7 +102,7 @@ F: target-s390x/
SH4
M: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
S: Maintained
F: target-sh4/
SPARC
@@ -112,22 +110,11 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: target-sparc/
UniCore32
M: Guan Xuetao <gxt@mprc.pku.edu.cn>
S: Maintained
F: target-unicore32/
X86
M: qemu-devel@nongnu.org
S: Odd Fixes
F: target-i386/
Xtensa
M: Max Filippov <jcmvbkbc@gmail.com>
W: http://wiki.osll.spb.ru/doku.php?id=etc:users:jcmvbkbc:qemu-target-xtensa
S: Maintained
F: target-xtensa/
Guest CPU Cores (KVM):
----------------------
@@ -156,66 +143,15 @@ L: kvm@vger.kernel.org
S: Supported
F: target-i386/kvm.c
Guest CPU Cores (Xen):
----------------------
X86
M: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
L: xen-devel@lists.xensource.com
S: Supported
F: xen-*
F: */xen*
Hosts:
------
LINUX
L: qemu-devel@nongnu.org
S: Maintained
F: linux-*
F: linux-headers/
POSIX
L: qemu-devel@nongnu.org
S: Maintained
F: *posix*
W32, W64
L: qemu-devel@nongnu.org
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: *win32*
ARM Machines
------------
Exynos
M: Evgeny Voevodin <e.voevodin@samsung.com>
M: Maksim Kozlov <m.kozlov@samsung.com>
M: Igor Mitsyanko <i.mitsyanko@samsung.com>
M: Dmitry Solodkiy <d.solodkiy@samsung.com>
S: Maintained
F: hw/exynos*
Calxeda Highbank
M: Mark Langsdorf <mark.langsdorf@calxeda.com>
S: Supported
F: hw/highbank.c
F: hw/xgmac.c
Gumstix
M: qemu-devel@nongnu.org
S: Orphan
F: hw/gumstix.c
i.MX31
M: Peter Chubb <peter.chubb@nicta.com.au>
S: Odd fixes
F: hw/imx*
F: hw/kzm.c
Integrator CP
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/integratorcp.c
@@ -241,7 +177,6 @@ F: hw/palm.c
Real View
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/realview*
@@ -252,23 +187,14 @@ F: hw/spitz.c
Stellaris
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/stellaris.c
Versatile PB
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/versatilepb.c
Xilinx Zynq
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/xilinx_zynq.c
F: hw/zynq_slcr.c
F: hw/cadence_*
CRIS Machines
-------------
Axis Dev88
@@ -317,11 +243,6 @@ M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/petalogix_s3adsp1800.c
petalogix_ml605
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/petalogix_ml605_mmu.c
MIPS Machines
-------------
Jazz
@@ -348,31 +269,23 @@ PowerPC Machines
----------------
405
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc405_boards.c
New World
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc_newworld.c
F: hw/unin_pci.c
F: hw/dec_pci.[hc]
Old World
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc_oldworld.c
F: hw/grackle_pci.c
PReP
M: Andreas Färber <andreas.faerber@web.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
Prep
M: qemu-devel@nongnu.org
S: Orphan
F: hw/ppc_prep.c
F: hw/prep_pci.[hc]
SH4 Machines
------------
@@ -398,12 +311,6 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: hw/sun4u.c
Leon3
M: Fabien Chouteau <chouteau@adacore.com>
S: Maintained
F: hw/leon3.c
F: hw/grlib*
S390 Machines
-------------
S390 Virtio
@@ -411,33 +318,12 @@ M: Alexander Graf <agraf@suse.de>
S: Maintained
F: hw/s390-*.c
UniCore32 Machines
-------------
PKUnity-3 SoC initramfs-with-busybox
M: Guan Xuetao <gxt@mprc.pku.edu.cn>
S: Maintained
F: hw/puv3*
F: hw/unicore32/
X86 Machines
------------
PC
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: hw/pc.[ch]
F: hw/pc_piix.c
Xtensa Machines
---------------
sim
M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained
F: hw/xtensa_sim.c
Avnet LX60
M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained
F: hw/xtensa_lx60.c
F: hw/pc.[ch] hw/pc_piix.c
Devices
-------
@@ -446,11 +332,6 @@ M: Kevin Wolf <kwolf@redhat.com>
S: Odd Fixes
F: hw/ide/
OMAP
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/omap*
PCI
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
@@ -458,16 +339,11 @@ F: hw/pci*
F: hw/piix*
SCSI
M: Paolo Bonzini <pbonzini@redhat.com>
S: Supported
F: hw/virtio-scsi.*
F: hw/scsi*
T: git git://github.com/bonzini/qemu.git scsi-next
LSI53C895A
M: Paul Brook <paul@codesourcery.com>
M: Kevin Wolf <kwolf@redhat.com>
S: Odd Fixes
F: hw/lsi53c895a.c
F: hw/scsi*
USB
M: Gerd Hoffmann <kraxel@redhat.com>
@@ -485,11 +361,9 @@ S: Supported
F: hw/virtio*
virtio-9p
M: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
M: Venkateswararao Jujjuri (JV) <jvrao@linux.vnet.ibm.com>
S: Supported
F: hw/9pfs/
F: fsdev/
T: git git://github.com/kvaneesh/QEMU.git
F: hw/virtio-9p*
virtio-blk
M: Kevin Wolf <kwolf@redhat.com>
@@ -502,17 +376,6 @@ S: Supported
F: hw/virtio-serial*
F: hw/virtio-console*
Xilinx EDK
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/xilinx_axi*
F: hw/xilinx_uartlite.c
F: hw/xilinx_intc.c
F: hw/xilinx_ethlite.c
F: hw/xilinx_timer.c
F: hw/xilinx.h
Subsystems
----------
Audio
@@ -531,18 +394,6 @@ M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: qemu-char.c
CPU
M: Andreas Färber <afaerber@suse.de>
S: Supported
F: qom/cpu.c
F: include/qemu/cpu.h
Device Tree
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: device-tree.[ch]
GDB stub
M: qemu-devel@nongnu.org
S: Odd Fixes
@@ -562,11 +413,6 @@ M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: ui/
Cocoa graphics
M: Andreas Färber <andreas.faerber@web.de>
S: Odd Fixes
F: ui/cocoa.m
Main loop
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
@@ -580,38 +426,14 @@ F: monitor.c
Network device layer
M: Anthony Liguori <aliguori@us.ibm.com>
M: Stefan Hajnoczi <stefanha@gmail.com>
M: Mark McLoughlin <markmc@redhat.com>
S: Maintained
F: net/
T: git git://github.com/stefanha/qemu.git net
Network Block Device (NBD)
M: Paolo Bonzini <pbonzini@redhat.com>
S: Odd Fixes
F: block/nbd.c
F: nbd.*
F: qemu-nbd.c
T: git git://github.com/bonzini/qemu.git nbd-next
SLIRP
M: Jan Kiszka <jan.kiszka@siemens.com>
S: Maintained
M: qemu-devel@nongnu.org
S: Orphan
F: slirp/
T: git git://git.kiszka.org/qemu.git queues/slirp
Tracing
M: Stefan Hajnoczi <stefanha@gmail.com>
S: Maintained
F: trace/
F: scripts/tracetool.py
F: scripts/tracetool/
F: docs/tracing.txt
T: git git://github.com/stefanha/qemu.git tracing
Checkpatch
M: Blue Swirl <blauwirbel@gmail.com>
S: Odd Fixes
F: scripts/checkpatch.pl
Usermode Emulation
------------------
@@ -620,6 +442,11 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: bsd-user/
Darwin user
M: qemu-devel@nongnu.org
S: Orphan
F: darwin-user/
Linux user
M: Riku Voipio <riku.voipio@iki.fi>
S: Maintained
@@ -677,30 +504,3 @@ SPARC target
M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: tcg/sparc/
TCI target
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: tcg/tci/
Stable branches
---------------
Stable 1.0
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-1.0.git
S: Orphan
Stable 0.15
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-0.15.git
S: Orphan
Stable 0.14
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-0.14.git
S: Orphan
Stable 0.10
L: qemu-stable@nongnu.org
T: git git://git.qemu.org/qemu-stable-0.10.git
S: Orphan

Makefile

@@ -1,12 +1,13 @@
# Makefile for QEMU.
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
GENERATED_HEADERS = config-host.h trace.h qemu-options.def
ifeq ($(TRACE_BACKEND),dtrace)
GENERATED_HEADERS += trace-dtrace.h
endif
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
all:
all: build-all
include config-host.mak
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@@ -18,47 +19,30 @@ config-host.mak:
@exit 1
endif
GENERATED_HEADERS = config-host.h trace.h qemu-options.def
ifeq ($(TRACE_BACKEND),dtrace)
GENERATED_HEADERS += trace-dtrace.h
endif
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c trace.c
# Don't try to regenerate Makefile or configure
# We don't generate any of them
Makefile: ;
configure: ;
.PHONY: all clean cscope distclean dvi html info install install-doc \
pdf recurse-all speed test dist
pdf recurse-all speed tar tarbin test build-all
$(call set-vpath, $(SRC_PATH))
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 QMP/qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
else
DOCS=
endif
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory) BUILD_DIR=$(BUILD_DIR)
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory)
SUBDIR_DEVICES_MAK=$(patsubst %, %/config-devices.mak, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %/config-devices.mak.d, $(TARGET_DIRS))
ifeq ($(SUBDIR_DEVICES_MAK),)
config-all-devices.mak:
$(call quiet-command,echo '# no devices' > $@," GEN $@")
else
config-all-devices.mak: $(SUBDIR_DEVICES_MAK)
$(call quiet-command,cat $(SUBDIR_DEVICES_MAK) | grep =y | sort -u > $@," GEN $@")
endif
-include $(SUBDIR_DEVICES_MAK_DEP)
@@ -87,7 +71,7 @@ defconfig:
-include config-all-devices.mak
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
build-all: $(DOCS) $(TOOLS) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
@@ -96,18 +80,19 @@ qemu-options.def: $(SRC_PATH)/qemu-options.hx
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
subdir-%:
subdir-%: $(GENERATED_HEADERS)
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $* V="$(V)" TARGET_DIR="$*/" all,)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
subdir-libcacard: $(oslib-obj-y) $(trace-obj-y) qemu-timer-common.o
$(common-obj-y): $(GENERATED_HEADERS)
subdir-libcacard: $(oslib-obj-y) $(trace-obj-y) qemu-malloc.o qemu-timer-common.o
$(filter %-softmmu,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) $(common-obj-y) $(extra-obj-y) subdir-libdis
$(filter %-softmmu,$(SUBDIR_RULES)): $(trace-obj-y) $(common-obj-y) subdir-libdis
$(filter %-user,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) subdir-libdis-user subdir-libuser
$(filter %-user,$(SUBDIR_RULES)): $(GENERATED_HEADERS) $(trace-obj-y) subdir-libdis-user subdir-libuser
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
romsubdir-%:
@@ -121,17 +106,17 @@ audio/audio.o audio/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
QEMU_CFLAGS+=$(CURL_CFLAGS)
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS+=$(GLIB_CFLAGS)
ui/cocoa.o: ui/cocoa.m
ui/sdl.o audio/sdlaudio.o ui/sdl_zoom.o hw/baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
ui/sdl.o audio/sdlaudio.o ui/sdl_zoom.o baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
ui/vnc.o: QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
bt-host.o: QEMU_CFLAGS += $(BLUEZ_CFLAGS)
version.o: $(SRC_PATH)/version.rc config-host.h
version.o: $(SRC_PATH)/version.rc config-host.mak
$(call quiet-command,$(WINDRES) -I. -o $@ $<," RC $(TARGET_DIR)$@")
version-obj-$(CONFIG_WIN32) += version.o
@@ -146,72 +131,72 @@ libcacard.la:
install-libcacard:
@echo "libtool is missing, please install and rerun configure"; exit 1
else
libcacard.la: $(oslib-obj-y) qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
libcacard.la: $(GENERATED_HEADERS) $(oslib-obj-y) qemu-malloc.o qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" libcacard.la,)
install-libcacard: libcacard.la
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" install-libcacard,)
endif
######################################################################
qemu-img.o: qemu-img-cmds.h
qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-ga.o: $(GENERATED_HEADERS)
tools-obj-y = $(oslib-obj-y) $(trace-obj-y) qemu-tool.o qemu-timer.o \
qemu-timer-common.o main-loop.o notify.o \
iohandler.o cutils.o iov.o async.o
tools-obj-$(CONFIG_POSIX) += compatfd.o
qemu-img$(EXESUF): qemu-img.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
qemu-img$(EXESUF): qemu-img.o $(tools-obj-y) $(block-obj-y)
qemu-nbd$(EXESUF): qemu-nbd.o $(tools-obj-y) $(block-obj-y)
qemu-io$(EXESUF): qemu-io.o cmd.o $(tools-obj-y) $(block-obj-y)
qemu-nbd$(EXESUF): qemu-nbd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
vscclient$(EXESUF): $(libcacard-y) $(oslib-obj-y) $(trace-obj-y) $(tools-obj-y) qemu-timer-common.o libcacard/vscclient.o
$(call quiet-command,$(CC) $(LDFLAGS) -o $@ $^ $(libcacard_libs) $(LIBS)," LINK $@")
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o oslib-posix.o $(trace-obj-y)
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-io$(EXESUF): qemu-io.o cmd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
qemu-ga$(EXESUF): QEMU_CFLAGS += -I qga/qapi-generated
check-qint.o check-qstring.o check-qdict.o check-qlist.o check-qfloat.o check-qjson.o: $(GENERATED_HEADERS)
gen-out-type = $(subst .,-,$(suffix $@))
CHECK_PROG_DEPS = qemu-malloc.o $(oslib-obj-y) $(trace-obj-y) qemu-tool.o
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
check-qint: check-qint.o qint.o $(CHECK_PROG_DEPS)
check-qstring: check-qstring.o qstring.o $(CHECK_PROG_DEPS)
check-qdict: check-qdict.o qdict.o qfloat.o qint.o qstring.o qbool.o qlist.o $(CHECK_PROG_DEPS)
check-qlist: check-qlist.o qlist.o qint.o $(CHECK_PROG_DEPS)
check-qfloat: check-qfloat.o qfloat.o $(CHECK_PROG_DEPS)
check-qjson: check-qjson.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o qjson.o json-streamer.o json-lexer.o json-parser.o error.o qerror.o qemu-error.o $(CHECK_PROG_DEPS)
qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
$(qapi-obj-y): $(GENERATED_HEADERS)
qapi-dir := qapi-generated
test-visitor.o test-qmp-commands.o qemu-ga$(EXESUF): QEMU_CFLAGS += -I $(qapi-dir)
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
$(qapi-dir)/test-qapi-types.c: $(qapi-dir)/test-qapi-types.h
$(qapi-dir)/test-qapi-types.h: $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py -o "$(qapi-dir)" -p "test-" < $<, " GEN $@")
$(qapi-dir)/test-qapi-visit.c: $(qapi-dir)/test-qapi-visit.h
$(qapi-dir)/test-qapi-visit.h: $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py -o "$(qapi-dir)" -p "test-" < $<, " GEN $@")
$(qapi-dir)/test-qmp-commands.h: $(qapi-dir)/test-qmp-marshal.c
$(qapi-dir)/test-qmp-marshal.c: $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py -o "$(qapi-dir)" -p "test-" < $<, " GEN $@")
qapi-types.c qapi-types.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "." < $<, " GEN $@")
qapi-visit.c qapi-visit.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "." < $<, " GEN $@")
qmp-commands.h qmp-marshal.c :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -m -o "." < $<, " GEN $@")
$(qapi-dir)/qga-qapi-types.c: $(qapi-dir)/qga-qapi-types.h
$(qapi-dir)/qga-qapi-types.h: $(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qapi-visit.c: $(qapi-dir)/qga-qapi-visit.h
$(qapi-dir)/qga-qapi-visit.h: $(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
$(qapi-dir)/qga-qmp-marshal.c: $(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py -o "$(qapi-dir)" -p "qga-" < $<, " GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
test-visitor.o: $(addprefix $(qapi-dir)/, test-qapi-types.c test-qapi-types.h test-qapi-visit.c test-qapi-visit.h) $(qapi-obj-y)
test-visitor: test-visitor.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o $(qapi-obj-y) error.o osdep.o qemu-malloc.o $(oslib-obj-y) qjson.o json-streamer.o json-lexer.o json-parser.o qerror.o qemu-error.o qemu-tool.o $(qapi-dir)/test-qapi-visit.o $(qapi-dir)/test-qapi-types.o
qemu-ga$(EXESUF): qemu-ga.o $(qga-obj-y) $(tools-obj-y) $(qapi-obj-y) $(qobject-obj-y) $(version-obj-y)
test-qmp-commands.o: $(addprefix $(qapi-dir)/, test-qapi-types.c test-qapi-types.h test-qapi-visit.c test-qapi-visit.h test-qmp-marshal.c test-qmp-commands.h) $(qapi-obj-y)
test-qmp-commands: test-qmp-commands.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o $(qapi-obj-y) error.o osdep.o qemu-malloc.o $(oslib-obj-y) qjson.o json-streamer.o json-lexer.o json-parser.o qerror.o qemu-error.o qemu-tool.o $(qapi-dir)/test-qapi-visit.o $(qapi-dir)/test-qapi-types.o $(qapi-dir)/test-qmp-marshal.o module.o
QGALIB=qga/guest-agent-command-state.o qga/guest-agent-commands.o
QGALIB_GEN=$(addprefix $(qapi-dir)/, qga-qapi-types.c qga-qapi-types.h qga-qapi-visit.c qga-qmp-marshal.c)
$(QGALIB_GEN): $(GENERATED_HEADERS)
$(QGALIB) qemu-ga.o: $(QGALIB_GEN) $(qapi-obj-y)
qemu-ga$(EXESUF): qemu-ga.o $(QGALIB) qemu-tool.o qemu-error.o error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) $(qapi-obj-y) qemu-timer-common.o qemu-sockets.o module.o qapi/qmp-dispatch.o qapi/qmp-registry.o $(qapi-dir)/qga-qapi-visit.o $(qapi-dir)/qga-qapi-types.o $(qapi-dir)/qga-qmp-marshal.o
QEMULIBS=libhw32 libhw64 libuser libdis libdis-user
@@ -219,30 +204,20 @@ clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
find . -name '*.[od]' -exec rm -f {} +
rm -f *.a *.lo $(TOOLS) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f *.o *.d *.a *.lo $(TOOLS) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -Rf .libs
rm -f slirp/*.o slirp/*.d audio/*.o audio/*.d block/*.o block/*.d net/*.o net/*.d fsdev/*.o fsdev/*.d ui/*.o ui/*.d qapi/*.o qapi/*.d qga/*.o qga/*.d
rm -f qemu-img-cmds.h
rm -f trace.c trace.h trace.c-timestamp trace.h-timestamp
rm -f trace-dtrace.dtrace trace-dtrace.dtrace-timestamp
@# May not be present in GENERATED_HEADERS
rm -f trace-dtrace.h trace-dtrace.h-timestamp
rm -f $(foreach f,$(GENERATED_HEADERS),$(f) $(f)-timestamp)
rm -f $(foreach f,$(GENERATED_SOURCES),$(f) $(f)-timestamp)
rm -rf qapi-generated
rm -rf qga/qapi-generated
$(MAKE) -C tests/tcg clean
rm -rf $(qapi-dir)
$(MAKE) -C tests clean
for d in $(ALL_SUBDIRS) $(QEMULIBS) libcacard; do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
done
VERSION ?= $(shell cat VERSION)
dist: qemu-$(VERSION).tar.bz2
qemu-%.tar.bz2:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak
@@ -251,8 +226,6 @@ distclean: clean
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
rm -f qemu-doc.log qemu-doc.pdf qemu-doc.pg qemu-doc.toc qemu-doc.tp
rm -f qemu-doc.vr
rm -f config.log
rm -f linux-headers/asm
rm -f qemu-tech.info qemu-tech.aux qemu-tech.cp qemu-tech.dvi qemu-tech.fn qemu-tech.info qemu-tech.ky qemu-tech.log qemu-tech.pdf qemu-tech.pg qemu-tech.toc qemu-tech.tp qemu-tech.vr
for d in $(TARGET_DIRS) $(QEMULIBS); do \
rm -rf $$d || exit 1 ; \
@@ -260,67 +233,51 @@ distclean: clean
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr \
bepo
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr
ifdef INSTALL_BLOBS
BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
BLOBS=bios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
qemu-icon.bmp \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
mpc8544ds.dtb \
multiboot.bin linuxboot.bin \
s390-zipl.rom \
spapr-rtas.bin slof.bin \
palcode-clipper
spapr-rtas.bin slof.bin
else
BLOBS=
endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) QMP/qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DIR) "$(DESTDIR)$(docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 qemu-img.1 "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man8"
$(INSTALL_DATA) qemu-nbd.8 "$(DESTDIR)$(mandir)/man8"
endif
ifdef CONFIG_VIRTFS
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) fsdev/virtfs-proxy-helper.1 "$(DESTDIR)$(mandir)/man1"
endif
install-datadir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)"
install-sysconfig:
$(INSTALL_DIR) "$(DESTDIR)$(sysconfdir)/qemu"
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(sysconfdir)/qemu"
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/cpus-x86_64.conf "$(DESTDIR)$(qemu_datadir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig install-datadir
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
ifneq ($(TOOLS),)
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(HELPERS-y),)
$(INSTALL_DIR) "$(DESTDIR)$(libexecdir)"
$(INSTALL_PROG) $(STRIP_OPT) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
endif
ifneq ($(BLOBS),)
$(INSTALL_DIR) "$(DESTDIR)$(datadir)"
set -e; for x in $(BLOBS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(qemu_datadir)"; \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(datadir)"; \
done
endif
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/keymaps"
$(INSTALL_DIR) "$(DESTDIR)$(datadir)/keymaps"
set -e; for x in $(KEYMAPS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(datadir)/keymaps"; \
done
for d in $(TARGET_DIRS); do \
$(MAKE) -C $$d $@ || exit 1 ; \
@@ -328,7 +285,7 @@ endif
# various test targets
test speed: all
$(MAKE) -C tests/tcg $@
$(MAKE) -C tests $@
.PHONY: TAGS
TAGS:
@@ -336,7 +293,7 @@ TAGS:
cscope:
rm -f ./cscope.*
find "$(SRC_PATH)" -name "*.[chsS]" -print | sed 's,^\./,,' > ./cscope.files
find . -name "*.[ch]" -print | sed 's,^\./,,' > ./cscope.files
cscope -b
# documentation
@@ -347,7 +304,7 @@ TEXIFLAG=$(if $(V),,--quiet)
$(call quiet-command,texi2dvi $(TEXIFLAG) -I . $<," GEN $@")
%.html: %.texi
$(call quiet-command,LC_ALL=C $(MAKEINFO) $(MAKEINFOFLAGS) --html $< -o $@, \
$(call quiet-command,$(MAKEINFO) $(MAKEINFOFLAGS) --html $< -o $@, \
" GEN $@")
%.info: %.texi
@@ -371,25 +328,19 @@ qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu.pod > $@, \
pod2man --section=1 --center=" " --release=" " qemu.pod > $@, \
" GEN $@")
qemu-img.1: qemu-img.texi qemu-img-cmds.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-img.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu-img.pod > $@, \
" GEN $@")
fsdev/virtfs-proxy-helper.1: fsdev/virtfs-proxy-helper.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< fsdev/virtfs-proxy-helper.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " fsdev/virtfs-proxy-helper.pod > $@, \
pod2man --section=1 --center=" " --release=" " qemu-img.pod > $@, \
" GEN $@")
qemu-nbd.8: qemu-nbd.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-nbd.pod && \
$(POD2MAN) --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
pod2man --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
" GEN $@")
dvi: qemu-doc.dvi qemu-tech.dvi
@@ -401,10 +352,52 @@ qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
Makefile: $(GENERATED_HEADERS)
VERSION ?= $(shell cat VERSION)
FILE = qemu-$(VERSION)
# tar release (use 'make -k tar' on a checkouted tree)
tar:
rm -rf /tmp/$(FILE)
cp -r . /tmp/$(FILE)
cd /tmp && tar zcvf ~/$(FILE).tar.gz $(FILE) --exclude CVS --exclude .git --exclude .svn
rm -rf /tmp/$(FILE)
SYSTEM_TARGETS=$(filter %-softmmu,$(TARGET_DIRS))
SYSTEM_PROGS=$(patsubst qemu-system-i386,qemu, \
$(patsubst %-softmmu,qemu-system-%, \
$(SYSTEM_TARGETS)))
USER_TARGETS=$(filter %-user,$(TARGET_DIRS))
USER_PROGS=$(patsubst %-bsd-user,qemu-%, \
$(patsubst %-darwin-user,qemu-%, \
$(patsubst %-linux-user,qemu-%, \
$(USER_TARGETS))))
# generate a binary distribution
tarbin:
cd / && tar zcvf ~/qemu-$(VERSION)-$(ARCH).tar.gz \
$(patsubst %,$(bindir)/%, $(SYSTEM_PROGS)) \
$(patsubst %,$(bindir)/%, $(USER_PROGS)) \
$(bindir)/qemu-img \
$(bindir)/qemu-nbd \
$(datadir)/bios.bin \
$(datadir)/vgabios.bin \
$(datadir)/vgabios-cirrus.bin \
$(datadir)/ppc_rom.bin \
$(datadir)/openbios-sparc32 \
$(datadir)/openbios-sparc64 \
$(datadir)/openbios-ppc \
$(datadir)/pxe-e1000.rom \
$(datadir)/pxe-eepro100.rom \
$(datadir)/pxe-ne2k_pci.rom \
$(datadir)/pxe-pcnet.rom \
$(datadir)/pxe-rtl8139.rom \
$(datadir)/pxe-virtio.rom \
$(docdir)/qemu-doc.html \
$(docdir)/qemu-tech.html \
$(mandir)/man1/qemu.1 \
$(mandir)/man1/qemu-img.1 \
$(mandir)/man8/qemu-nbd.8
# Include automatically generated dependency files
# Dependencies in Makefile.objs files come from our recursive subdir rules
-include $(wildcard *.d tests/*.d)
-include $(wildcard *.d audio/*.d slirp/*.d block/*.d net/*.d ui/*.d qapi/*.d qga/*.d)

Makefile.dis

@@ -18,3 +18,6 @@ all: $(libdis-y)
clean:
rm -f *.o *.d *.a *~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

Makefile.hw

@@ -7,10 +7,9 @@ include $(SRC_PATH)/rules.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH))
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS+=-I.. -I$(SRC_PATH)/fpu
include $(SRC_PATH)/Makefile.objs
@@ -19,5 +18,7 @@ all: $(hw-obj-y)
@true
clean:
rm -f $(addsuffix *.o, $(sort $(dir $(hw-obj-y))))
rm -f $(addsuffix *.d, $(sort $(dir $(hw-obj-y))))
rm -f *.o */*.o *.d */*.d *.a */*.a *~ */*~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

Makefile.objs

@@ -1,21 +1,8 @@
#######################################################################
# Target-independent parts used in system and user emulation
universal-obj-y =
universal-obj-y += qemu-log.o
#######################################################################
# QObject
qobject-obj-y = qint.o qstring.o qdict.o qlist.o qfloat.o qbool.o
qobject-obj-y += qjson.o json-lexer.o json-streamer.o json-parser.o
qobject-obj-y += qerror.o error.o qemu-error.o
universal-obj-y += $(qobject-obj-y)
#######################################################################
# QOM
qom-obj-y = qom/
universal-obj-y += $(qom-obj-y)
qobject-obj-y += qerror.o error.o
#######################################################################
# oslib-obj-y is code depending on the OS (win32 vs posix)
@@ -23,84 +10,153 @@ oslib-obj-y = osdep.o
oslib-obj-$(CONFIG_WIN32) += oslib-win32.o qemu-thread-win32.o
oslib-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-thread-posix.o
#######################################################################
# coroutines
coroutine-obj-y = qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
coroutine-obj-y += qemu-coroutine-sleep.o
ifeq ($(CONFIG_UCONTEXT_COROUTINE),y)
coroutine-obj-$(CONFIG_POSIX) += coroutine-ucontext.o
else
ifeq ($(CONFIG_SIGALTSTACK_COROUTINE),y)
coroutine-obj-$(CONFIG_POSIX) += coroutine-sigaltstack.o
else
coroutine-obj-$(CONFIG_POSIX) += coroutine-gthread.o
endif
endif
coroutine-obj-$(CONFIG_WIN32) += coroutine-win32.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = cutils.o iov.o cache-utils.o qemu-option.o module.o async.o
block-obj-y = cutils.o cache-utils.o qemu-malloc.o qemu-option.o module.o async.o
block-obj-y += nbd.o block.o aio.o aes.o qemu-config.o qemu-progress.o qemu-sockets.o
block-obj-y += $(coroutine-obj-y) $(qobject-obj-y) $(version-obj-y)
block-obj-$(CONFIG_POSIX) += posix-aio-compat.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += block/
block-nested-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-nested-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-nested-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-nested-y += qed-check.o
block-nested-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
block-nested-$(CONFIG_WIN32) += raw-win32.o
block-nested-$(CONFIG_POSIX) += raw-posix.o
block-nested-$(CONFIG_CURL) += curl.o
block-nested-$(CONFIG_RBD) += rbd.o
block-obj-y += $(addprefix block/, $(block-nested-y))
net-obj-y = net.o
net-nested-y = queue.o checksum.o util.o
net-nested-y += socket.o
net-nested-y += dump.o
net-nested-$(CONFIG_POSIX) += tap.o
net-nested-$(CONFIG_LINUX) += tap-linux.o
net-nested-$(CONFIG_WIN32) += tap-win32.o
net-nested-$(CONFIG_BSD) += tap-bsd.o
net-nested-$(CONFIG_SOLARIS) += tap-solaris.o
net-nested-$(CONFIG_AIX) += tap-aix.o
net-nested-$(CONFIG_HAIKU) += tap-haiku.o
net-nested-$(CONFIG_SLIRP) += slirp.o
net-nested-$(CONFIG_VDE) += vde.o
net-obj-y += $(addprefix net/, $(net-nested-y))
ifeq ($(CONFIG_VIRTIO)$(CONFIG_VIRTFS)$(CONFIG_PCI),yyy)
# Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
# only pull in the actual virtio-9p device if we also enabled virtio.
CONFIG_REALLY_VIRTFS=y
fsdev-nested-y = qemu-fsdev.o
else
fsdev-nested-y = qemu-fsdev-dummy.o
endif
fsdev-obj-$(CONFIG_VIRTFS) += $(addprefix fsdev/, $(fsdev-nested-y))
######################################################################
# Target independent part of system emulation. The long term path is to
# suppress *all* target specific code in case of system emulation, i.e. a
# single QEMU executable should support all CPUs and machines.
# libqemu_common.a: Target independent part of system emulation. The
# long term path is to suppress *all* target specific code in case of
# system emulation, i.e. a single QEMU executable should support all
# CPUs and machines.
common-obj-y = $(block-obj-y) blockdev.o
common-obj-y += net.o net/
common-obj-y += qom/
common-obj-y += readline.o console.o cursor.o
common-obj-y += $(net-obj-y)
common-obj-y += $(qobject-obj-y)
common-obj-$(CONFIG_LINUX) += $(fsdev-obj-$(CONFIG_LINUX))
common-obj-y += readline.o console.o cursor.o qemu-error.o
common-obj-y += $(oslib-obj-y)
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
extra-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += tcg-runtime.o host-utils.o main-loop.o
common-obj-y += input.o
common-obj-y += tcg-runtime.o host-utils.o
common-obj-y += irq.o ioport.o input.o
common-obj-$(CONFIG_PTIMER) += ptimer.o
common-obj-$(CONFIG_MAX7310) += max7310.o
common-obj-$(CONFIG_WM8750) += wm8750.o
common-obj-$(CONFIG_TWL92230) += twl92230.o
common-obj-$(CONFIG_TSC2005) += tsc2005.o
common-obj-$(CONFIG_LM832X) += lm832x.o
common-obj-$(CONFIG_TMP105) += tmp105.o
common-obj-$(CONFIG_STELLARIS_INPUT) += stellaris_input.o
common-obj-$(CONFIG_SSD0303) += ssd0303.o
common-obj-$(CONFIG_SSD0323) += ssd0323.o
common-obj-$(CONFIG_ADS7846) += ads7846.o
common-obj-$(CONFIG_MAX111X) += max111x.o
common-obj-$(CONFIG_DS1338) += ds1338.o
common-obj-y += i2c.o smbus.o smbus_eeprom.o
common-obj-y += eeprom93xx.o
common-obj-y += scsi-disk.o cdrom.o
common-obj-y += scsi-generic.o scsi-bus.o
common-obj-y += usb.o usb-hub.o usb-$(HOST_USB).o usb-hid.o usb-msd.o usb-wacom.o
common-obj-y += usb-serial.o usb-net.o usb-bus.o usb-desc.o
common-obj-$(CONFIG_SSI) += ssi.o
common-obj-$(CONFIG_SSI_SD) += ssi-sd.o
common-obj-$(CONFIG_SD) += sd.o
common-obj-y += bt.o bt-host.o bt-vhci.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o usb-bt.o
common-obj-y += bt-hci-csr.o
common-obj-y += buffered_file.o migration.o migration-tcp.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += qemu-char.o savevm.o #aio.o
common-obj-y += msmouse.o ps2.o
common-obj-y += qdev.o qdev-properties.o
common-obj-y += block-migration.o iohandler.o
common-obj-y += pflib.o
common-obj-y += bitmap.o bitops.o
common-obj-y += page_cache.o
common-obj-$(CONFIG_BRLAPI) += baum.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_WIN32) += version.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-$(CONFIG_SPICE) += ui/spice-core.o ui/spice-input.o ui/spice-display.o spice-qemu-char.o
common-obj-y += audio/
common-obj-y += hw/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
audio-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
audio-obj-$(CONFIG_SDL) += sdlaudio.o
audio-obj-$(CONFIG_OSS) += ossaudio.o
audio-obj-$(CONFIG_SPICE) += spiceaudio.o
audio-obj-$(CONFIG_COREAUDIO) += coreaudio.o
audio-obj-$(CONFIG_ALSA) += alsaaudio.o
audio-obj-$(CONFIG_DSOUND) += dsoundaudio.o
audio-obj-$(CONFIG_FMOD) += fmodaudio.o
audio-obj-$(CONFIG_ESD) += esdaudio.o
audio-obj-$(CONFIG_PA) += paaudio.o
audio-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
audio-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
audio-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
audio-obj-y += wavcapture.o
common-obj-y += $(addprefix audio/, $(audio-obj-y))
ui-obj-y += keymaps.o
ui-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o
ui-obj-$(CONFIG_COCOA) += cocoa.o
ui-obj-$(CONFIG_CURSES) += curses.o
vnc-obj-y += vnc.o d3des.o
vnc-obj-y += vnc-enc-zlib.o vnc-enc-hextile.o
vnc-obj-y += vnc-enc-tight.o vnc-palette.o
vnc-obj-y += vnc-enc-zrle.o
vnc-obj-$(CONFIG_VNC_TLS) += vnc-tls.o vnc-auth-vencrypt.o
vnc-obj-$(CONFIG_VNC_SASL) += vnc-auth-sasl.o
ifdef CONFIG_VNC_THREAD
vnc-obj-y += vnc-jobs-async.o
else
vnc-obj-y += vnc-jobs-sync.o
endif
common-obj-y += $(addprefix ui/, $(ui-obj-y))
common-obj-$(CONFIG_VNC) += $(addprefix ui/, $(vnc-obj-y))
common-obj-y += iov.o acl.o
common-obj-$(CONFIG_POSIX) += compatfd.o
common-obj-y += notify.o event_notifier.o
common-obj-y += qemu-timer.o qemu-timer-common.o
common-obj-$(CONFIG_SLIRP) += slirp/
slirp-obj-y = cksum.o if.o ip_icmp.o ip_input.o ip_output.o
slirp-obj-y += slirp.o mbuf.o misc.o sbuf.o socket.o tcp_input.o tcp_output.o
slirp-obj-y += tcp_subr.o tcp_timer.o udp.o bootp.o tftp.o
common-obj-$(CONFIG_SLIRP) += $(addprefix slirp/, $(slirp-obj-y))
######################################################################
# libseccomp
ifeq ($(CONFIG_SECCOMP),y)
common-obj-y += qemu-seccomp.o
endif
# xen backend driver support
common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
common-obj-$(CONFIG_XEN_BACKEND) += xen_console.o xenfb.o xen_disk.o xen_nic.o
######################################################################
# libuser
@@ -108,16 +164,137 @@ endif
user-obj-y =
user-obj-y += envlist.o path.o
user-obj-y += tcg-runtime.o host-utils.o
user-obj-y += cutils.o iov.o cache-utils.o
user-obj-y += module.o
user-obj-y += qemu-user.o
user-obj-y += $(trace-obj-y)
user-obj-y += qom/
user-obj-y += cutils.o cache-utils.o
######################################################################
# libhw
hw-obj-y = vl.o dma-helpers.o qtest.o hw/
hw-obj-y =
hw-obj-y += vl.o loader.o
hw-obj-$(CONFIG_VIRTIO) += virtio-console.o
hw-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
hw-obj-y += fw_cfg.o
hw-obj-$(CONFIG_PCI) += pci.o pci_bridge.o
hw-obj-$(CONFIG_PCI) += msix.o msi.o
hw-obj-$(CONFIG_PCI) += pci_host.o pcie_host.o
hw-obj-$(CONFIG_PCI) += ioh3420.o xio3130_upstream.o xio3130_downstream.o
hw-obj-y += watchdog.o
hw-obj-$(CONFIG_ISA_MMIO) += isa_mmio.o
hw-obj-$(CONFIG_ECC) += ecc.o
hw-obj-$(CONFIG_NAND) += nand.o
hw-obj-$(CONFIG_PFLASH_CFI01) += pflash_cfi01.o
hw-obj-$(CONFIG_PFLASH_CFI02) += pflash_cfi02.o
hw-obj-$(CONFIG_M48T59) += m48t59.o
hw-obj-$(CONFIG_ESCC) += escc.o
hw-obj-$(CONFIG_EMPTY_SLOT) += empty_slot.o
hw-obj-$(CONFIG_SERIAL) += serial.o
hw-obj-$(CONFIG_PARALLEL) += parallel.o
hw-obj-$(CONFIG_I8254) += i8254.o
hw-obj-$(CONFIG_PCSPK) += pcspk.o
hw-obj-$(CONFIG_PCKBD) += pckbd.o
hw-obj-$(CONFIG_USB_UHCI) += usb-uhci.o
hw-obj-$(CONFIG_USB_OHCI) += usb-ohci.o
hw-obj-$(CONFIG_USB_EHCI) += usb-ehci.o
hw-obj-$(CONFIG_FDC) += fdc.o
hw-obj-$(CONFIG_ACPI) += acpi.o acpi_piix4.o
hw-obj-$(CONFIG_APM) += pm_smbus.o apm.o
hw-obj-$(CONFIG_DMA) += dma.o
hw-obj-$(CONFIG_HPET) += hpet.o
hw-obj-$(CONFIG_APPLESMC) += applesmc.o
hw-obj-$(CONFIG_SMARTCARD) += usb-ccid.o ccid-card-passthru.o
hw-obj-$(CONFIG_SMARTCARD_NSS) += ccid-card-emulated.o
hw-obj-$(CONFIG_USB_REDIR) += usb-redir.o
# PPC devices
hw-obj-$(CONFIG_OPENPIC) += openpic.o
hw-obj-$(CONFIG_PREP_PCI) += prep_pci.o
# Mac shared devices
hw-obj-$(CONFIG_MACIO) += macio.o
hw-obj-$(CONFIG_CUDA) += cuda.o
hw-obj-$(CONFIG_ADB) += adb.o
hw-obj-$(CONFIG_MAC_NVRAM) += mac_nvram.o
hw-obj-$(CONFIG_MAC_DBDMA) += mac_dbdma.o
# OldWorld PowerMac
hw-obj-$(CONFIG_HEATHROW_PIC) += heathrow_pic.o
hw-obj-$(CONFIG_GRACKLE_PCI) += grackle_pci.o
# NewWorld PowerMac
hw-obj-$(CONFIG_UNIN_PCI) += unin_pci.o
hw-obj-$(CONFIG_DEC_PCI) += dec_pci.o
# PowerPC E500 boards
hw-obj-$(CONFIG_PPCE500_PCI) += ppce500_pci.o
# MIPS devices
hw-obj-$(CONFIG_PIIX4) += piix4.o
# PCI watchdog devices
hw-obj-$(CONFIG_PCI) += wdt_i6300esb.o
hw-obj-$(CONFIG_PCI) += pcie.o pcie_aer.o pcie_port.o
# PCI network cards
hw-obj-$(CONFIG_NE2000_PCI) += ne2000.o
hw-obj-$(CONFIG_EEPRO100_PCI) += eepro100.o
hw-obj-$(CONFIG_PCNET_PCI) += pcnet-pci.o
hw-obj-$(CONFIG_PCNET_COMMON) += pcnet.o
hw-obj-$(CONFIG_E1000_PCI) += e1000.o
hw-obj-$(CONFIG_RTL8139_PCI) += rtl8139.o
hw-obj-$(CONFIG_SMC91C111) += smc91c111.o
hw-obj-$(CONFIG_LAN9118) += lan9118.o
hw-obj-$(CONFIG_NE2000_ISA) += ne2000-isa.o
# IDE
hw-obj-$(CONFIG_IDE_CORE) += ide/core.o ide/atapi.o
hw-obj-$(CONFIG_IDE_QDEV) += ide/qdev.o
hw-obj-$(CONFIG_IDE_PCI) += ide/pci.o
hw-obj-$(CONFIG_IDE_ISA) += ide/isa.o
hw-obj-$(CONFIG_IDE_PIIX) += ide/piix.o
hw-obj-$(CONFIG_IDE_CMD646) += ide/cmd646.o
hw-obj-$(CONFIG_IDE_MACIO) += ide/macio.o
hw-obj-$(CONFIG_IDE_VIA) += ide/via.o
hw-obj-$(CONFIG_AHCI) += ide/ahci.o
hw-obj-$(CONFIG_AHCI) += ide/ich.o
# SCSI layer
hw-obj-$(CONFIG_LSI_SCSI_PCI) += lsi53c895a.o
hw-obj-$(CONFIG_ESP) += esp.o
hw-obj-y += dma-helpers.o sysbus.o isa-bus.o
hw-obj-y += qdev-addr.o
# VGA
hw-obj-$(CONFIG_VGA_PCI) += vga-pci.o
hw-obj-$(CONFIG_VGA_ISA) += vga-isa.o
hw-obj-$(CONFIG_VGA_ISA_MM) += vga-isa-mm.o
hw-obj-$(CONFIG_VMWARE_VGA) += vmware_vga.o
hw-obj-$(CONFIG_VMMOUSE) += vmmouse.o
hw-obj-$(CONFIG_RC4030) += rc4030.o
hw-obj-$(CONFIG_DP8393X) += dp8393x.o
hw-obj-$(CONFIG_DS1225Y) += ds1225y.o
hw-obj-$(CONFIG_MIPSNET) += mipsnet.o
# Sound
sound-obj-y =
sound-obj-$(CONFIG_SB16) += sb16.o
sound-obj-$(CONFIG_ES1370) += es1370.o
sound-obj-$(CONFIG_AC97) += ac97.o
sound-obj-$(CONFIG_ADLIB) += fmopl.o adlib.o
sound-obj-$(CONFIG_GUS) += gus.o gusemu_hal.o gusemu_mixer.o
sound-obj-$(CONFIG_CS4231A) += cs4231a.o
sound-obj-$(CONFIG_HDA) += intel-hda.o hda-audio.o
adlib.o fmopl.o: QEMU_CFLAGS += -DBUILD_Y8950=0
hw-obj-$(CONFIG_SOUND) += $(sound-obj-y)
9pfs-nested-$(CONFIG_VIRTFS) = virtio-9p.o virtio-9p-debug.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-local.o virtio-9p-xattr.o
9pfs-nested-$(CONFIG_VIRTFS) += virtio-9p-xattr-user.o virtio-9p-posix-acl.o
hw-obj-$(CONFIG_REALLY_VIRTFS) += $(addprefix 9pfs/, $(9pfs-nested-y))
######################################################################
# libdis
@@ -137,28 +314,22 @@ libdis-$(CONFIG_PPC_DIS) += ppc-dis.o
libdis-$(CONFIG_S390_DIS) += s390-dis.o
libdis-$(CONFIG_SH4_DIS) += sh4-dis.o
libdis-$(CONFIG_SPARC_DIS) += sparc-dis.o
libdis-$(CONFIG_LM32_DIS) += lm32-dis.o
######################################################################
# trace
ifeq ($(TRACE_BACKEND),dtrace)
TRACE_H_EXTRA_DEPS=trace-dtrace.h
trace.h: trace.h-timestamp trace-dtrace.h
else
trace.h: trace.h-timestamp
endif
trace.h: trace.h-timestamp $(TRACE_H_EXTRA_DEPS)
trace.h-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=h \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace.h")
trace.h-timestamp: $(SRC_PATH)/trace-events config-host.mak
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool --$(TRACE_BACKEND) -h < $< > $@," GEN trace.h")
@cmp -s $@ trace.h || cp $@ trace.h
trace.c: trace.c-timestamp
trace.c-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=c \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace.c")
trace.c-timestamp: $(SRC_PATH)/trace-events config-host.mak
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool --$(TRACE_BACKEND) -c < $< > $@," GEN trace.c")
@cmp -s $@ trace.c || cp $@ trace.c
trace.o: trace.c $(GENERATED_HEADERS)
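# The *-timestamp pattern used above keeps incremental builds quiet: the
# generator always rewrites the -timestamp file, but the real header is only
# replaced when its content actually changed (the "cmp -s ... || cp ..." step),
# so objects depending on trace.h are not rebuilt needlessly.  A minimal sketch
# of the same idiom, with made-up names (foo.h, foo.def, GEN_TOOL), would be:
#
#   foo.h: foo.h-timestamp
#   foo.h-timestamp: foo.def
#           $(GEN_TOOL) < $< > $@
#           @cmp -s $@ foo.h || cp $@ foo.h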
@@ -170,81 +341,47 @@ trace-dtrace.h: trace-dtrace.dtrace
# but that gets picked up by QEMU's Makefile as an external dependency
# rule file. So we use '.dtrace' instead
trace-dtrace.dtrace: trace-dtrace.dtrace-timestamp
trace-dtrace.dtrace-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=d \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace-dtrace.dtrace")
trace-dtrace.dtrace-timestamp: $(SRC_PATH)/trace-events config-host.mak
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool --$(TRACE_BACKEND) -d < $< > $@," GEN trace-dtrace.dtrace")
@cmp -s $@ trace-dtrace.dtrace || cp $@ trace-dtrace.dtrace
trace-dtrace.o: trace-dtrace.dtrace $(GENERATED_HEADERS)
$(call quiet-command,dtrace -o $@ -G -s $<, " GEN trace-dtrace.o")
ifeq ($(LIBTOOL),)
trace-dtrace.lo: trace-dtrace.dtrace
@echo "missing libtool. please install and rerun configure."; exit 1
else
trace-dtrace.lo: trace-dtrace.dtrace
$(call quiet-command,$(LIBTOOL) --mode=compile --tag=CC dtrace -o $@ -G -s $<, " lt GEN trace-dtrace.o")
$(call quiet-command,libtool --mode=compile --tag=CC dtrace -o $@ -G -s $<, " lt GEN trace-dtrace.o")
endif
trace/simple.o: trace/simple.c $(GENERATED_HEADERS)
simpletrace.o: simpletrace.c $(GENERATED_HEADERS)
trace-obj-$(CONFIG_TRACE_DTRACE) += trace-dtrace.o
ifneq ($(TRACE_BACKEND),dtrace)
ifeq ($(TRACE_BACKEND),dtrace)
trace-obj-y = trace-dtrace.o
else
trace-obj-y = trace.o
ifeq ($(TRACE_BACKEND),simple)
trace-obj-y += simpletrace.o
user-obj-y += qemu-timer-common.o
endif
endif
trace-obj-$(CONFIG_TRACE_DEFAULT) += trace/default.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += trace/simple.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += qemu-timer-common.o
trace-obj-$(CONFIG_TRACE_STDERR) += trace/stderr.o
trace-obj-y += trace/control.o
$(trace-obj-y): $(GENERATED_HEADERS)
######################################################################
# smartcard
libcacard-y += libcacard/cac.o libcacard/event.o
libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
libcacard-y = cac.o event.o vcard.o vreader.o vcard_emul_nss.o vcard_emul_type.o card_7816.o
######################################################################
# qapi
qapi-obj-y = qapi/
qapi-obj-y += qapi-types.o qapi-visit.o
common-obj-y += qmp-marshal.o qapi-visit.o qapi-types.o
common-obj-y += qmp.o hmp.o
universal-obj-y += $(qapi-obj-y)
######################################################################
# guest agent
qga-obj-y = qga/ qemu-ga.o module.o
qga-obj-$(CONFIG_WIN32) += oslib-win32.o
qga-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-sockets.o qemu-option.o
qapi-nested-y = qapi-visit-core.o qmp-input-visitor.o qmp-output-visitor.o qapi-dealloc-visitor.o
qapi-nested-y += qmp-registry.o qmp-dispatch.o
qapi-obj-y = $(addprefix qapi/, $(qapi-nested-y))
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
QEMU_CFLAGS+=$(GLIB_CFLAGS)
nested-vars += \
hw-obj-y \
qga-obj-y \
block-obj-y \
qom-obj-y \
qapi-obj-y \
user-obj-y \
common-obj-y \
extra-obj-y
dummy := $(call unnest-vars)
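# Rough intent of the nested-vars/unnest-vars machinery above (the helper
# itself lives in rules.mak; this is only a summary): each listed variable may
# contain directory entries such as "hw/" or "qapi/", and unnest-vars expands
# those entries into the objects declared by that directory's Makefile.objs,
# prefixed with the directory name, so the final lists name only real object
# files.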
vl.o: QEMU_CFLAGS+=$(GLIB_CFLAGS)


@@ -1,5 +1,10 @@
# -*- Mode: makefile -*-
GENERATED_HEADERS = config-target.h
CONFIG_NO_PCI = $(if $(subst n,,$(CONFIG_PCI)),n,y)
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
include ../config-host.mak
include config-devices.mak
include config-target.mak
@@ -8,30 +13,28 @@ ifneq ($(HWDIR),)
include $(HWDIR)/config.mak
endif
$(call set-vpath, $(SRC_PATH))
TARGET_PATH=$(SRC_PATH)/target-$(TARGET_BASE_ARCH)
$(call set-vpath, $(SRC_PATH):$(TARGET_PATH):$(SRC_PATH)/hw)
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
endif
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target-$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS += -I.. -I$(TARGET_PATH) -DNEED_CPU_H
QEMU_CFLAGS+=-I$(SRC_PATH)/include
include $(SRC_PATH)/Makefile.objs
ifdef CONFIG_USER_ONLY
# user emulator name
QEMU_PROG=qemu-$(TARGET_ARCH2)
else
# system emulator name
ifneq (,$(findstring -mwindows,$(LIBS)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_ARCH2)w$(EXESUF)
endif # windows executable
ifeq ($(TARGET_ARCH), i386)
QEMU_PROG=qemu$(EXESUF)
else
QEMU_PROG=qemu-system-$(TARGET_ARCH2)$(EXESUF)
endif
endif
PROGS=$(QEMU_PROG)
ifdef QEMU_PROGW
PROGS+=$(QEMU_PROGW)
endif
STPFILES=
ifndef CONFIG_HAIKU
@@ -41,7 +44,7 @@ endif
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
ifdef CONFIG_SYSTEMTAP_TRACE
stap: $(QEMU_PROG).stp
ifdef CONFIG_USER_ONLY
@@ -50,14 +53,13 @@ else
TARGET_TYPE=system
endif
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backend=$(TRACE_BACKEND) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-arch=$(TARGET_ARCH) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(QEMU_PROG).stp")
$(QEMU_PROG).stp:
$(call quiet-command,sh $(SRC_PATH)/scripts/tracetool \
--$(TRACE_BACKEND) \
--binary $(bindir)/$(QEMU_PROG) \
--target-arch $(TARGET_ARCH) \
--target-type $(TARGET_TYPE) \
--stap < $(SRC_PATH)/trace-events > $(QEMU_PROG).stp," GEN $(QEMU_PROG).stp")
else
stap:
endif
@@ -69,111 +71,332 @@ all: $(PROGS) stap
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-y += fpu/softfloat.o
obj-y += disas.o
obj-$(CONFIG_TCI_DIS) += tci-dis.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
libobj-y = exec.o translate-all.o cpu-exec.o translate.o
libobj-y += tcg/tcg.o
libobj-y += fpu/softfloat.o
libobj-y += op_helper.o helper.o
ifeq ($(TARGET_BASE_ARCH), i386)
libobj-y += cpuid.o
endif
libobj-$(CONFIG_NEED_MMU) += mmu.o
libobj-$(TARGET_ARM) += neon_helper.o iwmmxt_helper.o
tci-dis.o: QEMU_CFLAGS += -I$(SRC_PATH)/tcg -I$(SRC_PATH)/tcg/tci
libobj-y += disas.o
$(libobj-y): $(GENERATED_HEADERS)
# libqemu
translate.o: translate.c cpu.h
translate-all.o: translate-all.c cpu.h
tcg/tcg.o: cpu.h
# HELPER_CFLAGS is used for all the code compiled with static register
# variables
op_helper.o user-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
# Note: this is a workaround. The real fix is to avoid compiling
# cpu_signal_handler() in user-exec.c.
signal.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
#########################################################
# Linux user emulator target
ifdef CONFIG_LINUX_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
$(call set-vpath, $(SRC_PATH)/linux-user:$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR))
obj-y += linux-user/
obj-y += gdbstub.o thunk.o user-exec.o $(oslib-obj-y)
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y = main.o syscall.o strace.o mmap.o signal.o thunk.o \
elfload.o linuxload.o uaccess.o gdbstub.o cpu-uname.o \
qemu-malloc.o user-exec.o $(oslib-obj-y)
obj-$(TARGET_HAS_BFLT) += flatload.o
obj-$(TARGET_I386) += vm86.o
obj-i386-y += ioport-user.o
nwfpe-obj-y = fpa11.o fpa11_cpdo.o fpa11_cpdt.o fpa11_cprt.o fpopcode.o
nwfpe-obj-y += single_cpdo.o double_cpdo.o extended_cpdo.o
obj-arm-y += $(addprefix nwfpe/, $(nwfpe-obj-y))
obj-arm-y += arm-semi.o
obj-m68k-y += m68k-sim.o m68k-semi.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_LINUX_USER
#########################################################
# Darwin user emulator target
ifdef CONFIG_DARWIN_USER
$(call set-vpath, $(SRC_PATH)/darwin-user)
QEMU_CFLAGS+=-I$(SRC_PATH)/darwin-user -I$(SRC_PATH)/darwin-user/$(TARGET_ARCH)
# Leave some space for the regular program loading zone
LDFLAGS+=-Wl,-segaddr,__STD_PROG_ZONE,0x1000 -image_base 0x0e000000
LIBS+=-lmx
obj-y = main.o commpage.o machload.o mmap.o signal.o syscall.o thunk.o \
gdbstub.o user-exec.o
obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_DARWIN_USER
#########################################################
# BSD user emulator target
ifdef CONFIG_BSD_USER
$(call set-vpath, $(SRC_PATH)/bsd-user)
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ARCH)
obj-y += bsd-user/
obj-y += gdbstub.o user-exec.o $(oslib-obj-y)
obj-y = main.o bsdload.o elfload.o mmap.o signal.o strace.o syscall.o \
gdbstub.o uaccess.o user-exec.o
obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_BSD_USER
#########################################################
# System emulator target
ifdef CONFIG_SOFTMMU
CONFIG_NO_PCI = $(if $(subst n,,$(CONFIG_PCI)),n,y)
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
CONFIG_NO_GET_MEMORY_MAPPING = $(if $(subst n,,$(CONFIG_HAVE_GET_MEMORY_MAPPING)),n,y)
CONFIG_NO_CORE_DUMP = $(if $(subst n,,$(CONFIG_HAVE_CORE_DUMP)),n,y)
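# The $(if $(subst n,,X),n,y) expressions above just invert a y/n flag.
# Worked through for CONFIG_PCI: with CONFIG_PCI=y, $(subst n,,y) yields "y"
# (non-empty), so the $(if) picks "n" and CONFIG_NO_PCI=n; with CONFIG_PCI=n
# (or unset), the $(subst) result is empty and CONFIG_NO_PCI=y.  The
# CONFIG_NO_* flags are then used below to pull in stub objects (pci-stub.o,
# kvm-stub.o, xen-stub.o, ...) when the real support is compiled out.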
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o
obj-y += hw/
obj-$(CONFIG_KVM) += kvm-all.o
obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o
# virtio has to be here due to weird dependency between PCI and virtio-net.
# need to fix this properly
obj-$(CONFIG_NO_PCI) += pci-stub.o
obj-$(CONFIG_VIRTIO) += virtio.o virtio-blk.o virtio-balloon.o virtio-net.o virtio-serial-bus.o
obj-y += vhost_net.o
obj-$(CONFIG_VHOST_NET) += vhost.o
obj-$(CONFIG_REALLY_VIRTFS) += 9pfs/virtio-9p-device.o
obj-y += rwhandler.o
obj-$(CONFIG_KVM) += kvm.o kvm-all.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-y += memory.o savevm.o cputlb.o
obj-$(CONFIG_HAVE_GET_MEMORY_MAPPING) += memory_mapping.o
obj-$(CONFIG_HAVE_CORE_DUMP) += dump.o
obj-$(CONFIG_NO_GET_MEMORY_MAPPING) += memory_mapping-stub.o
obj-$(CONFIG_NO_CORE_DUMP) += dump-stub.o
LIBS+=-lz
QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
QEMU_CFLAGS += $(VNC_SASL_CFLAGS)
QEMU_CFLAGS += $(VNC_JPEG_CFLAGS)
QEMU_CFLAGS += $(VNC_PNG_CFLAGS)
QEMU_CFLAGS += $(GLIB_CFLAGS)
# xen support
obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
obj-$(CONFIG_XEN) += xen-all.o xen_machine_pv.o xen_domainbuild.o xen-mapcache.o
obj-$(CONFIG_NO_XEN) += xen-stub.o
# Hardware support
ifeq ($(TARGET_ARCH), sparc64)
obj-y += hw/sparc64/
else
obj-y += hw/$(TARGET_BASE_ARCH)/
obj-i386-$(CONFIG_XEN) += xen_platform.o
# Inter-VM PCI shared memory
CONFIG_IVSHMEM =
ifeq ($(CONFIG_KVM), y)
ifeq ($(CONFIG_PCI), y)
CONFIG_IVSHMEM = y
endif
endif
obj-$(CONFIG_IVSHMEM) += ivshmem.o
# Hardware support
obj-i386-y += vga.o
obj-i386-y += mc146818rtc.o i8259.o pc.o
obj-i386-y += cirrus_vga.o sga.o apic.o ioapic.o piix_pci.o
obj-i386-y += vmport.o
obj-i386-y += device-hotplug.o pci-hotplug.o smbios.o wdt_ib700.o
obj-i386-y += debugcon.o multiboot.o
obj-i386-y += pc_piix.o
obj-i386-$(CONFIG_KVM) += kvmclock.o
obj-i386-$(CONFIG_SPICE) += qxl.o qxl-logger.o qxl-render.o
# shared objects
obj-ppc-y = ppc.o
obj-ppc-y += vga.o
# PREP target
obj-ppc-y += i8259.o mc146818rtc.o
obj-ppc-y += ppc_prep.o
# OldWorld PowerMac
obj-ppc-y += ppc_oldworld.o
# NewWorld PowerMac
obj-ppc-y += ppc_newworld.o
# IBM pSeries (sPAPR)
ifeq ($(CONFIG_FDT)$(TARGET_PPC64),yy)
obj-ppc-y += spapr.o spapr_hcall.o spapr_rtas.o spapr_vio.o
obj-ppc-y += xics.o spapr_vty.o spapr_llan.o spapr_vscsi.o
endif
# PowerPC 4xx boards
obj-ppc-y += ppc4xx_devs.o ppc4xx_pci.o ppc405_uc.o ppc405_boards.o
obj-ppc-y += ppc440.o ppc440_bamboo.o
# PowerPC E500 boards
obj-ppc-y += ppce500_mpc8544ds.o mpc8544_guts.o
# PowerPC 440 Xilinx ML507 reference board.
obj-ppc-y += virtex_ml507.o
obj-ppc-$(CONFIG_KVM) += kvm_ppc.o
obj-ppc-$(CONFIG_FDT) += device_tree.o
# Xilinx PPC peripherals
obj-ppc-y += xilinx_intc.o
obj-ppc-y += xilinx_timer.o
obj-ppc-y += xilinx_uartlite.o
obj-ppc-y += xilinx_ethlite.o
# LM32 boards
obj-lm32-y += lm32_boards.o
obj-lm32-y += milkymist.o
# LM32 peripherals
obj-lm32-y += lm32_pic.o
obj-lm32-y += lm32_juart.o
obj-lm32-y += lm32_timer.o
obj-lm32-y += lm32_uart.o
obj-lm32-y += lm32_sys.o
obj-lm32-y += milkymist-ac97.o
obj-lm32-y += milkymist-hpdmc.o
obj-lm32-y += milkymist-memcard.o
obj-lm32-y += milkymist-minimac2.o
obj-lm32-y += milkymist-pfpu.o
obj-lm32-y += milkymist-softusb.o
obj-lm32-y += milkymist-sysctl.o
obj-lm32-$(CONFIG_OPENGL) += milkymist-tmu2.o
obj-lm32-y += milkymist-uart.o
obj-lm32-y += milkymist-vgafb.o
obj-lm32-y += framebuffer.o
obj-mips-y = mips_r4k.o mips_jazz.o mips_malta.o mips_mipssim.o
obj-mips-y += mips_addr.o mips_timer.o mips_int.o
obj-mips-y += vga.o i8259.o
obj-mips-y += g364fb.o jazz_led.o
obj-mips-y += gt64xxx.o mc146818rtc.o
obj-mips-y += cirrus_vga.o
obj-mips-$(CONFIG_FULONG) += bonito.o vt82c686.o mips_fulong2e.o
obj-microblaze-y = petalogix_s3adsp1800_mmu.o
obj-microblaze-y += petalogix_ml605_mmu.o
obj-microblaze-y += microblaze_pic_cpu.o
obj-microblaze-y += xilinx_intc.o
obj-microblaze-y += xilinx_timer.o
obj-microblaze-y += xilinx_uartlite.o
obj-microblaze-y += xilinx_ethlite.o
obj-microblaze-y += xilinx_axidma.o
obj-microblaze-y += xilinx_axienet.o
obj-microblaze-$(CONFIG_FDT) += device_tree.o
# Boards
obj-cris-y = cris_pic_cpu.o
obj-cris-y += cris-boot.o
obj-cris-y += etraxfs.o axis_dev88.o
obj-cris-y += axis_dev88.o
# IO blocks
obj-cris-y += etraxfs_dma.o
obj-cris-y += etraxfs_pic.o
obj-cris-y += etraxfs_eth.o
obj-cris-y += etraxfs_timer.o
obj-cris-y += etraxfs_ser.o
ifeq ($(TARGET_ARCH), sparc64)
obj-sparc-y = sun4u.o apb_pci.o
obj-sparc-y += vga.o
obj-sparc-y += mc146818rtc.o
obj-sparc-y += cirrus_vga.o
else
obj-sparc-y = sun4m.o lance.o tcx.o sun4m_iommu.o slavio_intctl.o
obj-sparc-y += slavio_timer.o slavio_misc.o sparc32_dma.o
obj-sparc-y += cs4231.o eccmemctl.o sbi.o sun4c_intctl.o leon3.o
# GRLIB
obj-sparc-y += grlib_gptimer.o grlib_irqmp.o grlib_apbuart.o
endif
obj-arm-y = integratorcp.o versatilepb.o arm_pic.o arm_timer.o
obj-arm-y += arm_boot.o pl011.o pl031.o pl050.o pl080.o pl110.o pl181.o pl190.o
obj-arm-y += versatile_pci.o
obj-arm-y += realview_gic.o realview.o arm_sysctl.o arm11mpcore.o a9mpcore.o
obj-arm-y += armv7m.o armv7m_nvic.o stellaris.o pl022.o stellaris_enet.o
obj-arm-y += pl061.o
obj-arm-y += arm-semi.o
obj-arm-y += pxa2xx.o pxa2xx_pic.o pxa2xx_gpio.o pxa2xx_timer.o pxa2xx_dma.o
obj-arm-y += pxa2xx_lcd.o pxa2xx_mmci.o pxa2xx_pcmcia.o pxa2xx_keypad.o
obj-arm-y += gumstix.o
obj-arm-y += zaurus.o ide/microdrive.o spitz.o tosa.o tc6393xb.o
obj-arm-y += omap1.o omap_lcdc.o omap_dma.o omap_clk.o omap_mmc.o omap_i2c.o \
omap_gpio.o omap_intc.o omap_uart.o
obj-arm-y += omap2.o omap_dss.o soc_dma.o omap_gptimer.o omap_synctimer.o \
omap_gpmc.o omap_sdrc.o omap_spi.o omap_tap.o omap_l4.o
obj-arm-y += omap_sx1.o palm.o tsc210x.o
obj-arm-y += nseries.o blizzard.o onenand.o vga.o cbus.o tusb6010.o usb-musb.o
obj-arm-y += mst_fpga.o mainstone.o
obj-arm-y += musicpal.o bitbang_i2c.o marvell_88w8618_audio.o
obj-arm-y += framebuffer.o
obj-arm-y += syborg.o syborg_fb.o syborg_interrupt.o syborg_keyboard.o
obj-arm-y += syborg_serial.o syborg_timer.o syborg_pointer.o syborg_rtc.o
obj-arm-y += syborg_virtio.o
obj-arm-y += vexpress.o
obj-arm-y += strongarm.o
obj-arm-y += collie.o
obj-sh4-y = shix.o r2d.o sh7750.o sh7750_regnames.o tc58128.o
obj-sh4-y += sh_timer.o sh_serial.o sh_intc.o sh_pci.o sm501.o
obj-sh4-y += ide/mmio.o
obj-m68k-y = an5206.o mcf5206.o mcf_uart.o mcf_intc.o mcf5208.o mcf_fec.o
obj-m68k-y += m68k-semi.o dummy_m68k.o
obj-s390x-y = s390-virtio-bus.o s390-virtio.o
obj-alpha-y = i8259.o mc146818rtc.o
obj-alpha-y += vga.o cirrus_vga.o
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
monitor.o: hmp-commands.h qmp-commands.h
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(common-obj-y))
obj-y += $(addprefix ../libdis/, $(libdis-y))
obj-y += $(libobj-y)
obj-y += $(addprefix $(HWDIR)/, $(hw-obj-y))
endif # CONFIG_SOFTMMU
nested-vars += obj-y
ifndef CONFIG_LINUX_USER
# libcacard needs qemu-thread support and, besides, is only needed by devices,
# so it is not required for linux-user targets
obj-$(CONFIG_SMARTCARD_NSS) += $(addprefix ../libcacard/, $(libcacard-y))
endif # CONFIG_LINUX_USER
# This resolves all nested paths, so it must come last
include $(SRC_PATH)/Makefile.objs
obj-y += $(addprefix ../, $(trace-obj-y))
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
all-obj-y = $(obj-y)
all-obj-y += $(addprefix ../, $(universal-obj-y))
$(QEMU_PROG): $(obj-y) $(obj-$(TARGET_BASE_ARCH)-y)
$(call LINK,$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y))
ifdef CONFIG_SOFTMMU
all-obj-y += $(addprefix ../, $(common-obj-y))
all-obj-y += $(addprefix ../libdis/, $(libdis-y))
all-obj-y += $(addprefix $(HWDIR)/, $(hw-obj-y))
all-obj-y += $(addprefix ../, $(trace-obj-y))
else
all-obj-y += $(addprefix ../libuser/, $(user-obj-y))
all-obj-y += $(addprefix ../libdis-user/, $(libdis-y))
endif #CONFIG_LINUX_USER
ifdef QEMU_PROGW
# The linker builds a windows executable. Also build a console executable.
$(QEMU_PROGW): $(all-obj-y)
$(call LINK,$^)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
else
$(QEMU_PROG): $(all-obj-y)
$(call LINK,$^)
endif
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
@@ -181,14 +404,14 @@ gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
qmp-commands.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean:
rm -f *.a *~ $(PROGS)
rm -f $(shell find . -name '*.[od]')
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.o *.a *~ $(PROGS) nwfpe/*.o fpu/*.o
rm -f *.d */*.d tcg/*.o ide/*.o 9pfs/*.o
rm -f hmp-commands.h qmp-commands.h gdbstub-xml.c
ifdef CONFIG_SYSTEMTAP_TRACE
rm -f *.stp
endif
@@ -199,10 +422,10 @@ ifneq ($(STRIP),)
$(STRIP) $(patsubst %,"$(DESTDIR)$(bindir)/%",$(PROGS))
endif
endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
ifdef CONFIG_SYSTEMTAP_TRACE
$(INSTALL_DIR) "$(DESTDIR)$(datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(datadir)/../systemtap/tapset"
endif
GENERATED_HEADERS += config-target.h
Makefile: $(GENERATED_HEADERS)
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)


@@ -9,8 +9,6 @@ include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS += -DCONFIG_USER_ONLY
include $(SRC_PATH)/Makefile.objs
@@ -19,6 +17,7 @@ all: $(user-obj-y)
@true
clean:
for d in . trace; do \
rm -f $$d/*.o $$d/*.d $$d/*.a $$d/*~; \
done
rm -f *.o *.d *.a *~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

QMP/qmp

@@ -1,126 +0,0 @@
#!/usr/bin/python
#
# QMP command line tool
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPLv2 or later.
# See the COPYING file in the top-level directory.
import sys, os
from qmp import QEMUMonitorProtocol
def print_response(rsp, prefix=[]):
if type(rsp) == list:
i = 0
for item in rsp:
if prefix == []:
prefix = ['item']
print_response(item, prefix[:-1] + ['%s[%d]' % (prefix[-1], i)])
i += 1
elif type(rsp) == dict:
for key in rsp.keys():
print_response(rsp[key], prefix + [key])
else:
if len(prefix):
print '%s: %s' % ('.'.join(prefix), rsp)
else:
print '%s' % (rsp)
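# print_response() above flattens a nested QMP reply into "dotted.path: value"
# lines; list elements get an "[index]" suffix on the last path component.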
def main(args):
path = None
# Use QMP_PATH if it's set
if os.environ.has_key('QMP_PATH'):
path = os.environ['QMP_PATH']
while len(args):
arg = args[0]
if arg.startswith('--'):
arg = arg[2:]
if arg.find('=') == -1:
value = True
else:
arg, value = arg.split('=', 1)
if arg in ['path']:
if type(value) == str:
path = value
elif arg in ['help']:
os.execlp('man', 'man', 'qmp')
else:
print 'Unknown argument "%s"' % arg
args = args[1:]
else:
break
if not path:
print "QMP path isn't set, use --path=qmp-monitor-address or set QMP_PATH"
return 1
if len(args):
command, args = args[0], args[1:]
else:
print 'No command found'
print 'Usage: "qmp [--path=qmp-monitor-address] qmp-cmd arguments"'
return 1
if command in ['help']:
os.execlp('man', 'man', 'qmp')
srv = QEMUMonitorProtocol(path)
srv.connect()
def do_command(srv, cmd, **kwds):
rsp = srv.cmd(cmd, kwds)
if rsp.has_key('error'):
raise Exception(rsp['error']['desc'])
return rsp['return']
commands = map(lambda x: x['name'], do_command(srv, 'query-commands'))
srv.close()
if command not in commands:
fullcmd = 'qmp-%s' % command
try:
os.environ['QMP_PATH'] = path
os.execvp(fullcmd, [fullcmd] + args)
except OSError, (errno, msg):
if errno == 2:
print 'Command "%s" not found.' % (fullcmd)
return 1
raise
return 0
srv = QEMUMonitorProtocol(path)
srv.connect()
arguments = {}
for arg in args:
if not arg.startswith('--'):
print 'Unknown argument "%s"' % arg
return 1
arg = arg[2:]
if arg.find('=') == -1:
value = True
else:
arg, value = arg.split('=', 1)
if arg in ['help']:
os.execlp('man', 'man', 'qmp-%s' % command)
return 1
arguments[arg] = value
rsp = do_command(srv, command, **arguments)
print_response(rsp)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))


@@ -1,23 +1,6 @@
QEMU Monitor Protocol Events
============================
BALLOON_CHANGE
--------------
Emitted when the guest changes the actual BALLOON level. This
value is equivalent to the 'actual' field returned by the
'query-balloon' command.
Data:
- "actual": actual level of the guest memory balloon in bytes (json-number)
Example:
{ "event": "BALLOON_CHANGE",
"data": { "actual": 944766976 },
"timestamp": { "seconds": 1267020223, "microseconds": 435656 } }
BLOCK_IO_ERROR
--------------
@@ -43,75 +26,6 @@ Example:
Note: If action is "stop", a STOP event will eventually follow the
BLOCK_IO_ERROR event.
BLOCK_JOB_CANCELLED
-------------------
Emitted when a block job has been cancelled.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
Example:
{ "event": "BLOCK_JOB_CANCELLED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 134217728,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
BLOCK_JOB_COMPLETED
-------------------
Emitted when a block job has completed.
Data:
- "type": Job type ("stream" for image streaming, json-string)
- "device": Device name (json-string)
- "len": Maximum progress value (json-int)
- "offset": Current progress value (json-int)
On success this is equal to len.
On failure this is less than len.
- "speed": Rate limit, bytes per second (json-int)
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that streaming
has failed and clients should not try to interpret the error
string.
Example:
{ "event": "BLOCK_JOB_COMPLETED",
"data": { "type": "stream", "device": "virtio-disk0",
"len": 10737418240, "offset": 10737418240,
"speed": 0 },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
DEVICE_TRAY_MOVED
-----------------
It's emitted whenever the tray of a removable device is moved by the guest
or by HMP/QMP commands.
Data:
- "device": device name (json-string)
- "tray-open": true if the tray has been opened or false if it has been closed
(json-bool)
{ "event": "DEVICE_TRAY_MOVED",
"data": { "device": "ide1-cd0",
"tray-open": true
},
"timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
RESET
-----
@@ -166,68 +80,6 @@ Example:
Note: If the command-line option "-no-shutdown" has been specified, a STOP
event will eventually follow the SHUTDOWN event.
SPICE_CONNECTED, SPICE_DISCONNECTED
-----------------------------------
Emitted when a SPICE client connects or disconnects.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
Example:
{ "timestamp": {"seconds": 1290688046, "microseconds": 388707},
"event": "SPICE_CONNECTED",
"data": {
"server": { "port": "5920", "family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "52873", "family": "ipv4", "host": "127.0.0.1"}
}}
SPICE_INITIALIZED
-----------------
Emitted after the initial handshake and authentication take place (if any)
and the SPICE channel is up and running.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "connection-id": spice connection id. All channels with the same id
belong to the same spice session (json-int)
- "channel-type": channel type. "1" is the main control channel, filter for
this one if you want track spice sessions only (json-int)
- "channel-id": channel id. Usually "0", might be different needed when
multiple channels of the same type exist, such as multiple
display channels in a multihead setup (json-int)
- "tls": whevener the channel is encrypted (json-bool)
Example:
{ "timestamp": {"seconds": 1290688046, "microseconds": 417172},
"event": "SPICE_INITIALIZED",
"data": {"server": {"auth": "spice", "port": "5921",
"family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "49004", "family": "ipv4", "channel-type": 3,
"connection-id": 1804289383, "host": "127.0.0.1",
"channel-id": 0, "tls": true}
}}
STOP
----
@@ -240,32 +92,6 @@ Example:
{ "event": "STOP",
"timestamp": { "seconds": 1267041730, "microseconds": 281295 } }
SUSPEND
-------
Emitted when guest enters S3 state.
Data: None.
Example:
{ "event": "SUSPEND",
"timestamp": { "seconds": 1344456160, "microseconds": 309119 } }
SUSPEND_DISK
------------
Emitted when the guest makes a request to enter S4 state.
Data: None.
Example:
{ "event": "SUSPEND_DISK",
"timestamp": { "seconds": 1344456160, "microseconds": 309119 } }
Note: QEMU shuts down when entering S4 state.
VNC_CONNECTED
-------------
@@ -300,7 +126,7 @@ the authentication ID is not provided.
VNC_DISCONNECTED
----------------
Emitted when the connection is closed.
Emitted when the conection is closed.
Data:
@@ -356,17 +182,69 @@ Example:
"host": "127.0.0.1", "sasl_username": "luiz" } },
"timestamp": { "seconds": 1263475302, "microseconds": 150772 } }
WAKEUP
------
SPICE_CONNECTED, SPICE_DISCONNECTED
-----------------------------------
Emitted when the guest has woken up from S3 and is running.
Emitted when a SPICE client connects or disconnects.
Data: None.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
Example:
{ "event": "WATCHDOG",
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
{ "timestamp": {"seconds": 1290688046, "microseconds": 388707},
"event": "SPICE_CONNECTED",
"data": {
"server": { "port": "5920", "family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "52873", "family": "ipv4", "host": "127.0.0.1"}
}}
SPICE_INITIALIZED
-----------------
Emitted after the initial handshake and authentication take place (if any)
and the SPICE channel is up and running.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "port": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "connection-id": spice connection id. All channels with the same id
belong to the same spice session (json-int)
- "channel-type": channel type. "1" is the main control channel, filter for
this one if you want track spice sessions only (json-int)
- "channel-id": channel id. Usually "0", might be different needed when
multiple channels of the same type exist, such as multiple
display channels in a multihead setup (json-int)
- "tls": whevener the channel is encrypted (json-bool)
Example:
{ "timestamp": {"seconds": 1290688046, "microseconds": 417172},
"event": "SPICE_INITIALIZED",
"data": {"server": {"auth": "spice", "port": "5921",
"family": "ipv4", "host": "127.0.0.1"},
"client": {"port": "49004", "family": "ipv4", "channel-type": 3,
"connection-id": 1804289383, "host": "127.0.0.1",
"channel-id": 0, "tls": true}
}}
WATCHDOG
--------


@@ -106,11 +106,14 @@ completed because of an error condition.
The format is:
{ "error": { "class": json-string, "desc": json-string }, "id": json-value }
{ "error": { "class": json-string, "data": json-object, "desc": json-string },
"id": json-value }
Where,
- The "class" member contains the error class name (eg. "GenericError")
- The "class" member contains the error class name (eg. "ServiceUnavailable")
- The "data" member contains specific error data and is defined in a
per-command basis, it will be an empty json-object if the error has no data
- The "desc" member is a human-readable error message. Clients should
not attempt to parse this message.
- The "id" member contains the transaction identification associated with
@@ -170,7 +173,8 @@ S: {"return": {"enabled": true, "present": true}, "id": "example"}
------------------
C: { "execute": }
S: {"error": {"class": "GenericError", "desc": "Invalid JSON syntax" } }
S: {"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data":
{}}}
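As a rough illustration of how a client can consume this error format (a
sketch only, written in the style of the Python 2 QEMUMonitorProtocol helper
shipped in the QMP/ directory; the socket path below is hypothetical):

    from qmp import QEMUMonitorProtocol

    srv = QEMUMonitorProtocol('/tmp/qmp-sock')      # hypothetical socket path
    srv.connect()
    rsp = srv.cmd('query-balloon', {})              # parsed json-object reply
    if rsp.has_key('error'):
        # "class" is the machine-readable part; "desc" is for humans only and
        # should not be parsed by the client.
        raise Exception('%s: %s' % (rsp['error']['class'], rsp['error']['desc']))
    print rsp['return']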
3.5 Powerdown event
-------------------
@@ -205,27 +209,13 @@ incompatible way are disabled by default and will be advertised by the
capabilities array (section '2.2 Server Greeting'). Thus, Clients can check
that array and enable the capabilities they support.
The QMP Server performs a type check on the arguments to a command. It
generates an error if a value does not have the expected type for its
key, or if it does not understand a key that the Client included. The
strictness of the Server catches wrong assumptions Clients make about
the Server's schema. Clients can assume that, when such validation
errors occur, they will be reported before the command generates any
side effect.
Additionally, Clients must not assume any particular:
However, Clients must not assume any particular:
- Length of json-arrays
- Size of json-objects; in particular, future versions of QEMU may add
new keys and Clients should be able to ignore them.
- Size of json-objects or length of json-arrays
- Order of json-object members or json-array elements
- Amount of errors generated by a command, that is, new errors can be added
to any existing command in newer versions of the Server
Of course, the Server does guarantee to send valid JSON. But apart from
this, a Client should be "conservative in what they send, and liberal in
what they accept".
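A correspondingly tolerant client loop, continuing the sketch above
(handle_stop() is a made-up client hook):

    # Read only the keys this client knows about; any extra keys a newer
    # Server may add are simply ignored, and no member order is assumed.
    for event in srv.get_events(wait=True):
        if event['event'] == 'STOP':
            handle_stop(event.get('timestamp', {}))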
6. Downstream extension of QMP
------------------------------


@@ -128,12 +128,6 @@ class QEMUMonitorProtocol:
qmp_cmd['id'] = id
return self.cmd_obj(qmp_cmd)
def command(self, cmd, **kwds):
ret = self.cmd(cmd, kwds)
if ret.has_key('error'):
raise Exception(ret['error']['desc'])
return ret['return']
def get_events(self, wait=False):
"""
Get a list of available QMP events.


@@ -1,138 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2012
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import fuse, stat
from fuse import Fuse
import os, posix
from errno import *
from qmp import QEMUMonitorProtocol
fuse.fuse_python_api = (0, 2)
class QOMFS(Fuse):
def __init__(self, qmp, *args, **kwds):
Fuse.__init__(self, *args, **kwds)
self.qmp = qmp
self.qmp.connect()
self.ino_map = {}
self.ino_count = 1
def get_ino(self, path):
if self.ino_map.has_key(path):
return self.ino_map[path]
self.ino_map[path] = self.ino_count
self.ino_count += 1
return self.ino_map[path]
def is_object(self, path):
try:
items = self.qmp.command('qom-list', path=path)
return True
except:
return False
def is_property(self, path):
try:
path, prop = path.rsplit('/', 1)
for item in self.qmp.command('qom-list', path=path):
if item['name'] == prop:
return True
return False
except:
return False
def is_link(self, path):
try:
path, prop = path.rsplit('/', 1)
for item in self.qmp.command('qom-list', path=path):
if item['name'] == prop:
if item['type'].startswith('link<'):
return True
return False
return False
except:
return False
def read(self, path, length, offset):
if not self.is_property(path):
return -ENOENT
path, prop = path.rsplit('/', 1)
try:
data = str(self.qmp.command('qom-get', path=path, property=prop))
data += '\n' # make values shell friendly
except:
return -EPERM
if offset > len(data):
return ''
return str(data[offset:][:length])
def readlink(self, path):
if not self.is_link(path):
return False
path, prop = path.rsplit('/', 1)
prefix = '/'.join(['..'] * (len(path.split('/')) - 1))
return prefix + str(self.qmp.command('qom-get', path=path,
property=prop))
def getattr(self, path):
if self.is_link(path):
value = posix.stat_result((0755 | stat.S_IFLNK,
self.get_ino(path),
0,
2,
1000,
1000,
4096,
0,
0,
0))
elif self.is_object(path):
value = posix.stat_result((0755 | stat.S_IFDIR,
self.get_ino(path),
0,
2,
1000,
1000,
4096,
0,
0,
0))
elif self.is_property(path):
value = posix.stat_result((0644 | stat.S_IFREG,
self.get_ino(path),
0,
1,
1000,
1000,
4096,
0,
0,
0))
else:
value = -ENOENT
return value
def readdir(self, path, offset):
yield fuse.Direntry('.')
yield fuse.Direntry('..')
for item in self.qmp.command('qom-list', path=path):
yield fuse.Direntry(str(item['name']))
if __name__ == '__main__':
import sys, os
fs = QOMFS(QEMUMonitorProtocol(os.environ['QMP_SOCKET']))
fs.main(sys.argv)


@@ -1,67 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import sys
import os
from qmp import QEMUMonitorProtocol
cmd, args = sys.argv[0], sys.argv[1:]
socket_path = None
path = None
prop = None
def usage():
return '''environment variables:
QMP_SOCKET=<path | addr:port>
usage:
%s [-h] [-s <QMP socket path | addr:port>] <path>.<property>
''' % cmd
def usage_error(error_msg = "unspecified error"):
sys.stderr.write('%s\nERROR: %s\n' % (usage(), error_msg))
exit(1)
if len(args) > 0:
if args[0] == "-h":
print usage()
exit(0);
elif args[0] == "-s":
try:
socket_path = args[1]
except:
usage_error("missing argument: QMP socket path or address");
args = args[2:]
if not socket_path:
if os.environ.has_key('QMP_SOCKET'):
socket_path = os.environ['QMP_SOCKET']
else:
usage_error("no QMP socket path or address given");
if len(args) > 0:
try:
path, prop = args[0].rsplit('.', 1)
except:
usage_error("invalid format for path/property/value")
else:
usage_error("not enough arguments")
srv = QEMUMonitorProtocol(socket_path)
srv.connect()
rsp = srv.command('qom-get', path=path, property=prop)
if type(rsp) == dict:
for i in rsp.keys():
print '%s: %s' % (i, rsp[i])
else:
print rsp


@@ -1,64 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import sys
import os
from qmp import QEMUMonitorProtocol
cmd, args = sys.argv[0], sys.argv[1:]
socket_path = None
path = None
prop = None
def usage():
return '''environment variables:
QMP_SOCKET=<path | addr:port>
usage:
%s [-h] [-s <QMP socket path | addr:port>] [<path>]
''' % cmd
def usage_error(error_msg = "unspecified error"):
sys.stderr.write('%s\nERROR: %s\n' % (usage(), error_msg))
exit(1)
if len(args) > 0:
if args[0] == "-h":
print usage()
exit(0);
elif args[0] == "-s":
try:
socket_path = args[1]
except:
usage_error("missing argument: QMP socket path or address");
args = args[2:]
if not socket_path:
if os.environ.has_key('QMP_SOCKET'):
socket_path = os.environ['QMP_SOCKET']
else:
usage_error("no QMP socket path or address given");
srv = QEMUMonitorProtocol(socket_path)
srv.connect()
if len(args) == 0:
print '/'
sys.exit(0)
for item in srv.command('qom-list', path=args[0]):
if item['type'].startswith('child<'):
print '%s/' % item['name']
elif item['type'].startswith('link<'):
print '@%s/' % item['name']
else:
print '%s' % item['name']

View File

@@ -1,64 +0,0 @@
#!/usr/bin/python
##
# QEMU Object Model test tools
#
# Copyright IBM, Corp. 2011
#
# Authors:
# Anthony Liguori <aliguori@us.ibm.com>
#
# This work is licensed under the terms of the GNU GPL, version 2 or later. See
# the COPYING file in the top-level directory.
##
import sys
import os
from qmp import QEMUMonitorProtocol
cmd, args = sys.argv[0], sys.argv[1:]
socket_path = None
path = None
prop = None
value = None
def usage():
return '''environment variables:
QMP_SOCKET=<path | addr:port>
usage:
%s [-h] [-s <QMP socket path | addr:port>] <path>.<property> <value>
''' % cmd
def usage_error(error_msg = "unspecified error"):
sys.stderr.write('%s\nERROR: %s\n' % (usage(), error_msg))
exit(1)
if len(args) > 0:
if args[0] == "-h":
print usage()
exit(0);
elif args[0] == "-s":
try:
socket_path = args[1]
except:
usage_error("missing argument: QMP socket path or address");
args = args[2:]
if not socket_path:
if os.environ.has_key('QMP_SOCKET'):
socket_path = os.environ['QMP_SOCKET']
else:
usage_error("no QMP socket path or address given");
if len(args) > 1:
try:
path, prop = args[0].rsplit('.', 1)
except:
usage_error("invalid format for path/property/value")
value = args[1]
else:
usage_error("not enough arguments")
srv = QEMUMonitorProtocol(socket_path)
srv.connect()
print srv.command('qom-set', path=path, property=prop, value=value)

README

@@ -1,3 +1,3 @@
Read the documentation in qemu-doc.html or on http://wiki.qemu.org
Read the documentation in qemu-doc.html.
- QEMU team
Fabrice Bellard.


@@ -1 +1 @@
1.2.2
0.15.0


@@ -151,7 +151,7 @@ struct external_lineno {
#define E_FILNMLEN 14 /* # characters in a file name */
#define E_DIMNUM 4 /* # array dimensions in auxiliary entry */
struct QEMU_PACKED external_syment
struct __attribute__((packed)) external_syment
{
union {
char e_name[E_SYMNMLEN];

acl.c

@@ -55,8 +55,8 @@ qemu_acl *qemu_acl_init(const char *aclname)
if (acl)
return acl;
acl = g_malloc(sizeof(*acl));
acl->aclname = g_strdup(aclname);
acl = qemu_malloc(sizeof(*acl));
acl->aclname = qemu_strdup(aclname);
/* Deny by default, so there is no window of "open
* access" between QEMU starting, and the user setting
* up ACLs in the monitor */
@@ -65,7 +65,7 @@ qemu_acl *qemu_acl_init(const char *aclname)
acl->nentries = 0;
QTAILQ_INIT(&acl->entries);
acls = g_realloc(acls, sizeof(*acls) * (nacls +1));
acls = qemu_realloc(acls, sizeof(*acls) * (nacls +1));
acls[nacls] = acl;
nacls++;
@@ -95,13 +95,13 @@ int qemu_acl_party_is_allowed(qemu_acl *acl,
void qemu_acl_reset(qemu_acl *acl)
{
qemu_acl_entry *entry, *next_entry;
qemu_acl_entry *entry;
/* Put back to deny by default, so there is no window
* of "open access" while the user re-initializes the
* access control list */
acl->defaultDeny = 1;
QTAILQ_FOREACH_SAFE(entry, &acl->entries, next, next_entry) {
QTAILQ_FOREACH(entry, &acl->entries, next) {
QTAILQ_REMOVE(&acl->entries, entry, next);
free(entry->match);
free(entry);
@@ -116,8 +116,8 @@ int qemu_acl_append(qemu_acl *acl,
{
qemu_acl_entry *entry;
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
entry = qemu_malloc(sizeof(*entry));
entry->match = qemu_strdup(match);
entry->deny = deny;
QTAILQ_INSERT_TAIL(&acl->entries, entry, next);
@@ -142,8 +142,8 @@ int qemu_acl_insert(qemu_acl *acl,
return qemu_acl_append(acl, deny, match);
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
entry = qemu_malloc(sizeof(*entry));
entry->match = qemu_strdup(match);
entry->deny = deny;
QTAILQ_FOREACH(tmp, &acl->entries, next) {

aio.c

@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
@@ -35,6 +33,7 @@ struct AioHandler
IOHandler *io_read;
IOHandler *io_write;
AioFlushHandler *io_flush;
AioProcessQueue *io_process_queue;
int deleted;
void *opaque;
QLIST_ENTRY(AioHandler) node;
@@ -57,6 +56,7 @@ int qemu_aio_set_fd_handler(int fd,
IOHandler *io_read,
IOHandler *io_write,
AioFlushHandler *io_flush,
AioProcessQueue *io_process_queue,
void *opaque)
{
AioHandler *node;
@@ -75,13 +75,13 @@ int qemu_aio_set_fd_handler(int fd,
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
qemu_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_malloc0(sizeof(AioHandler));
node = qemu_mallocz(sizeof(AioHandler));
node->fd = fd;
QLIST_INSERT_HEAD(&aio_handlers, node, node);
}
@@ -89,6 +89,7 @@ int qemu_aio_set_fd_handler(int fd,
node->io_read = io_read;
node->io_write = io_write;
node->io_flush = io_flush;
node->io_process_queue = io_process_queue;
node->opaque = opaque;
}
@@ -99,96 +100,131 @@ int qemu_aio_set_fd_handler(int fd,
void qemu_aio_flush(void)
{
while (qemu_aio_wait());
AioHandler *node;
int ret;
do {
ret = 0;
/*
* If there are pending emulated aio start them now so flush
* will be able to return 1.
*/
qemu_aio_wait();
QLIST_FOREACH(node, &aio_handlers, node) {
if (node->io_flush) {
ret |= node->io_flush(node->opaque);
}
}
} while (qemu_bh_poll() || ret > 0);
}
bool qemu_aio_wait(void)
int qemu_aio_process_queue(void)
{
AioHandler *node;
fd_set rdfds, wrfds;
int max_fd = -1;
int ret;
bool busy;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for qemu_aio_wait loops).
*/
if (qemu_bh_poll()) {
return true;
}
int ret = 0;
walking_handlers = 1;
FD_ZERO(&rdfds);
FD_ZERO(&wrfds);
/* fill fd sets */
busy = false;
QLIST_FOREACH(node, &aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (node->io_flush) {
if (node->io_flush(node->opaque) == 0) {
continue;
if (node->io_process_queue) {
if (node->io_process_queue(node->opaque)) {
ret = 1;
}
busy = true;
}
if (!node->deleted && node->io_read) {
FD_SET(node->fd, &rdfds);
max_fd = MAX(max_fd, node->fd + 1);
}
if (!node->deleted && node->io_write) {
FD_SET(node->fd, &wrfds);
max_fd = MAX(max_fd, node->fd + 1);
}
}
walking_handlers = 0;
/* No AIO operations? Get us out of here */
if (!busy) {
return false;
}
return ret;
}
/* wait until next event */
ret = select(max_fd, &rdfds, &wrfds, NULL, NULL);
void qemu_aio_wait(void)
{
int ret;
if (qemu_bh_poll())
return;
/*
* If there are callbacks left that have been queued, we need to call them.
* Return afterwards to avoid waiting needlessly in select().
*/
if (qemu_aio_process_queue())
return;
do {
AioHandler *node;
fd_set rdfds, wrfds;
int max_fd = -1;
/* if we have any readable fds, dispatch event */
if (ret > 0) {
walking_handlers = 1;
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&aio_handlers);
while (node) {
AioHandler *tmp;
FD_ZERO(&rdfds);
FD_ZERO(&wrfds);
if (!node->deleted &&
FD_ISSET(node->fd, &rdfds) &&
node->io_read) {
node->io_read(node->opaque);
/* fill fd sets */
QLIST_FOREACH(node, &aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (node->io_flush && node->io_flush(node->opaque) == 0)
continue;
if (!node->deleted && node->io_read) {
FD_SET(node->fd, &rdfds);
max_fd = MAX(max_fd, node->fd + 1);
}
if (!node->deleted &&
FD_ISSET(node->fd, &wrfds) &&
node->io_write) {
node->io_write(node->opaque);
}
tmp = node;
node = QLIST_NEXT(node, node);
if (tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
if (!node->deleted && node->io_write) {
FD_SET(node->fd, &wrfds);
max_fd = MAX(max_fd, node->fd + 1);
}
}
walking_handlers = 0;
}
return true;
/* No AIO operations? Get us out of here */
if (max_fd == -1)
break;
/* wait until next event */
ret = select(max_fd, &rdfds, &wrfds, NULL, NULL);
if (ret == -1 && errno == EINTR)
continue;
/* if we have any readable fds, dispatch event */
if (ret > 0) {
walking_handlers = 1;
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&aio_handlers);
while (node) {
AioHandler *tmp;
if (!node->deleted &&
FD_ISSET(node->fd, &rdfds) &&
node->io_read) {
node->io_read(node->opaque);
}
if (!node->deleted &&
FD_ISSET(node->fd, &wrfds) &&
node->io_write) {
node->io_write(node->opaque);
}
tmp = node;
node = QLIST_NEXT(node, node);
if (tmp->deleted) {
QLIST_REMOVE(tmp, node);
qemu_free(tmp);
}
}
walking_handlers = 0;
}
} while (ret == 0);
}


@@ -41,18 +41,6 @@
#include "net.h"
#include "gdbstub.h"
#include "hw/smbios.h"
#include "exec-memory.h"
#include "hw/pcspk.h"
#include "qemu/page_cache.h"
#include "qmp-commands.h"
#ifdef DEBUG_ARCH_INIT
#define DPRINTF(fmt, ...) \
do { fprintf(stdout, "arch_init: " fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...) \
do { } while (0)
#endif
#ifdef TARGET_SPARC
int graphic_width = 1024;
@@ -64,6 +52,7 @@ int graphic_height = 600;
int graphic_depth = 15;
#endif
const char arch_config_name[] = CONFIG_QEMU_CONFDIR "/target-" TARGET_ARCH ".conf";
#if defined(TARGET_ALPHA)
#define QEMU_ARCH QEMU_ARCH_ALPHA
@@ -81,8 +70,6 @@ int graphic_depth = 15;
#define QEMU_ARCH QEMU_ARCH_MICROBLAZE
#elif defined(TARGET_MIPS)
#define QEMU_ARCH QEMU_ARCH_MIPS
#elif defined(TARGET_OPENRISC)
#define QEMU_ARCH QEMU_ARCH_OPENRISC
#elif defined(TARGET_PPC)
#define QEMU_ARCH QEMU_ARCH_PPC
#elif defined(TARGET_S390X)
@@ -91,10 +78,6 @@ int graphic_depth = 15;
#define QEMU_ARCH QEMU_ARCH_SH4
#elif defined(TARGET_SPARC)
#define QEMU_ARCH QEMU_ARCH_SPARC
#elif defined(TARGET_XTENSA)
#define QEMU_ARCH QEMU_ARCH_XTENSA
#elif defined(TARGET_UNICORE32)
#define QEMU_ARCH QEMU_ARCH_UNICORE32
#endif
const uint32_t arch_type = QEMU_ARCH;
@@ -108,67 +91,15 @@ const uint32_t arch_type = QEMU_ARCH;
#define RAM_SAVE_FLAG_PAGE 0x08
#define RAM_SAVE_FLAG_EOS 0x10
#define RAM_SAVE_FLAG_CONTINUE 0x20
#define RAM_SAVE_FLAG_XBZRLE 0x40
#ifdef __ALTIVEC__
#include <altivec.h>
#define VECTYPE vector unsigned char
#define SPLAT(p) vec_splat(vec_ld(0, p), 0)
#define ALL_EQ(v1, v2) vec_all_eq(v1, v2)
/* altivec.h may redefine the bool macro as vector type.
* Reset it to POSIX semantics. */
#undef bool
#define bool _Bool
#elif defined __SSE2__
#include <emmintrin.h>
#define VECTYPE __m128i
#define SPLAT(p) _mm_set1_epi8(*(p))
#define ALL_EQ(v1, v2) (_mm_movemask_epi8(_mm_cmpeq_epi8(v1, v2)) == 0xFFFF)
#else
#define VECTYPE unsigned long
#define SPLAT(p) (*(p) * (~0UL / 255))
#define ALL_EQ(v1, v2) ((v1) == (v2))
#endif
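/*
 * VECTYPE/SPLAT/ALL_EQ form a small portable abstraction for "compare a whole
 * page against one repeated byte": AltiVec and SSE2 builds use 16-byte vector
 * compares, everything else falls back to unsigned-long-sized compares.
 * is_dup_page() only needs SPLAT() to broadcast the first byte of the page
 * and ALL_EQ() to test each VECTYPE-sized chunk against it.
 */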
static struct defconfig_file {
const char *filename;
/* Indicates it is a user config file (disabled by -no-user-config) */
bool userconfig;
} default_config_files[] = {
{ CONFIG_QEMU_DATADIR "/cpus-" TARGET_ARCH ".conf", false },
{ CONFIG_QEMU_CONFDIR "/qemu.conf", true },
{ CONFIG_QEMU_CONFDIR "/target-" TARGET_ARCH ".conf", true },
{ NULL }, /* end of list */
};
int qemu_read_default_config_files(bool userconfig)
static int is_dup_page(uint8_t *page, uint8_t ch)
{
int ret;
struct defconfig_file *f;
for (f = default_config_files; f->filename; f++) {
if (!userconfig && f->userconfig) {
continue;
}
ret = qemu_read_config_file(f->filename);
if (ret < 0 && ret != -ENOENT) {
return ret;
}
}
return 0;
}
static int is_dup_page(uint8_t *page)
{
VECTYPE *p = (VECTYPE *)page;
VECTYPE val = SPLAT(page);
uint32_t val = ch << 24 | ch << 16 | ch << 8 | ch;
uint32_t *array = (uint32_t *)page;
int i;
for (i = 0; i < TARGET_PAGE_SIZE / sizeof(VECTYPE); i++) {
if (!ALL_EQ(val, p[i])) {
for (i = 0; i < (TARGET_PAGE_SIZE / 4); i++) {
if (array[i] != val) {
return 0;
}
}
@@ -176,219 +107,53 @@ static int is_dup_page(uint8_t *page)
return 1;
}
/* struct contains XBZRLE cache and a static page
used by the compression */
static struct {
/* buffer used for XBZRLE encoding */
uint8_t *encoded_buf;
/* buffer for storing page content */
uint8_t *current_buf;
/* buffer used for XBZRLE decoding */
uint8_t *decoded_buf;
/* Cache for XBZRLE */
PageCache *cache;
} XBZRLE = {
.encoded_buf = NULL,
.current_buf = NULL,
.decoded_buf = NULL,
.cache = NULL,
};
int64_t xbzrle_cache_resize(int64_t new_size)
{
if (XBZRLE.cache != NULL) {
return cache_resize(XBZRLE.cache, new_size / TARGET_PAGE_SIZE) *
TARGET_PAGE_SIZE;
}
return pow2floor(new_size);
}
/* accounting for migration statistics */
typedef struct AccountingInfo {
uint64_t dup_pages;
uint64_t norm_pages;
uint64_t iterations;
uint64_t xbzrle_bytes;
uint64_t xbzrle_pages;
uint64_t xbzrle_cache_miss;
uint64_t xbzrle_overflows;
} AccountingInfo;
static AccountingInfo acct_info;
static void acct_clear(void)
{
memset(&acct_info, 0, sizeof(acct_info));
}
uint64_t dup_mig_bytes_transferred(void)
{
return acct_info.dup_pages * TARGET_PAGE_SIZE;
}
uint64_t dup_mig_pages_transferred(void)
{
return acct_info.dup_pages;
}
uint64_t norm_mig_bytes_transferred(void)
{
return acct_info.norm_pages * TARGET_PAGE_SIZE;
}
uint64_t norm_mig_pages_transferred(void)
{
return acct_info.norm_pages;
}
uint64_t xbzrle_mig_bytes_transferred(void)
{
return acct_info.xbzrle_bytes;
}
uint64_t xbzrle_mig_pages_transferred(void)
{
return acct_info.xbzrle_pages;
}
uint64_t xbzrle_mig_pages_cache_miss(void)
{
return acct_info.xbzrle_cache_miss;
}
uint64_t xbzrle_mig_pages_overflow(void)
{
return acct_info.xbzrle_overflows;
}
static void save_block_hdr(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
int cont, int flag)
{
qemu_put_be64(f, offset | cont | flag);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr,
strlen(block->idstr));
}
}
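/* Descriptive aid (mirrors the calls in save_block_hdr() above, not part of
 * the diff): the resulting on-the-wire block header layout is
 *
 *   be64: page offset within the block, OR'd with the cont/flag bits
 *   byte: strlen(block->idstr)   -- only when cont is not set
 *   data: block->idstr bytes     -- only when cont is not set
 */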
#define ENCODING_FLAG_XBZRLE 0x1
static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
ram_addr_t current_addr, RAMBlock *block,
ram_addr_t offset, int cont, bool last_stage)
{
int encoded_len = 0, bytes_sent = -1;
uint8_t *prev_cached_page;
if (!cache_is_cached(XBZRLE.cache, current_addr)) {
if (!last_stage) {
cache_insert(XBZRLE.cache, current_addr,
g_memdup(current_data, TARGET_PAGE_SIZE));
}
acct_info.xbzrle_cache_miss++;
return -1;
}
prev_cached_page = get_cached_data(XBZRLE.cache, current_addr);
/* save current buffer into memory */
memcpy(XBZRLE.current_buf, current_data, TARGET_PAGE_SIZE);
/* XBZRLE encoding (if there is no overflow) */
encoded_len = xbzrle_encode_buffer(prev_cached_page, XBZRLE.current_buf,
TARGET_PAGE_SIZE, XBZRLE.encoded_buf,
TARGET_PAGE_SIZE);
if (encoded_len == 0) {
DPRINTF("Skipping unmodified page\n");
return 0;
} else if (encoded_len == -1) {
DPRINTF("Overflow\n");
acct_info.xbzrle_overflows++;
/* update data in the cache */
memcpy(prev_cached_page, current_data, TARGET_PAGE_SIZE);
return -1;
}
/* we need to update the data in the cache, in order to get the same data */
if (!last_stage) {
memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
}
/* Send XBZRLE based compressed page */
save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_XBZRLE);
qemu_put_byte(f, ENCODING_FLAG_XBZRLE);
qemu_put_be16(f, encoded_len);
qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
bytes_sent = encoded_len + 1 + 2;
acct_info.xbzrle_pages++;
acct_info.xbzrle_bytes += bytes_sent;
return bytes_sent;
}
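/* Sketch of the stream layout emitted for one XBZRLE page by the function
 * above (restates the qemu_put_* calls; sizes in bytes):
 *
 *   block header        written by save_block_hdr(..., RAM_SAVE_FLAG_XBZRLE)
 *   1   ENCODING_FLAG_XBZRLE
 *   2   encoded_len (big endian)
 *   n   encoded_buf contents, n == encoded_len
 *
 * which is why bytes_sent is accounted as encoded_len + 1 + 2 (the block
 * header itself is not counted here).
 */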
static RAMBlock *last_block;
static ram_addr_t last_offset;
/*
* ram_save_block: Writes a page of memory to the stream f
*
* Returns: 0: if the page hasn't changed
* -1: if there are no more dirty pages
* n: the amount of bytes written in other case
*/
static int ram_save_block(QEMUFile *f, bool last_stage)
static int ram_save_block(QEMUFile *f)
{
RAMBlock *block = last_block;
ram_addr_t offset = last_offset;
int bytes_sent = -1;
MemoryRegion *mr;
ram_addr_t current_addr;
int bytes_sent = 0;
if (!block)
block = QLIST_FIRST(&ram_list.blocks);
current_addr = block->offset + offset;
do {
mr = block->mr;
if (memory_region_get_dirty(mr, offset, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
if (cpu_physical_memory_get_dirty(current_addr, MIGRATION_DIRTY_FLAG)) {
uint8_t *p;
int cont = (block == last_block) ? RAM_SAVE_FLAG_CONTINUE : 0;
memory_region_reset_dirty(mr, offset, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION);
cpu_physical_memory_reset_dirty(current_addr,
current_addr + TARGET_PAGE_SIZE,
MIGRATION_DIRTY_FLAG);
p = memory_region_get_ram_ptr(mr) + offset;
p = block->host + offset;
if (is_dup_page(p)) {
acct_info.dup_pages++;
save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_COMPRESS);
if (is_dup_page(p, *p)) {
qemu_put_be64(f, offset | cont | RAM_SAVE_FLAG_COMPRESS);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr,
strlen(block->idstr));
}
qemu_put_byte(f, *p);
bytes_sent = 1;
} else if (migrate_use_xbzrle()) {
current_addr = block->offset + offset;
bytes_sent = save_xbzrle_page(f, p, current_addr, block,
offset, cont, last_stage);
if (!last_stage) {
p = get_cached_data(XBZRLE.cache, current_addr);
} else {
qemu_put_be64(f, offset | cont | RAM_SAVE_FLAG_PAGE);
if (!cont) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr,
strlen(block->idstr));
}
}
/* either we didn't send yet (we may have had XBZRLE overflow) */
if (bytes_sent == -1) {
save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_PAGE);
qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
bytes_sent = TARGET_PAGE_SIZE;
acct_info.norm_pages++;
}
/* if page is unmodified, continue to the next */
if (bytes_sent != 0) {
break;
}
break;
}
offset += TARGET_PAGE_SIZE;
@@ -398,7 +163,10 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
if (!block)
block = QLIST_FIRST(&ram_list.blocks);
}
} while (block != last_block || offset != last_offset);
current_addr = block->offset + offset;
} while (current_addr != last_block->offset + last_offset);
last_block = block;
last_offset = offset;
@@ -410,7 +178,20 @@ static uint64_t bytes_transferred;
static ram_addr_t ram_save_remaining(void)
{
return ram_list.dirty_pages;
RAMBlock *block;
ram_addr_t count = 0;
QLIST_FOREACH(block, &ram_list.blocks, next) {
ram_addr_t addr;
for (addr = block->offset; addr < block->offset + block->length;
addr += TARGET_PAGE_SIZE) {
if (cpu_physical_memory_get_dirty(addr, MIGRATION_DIRTY_FLAG)) {
count++;
}
}
}
return count;
}
uint64_t ram_bytes_remaining(void)
@@ -438,8 +219,12 @@ static int block_compar(const void *a, const void *b)
{
RAMBlock * const *ablock = a;
RAMBlock * const *bblock = b;
return strcmp((*ablock)->idstr, (*bblock)->idstr);
if ((*ablock)->offset < (*bblock)->offset) {
return -1;
} else if ((*ablock)->offset > (*bblock)->offset) {
return 1;
}
return 0;
}
static void sort_ram_list(void)
@@ -450,7 +235,7 @@ static void sort_ram_list(void)
QLIST_FOREACH(block, &ram_list.blocks, next) {
++n;
}
blocks = g_malloc(n * sizeof *blocks);
blocks = qemu_malloc(n * sizeof *blocks);
n = 0;
QLIST_FOREACH_SAFE(block, &ram_list.blocks, next, nblock) {
blocks[n++] = block;
@@ -460,118 +245,67 @@ static void sort_ram_list(void)
while (--n >= 0) {
QLIST_INSERT_HEAD(&ram_list.blocks, blocks[n], next);
}
g_free(blocks);
qemu_free(blocks);
}
static void migration_end(void)
{
memory_global_dirty_log_stop();
if (migrate_use_xbzrle()) {
cache_fini(XBZRLE.cache);
g_free(XBZRLE.cache);
g_free(XBZRLE.encoded_buf);
g_free(XBZRLE.current_buf);
g_free(XBZRLE.decoded_buf);
XBZRLE.cache = NULL;
}
}
static void ram_migration_cancel(void *opaque)
{
migration_end();
}
#define MAX_WAIT 50 /* ms, half buffered_file limit */
static int ram_save_setup(QEMUFile *f, void *opaque)
int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
{
ram_addr_t addr;
RAMBlock *block;
bytes_transferred = 0;
last_block = NULL;
last_offset = 0;
sort_ram_list();
if (migrate_use_xbzrle()) {
XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
TARGET_PAGE_SIZE,
TARGET_PAGE_SIZE);
if (!XBZRLE.cache) {
DPRINTF("Error creating cache\n");
return -1;
}
XBZRLE.encoded_buf = g_malloc0(TARGET_PAGE_SIZE);
XBZRLE.current_buf = g_malloc(TARGET_PAGE_SIZE);
acct_clear();
}
/* Make sure all dirty bits are set */
QLIST_FOREACH(block, &ram_list.blocks, next) {
for (addr = 0; addr < block->length; addr += TARGET_PAGE_SIZE) {
if (!memory_region_get_dirty(block->mr, addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
memory_region_set_dirty(block->mr, addr, TARGET_PAGE_SIZE);
}
}
}
memory_global_dirty_log_start();
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
QLIST_FOREACH(block, &ram_list.blocks, next) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
qemu_put_be64(f, block->length);
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
return 0;
}
static int ram_save_iterate(QEMUFile *f, void *opaque)
{
uint64_t bytes_transferred_last;
double bwidth = 0;
int ret;
int i;
uint64_t expected_time;
uint64_t expected_time = 0;
if (stage < 0) {
cpu_physical_memory_set_dirty_tracking(0);
return 0;
}
if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
qemu_file_set_error(f);
return 0;
}
if (stage == 1) {
RAMBlock *block;
bytes_transferred = 0;
last_block = NULL;
last_offset = 0;
sort_ram_list();
/* Make sure all dirty bits are set */
QLIST_FOREACH(block, &ram_list.blocks, next) {
for (addr = block->offset; addr < block->offset + block->length;
addr += TARGET_PAGE_SIZE) {
if (!cpu_physical_memory_get_dirty(addr,
MIGRATION_DIRTY_FLAG)) {
cpu_physical_memory_set_dirty(addr);
}
}
}
/* Enable dirty memory tracking */
cpu_physical_memory_set_dirty_tracking(1);
qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
QLIST_FOREACH(block, &ram_list.blocks, next) {
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
qemu_put_be64(f, block->length);
}
}
bytes_transferred_last = bytes_transferred;
bwidth = qemu_get_clock_ns(rt_clock);
i = 0;
while ((ret = qemu_file_rate_limit(f)) == 0) {
while (!qemu_file_rate_limit(f)) {
int bytes_sent;
bytes_sent = ram_save_block(f, false);
/* no more blocks to sent */
if (bytes_sent < 0) {
bytes_sent = ram_save_block(f);
bytes_transferred += bytes_sent;
if (bytes_sent == 0) { /* no more blocks */
break;
}
bytes_transferred += bytes_sent;
acct_info.iterations++;
/* we want to check in the 1st loop, just in case it was the 1st time
and we had to sync the dirty bitmap.
qemu_get_clock_ns() is a bit expensive, so we only check each some
iterations
*/
if ((i & 63) == 0) {
uint64_t t1 = (qemu_get_clock_ns(rt_clock) - bwidth) / 1000000;
if (t1 > MAX_WAIT) {
DPRINTF("big wait: %" PRIu64 " milliseconds, %d iterations\n",
t1, i);
break;
}
}
i++;
}
if (ret < 0) {
return ret;
}
bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
@@ -583,85 +317,22 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
bwidth = 0.000001;
}
/* try transferring iterative blocks of memory */
if (stage == 3) {
int bytes_sent;
/* flush all remaining blocks regardless of rate limiting */
while ((bytes_sent = ram_save_block(f)) != 0) {
bytes_transferred += bytes_sent;
}
cpu_physical_memory_set_dirty_tracking(0);
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
DPRINTF("ram_save_live: expected(%" PRIu64 ") <= max(%" PRIu64 ")?\n",
expected_time, migrate_max_downtime());
if (expected_time <= migrate_max_downtime()) {
memory_global_sync_dirty_bitmap(get_system_memory());
expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
return expected_time <= migrate_max_downtime();
}
return 0;
}
static int ram_save_complete(QEMUFile *f, void *opaque)
{
memory_global_sync_dirty_bitmap(get_system_memory());
/* try transferring iterative blocks of memory */
/* flush all remaining blocks regardless of rate limiting */
while (true) {
int bytes_sent;
bytes_sent = ram_save_block(f, true);
/* no more blocks to sent */
if (bytes_sent < 0) {
break;
}
bytes_transferred += bytes_sent;
}
memory_global_dirty_log_stop();
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
return 0;
}
static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
{
int ret, rc = 0;
unsigned int xh_len;
int xh_flags;
if (!XBZRLE.decoded_buf) {
XBZRLE.decoded_buf = g_malloc(TARGET_PAGE_SIZE);
}
/* extract RLE header */
xh_flags = qemu_get_byte(f);
xh_len = qemu_get_be16(f);
if (xh_flags != ENCODING_FLAG_XBZRLE) {
fprintf(stderr, "Failed to load XBZRLE page - wrong compression!\n");
return -1;
}
if (xh_len > TARGET_PAGE_SIZE) {
fprintf(stderr, "Failed to load XBZRLE page - len overflow!\n");
return -1;
}
/* load data and decode */
qemu_get_buffer(f, XBZRLE.decoded_buf, xh_len);
/* decode RLE */
ret = xbzrle_decode_buffer(XBZRLE.decoded_buf, xh_len, host,
TARGET_PAGE_SIZE);
if (ret == -1) {
fprintf(stderr, "Failed to load XBZRLE page - decode error!\n");
rc = -1;
} else if (ret > TARGET_PAGE_SIZE) {
fprintf(stderr, "Failed to load XBZRLE page - size %d exceeds %d!\n",
ret, TARGET_PAGE_SIZE);
abort();
}
return rc;
return (stage == 2) && (expected_time <= migrate_max_downtime());
}
static inline void *host_from_stream_offset(QEMUFile *f,
@@ -678,7 +349,7 @@ static inline void *host_from_stream_offset(QEMUFile *f,
return NULL;
}
return memory_region_get_ram_ptr(block->mr) + offset;
return block->host + offset;
}
len = qemu_get_byte(f);
@@ -687,23 +358,19 @@ static inline void *host_from_stream_offset(QEMUFile *f,
QLIST_FOREACH(block, &ram_list.blocks, next) {
if (!strncmp(id, block->idstr, sizeof(id)))
return memory_region_get_ram_ptr(block->mr) + offset;
return block->host + offset;
}
fprintf(stderr, "Can't find block %s!\n", id);
return NULL;
}
static int ram_load(QEMUFile *f, void *opaque, int version_id)
int ram_load(QEMUFile *f, void *opaque, int version_id)
{
ram_addr_t addr;
int flags, ret = 0;
int error;
static uint64_t seq_iter;
int flags;
seq_iter++;
if (version_id < 4 || version_id > 4) {
if (version_id < 3 || version_id > 4) {
return -EINVAL;
}
@@ -714,7 +381,11 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
addr &= TARGET_PAGE_MASK;
if (flags & RAM_SAVE_FLAG_MEM_SIZE) {
if (version_id == 4) {
if (version_id == 3) {
if (addr != ram_bytes_total()) {
return -EINVAL;
}
} else {
/* Synchronize RAM block list */
char id[256];
ram_addr_t length;
@@ -731,10 +402,8 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
QLIST_FOREACH(block, &ram_list.blocks, next) {
if (!strncmp(id, block->idstr, sizeof(id))) {
if (block->length != length) {
ret = -EINVAL;
goto done;
}
if (block->length != length)
return -EINVAL;
break;
}
}
@@ -742,8 +411,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
if (!block) {
fprintf(stderr, "Unknown ramblock \"%s\", cannot "
"accept migration\n", id);
ret = -EINVAL;
goto done;
return -EINVAL;
}
total_ram_bytes -= length;
@@ -755,7 +423,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
void *host;
uint8_t ch;
host = host_from_stream_offset(f, addr, flags);
if (version_id == 3)
host = qemu_get_ram_ptr(addr);
else
host = host_from_stream_offset(f, addr, flags);
if (!host) {
return -EINVAL;
}
@@ -771,46 +442,25 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
} else if (flags & RAM_SAVE_FLAG_PAGE) {
void *host;
host = host_from_stream_offset(f, addr, flags);
if (!host) {
return -EINVAL;
}
if (version_id == 3)
host = qemu_get_ram_ptr(addr);
else
host = host_from_stream_offset(f, addr, flags);
qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
} else if (flags & RAM_SAVE_FLAG_XBZRLE) {
if (!migrate_use_xbzrle()) {
return -EINVAL;
}
void *host = host_from_stream_offset(f, addr, flags);
if (!host) {
return -EINVAL;
}
if (load_xbzrle(f, addr, host) < 0) {
ret = -EINVAL;
goto done;
}
}
error = qemu_file_get_error(f);
if (error) {
ret = error;
goto done;
if (qemu_file_has_error(f)) {
return -EIO;
}
} while (!(flags & RAM_SAVE_FLAG_EOS));
done:
DPRINTF("Completed load of VM with exit code %d seq iteration "
"%" PRIu64 "\n", ret, seq_iter);
return ret;
return 0;
}
SaveVMHandlers savevm_ram_handlers = {
.save_live_setup = ram_save_setup,
.save_live_iterate = ram_save_iterate,
.save_live_complete = ram_save_complete,
.load_state = ram_load,
.cancel = ram_migration_cancel,
};
void qemu_service_io(void)
{
qemu_notify_event();
}
#ifdef HAS_AUDIO
struct soundhw {
@@ -819,14 +469,14 @@ struct soundhw {
int enabled;
int isa;
union {
int (*init_isa) (ISABus *bus);
int (*init_isa) (qemu_irq *pic);
int (*init_pci) (PCIBus *bus);
} init;
};
static struct soundhw soundhw[] = {
#ifdef HAS_AUDIO_CHOICE
#ifdef CONFIG_PCSPK
#if defined(TARGET_I386) || defined(TARGET_MIPS)
{
"pcspk",
"PC speaker",
@@ -919,20 +569,15 @@ void select_soundhw(const char *optarg)
{
struct soundhw *c;
if (is_help_option(optarg)) {
if (*optarg == '?') {
show_valid_cards:
#ifdef HAS_AUDIO_CHOICE
printf("Valid sound card names (comma separated):\n");
for (c = soundhw; c->name; ++c) {
printf ("%-11s %s\n", c->name, c->descr);
}
printf("\n-soundhw all will enable all of the above\n");
#else
printf("Machine has no user-selectable audio hardware "
"(it may or may not have always-present audio hardware).\n");
#endif
exit(!is_help_option(optarg));
exit(*optarg != '?');
}
else {
size_t l;
@@ -979,15 +624,15 @@ void select_soundhw(const char *optarg)
}
}
void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
void audio_init(qemu_irq *isa_pic, PCIBus *pci_bus)
{
struct soundhw *c;
for (c = soundhw; c->name; ++c) {
if (c->enabled) {
if (c->isa) {
if (isa_bus) {
c->init.init_isa(isa_bus);
if (isa_pic) {
c->init.init_isa(isa_pic);
}
} else {
if (pci_bus) {
@@ -1001,7 +646,7 @@ void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
void select_soundhw(const char *optarg)
{
}
void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
void audio_init(qemu_irq *isa_pic, PCIBus *pci_bus)
{
}
#endif
@@ -1086,13 +731,3 @@ int xen_available(void)
return 0;
#endif
}
TargetInfo *qmp_query_target(Error **errp)
{
TargetInfo *info = g_malloc0(sizeof(*info));
info->arch = TARGET_TYPE;
return info;
}


@@ -1,7 +1,7 @@
#ifndef QEMU_ARCH_INIT_H
#define QEMU_ARCH_INIT_H
#include "qmp-commands.h"
extern const char arch_config_name[];
enum {
QEMU_ARCH_ALL = -1,
@@ -17,9 +17,6 @@ enum {
QEMU_ARCH_S390X = 512,
QEMU_ARCH_SH4 = 1024,
QEMU_ARCH_SPARC = 2048,
QEMU_ARCH_XTENSA = 4096,
QEMU_ARCH_OPENRISC = 8192,
QEMU_ARCH_UNICORE32 = 0x4000,
};
extern const uint32_t arch_type;
@@ -29,11 +26,9 @@ void do_acpitable_option(const char *optarg);
void do_smbios_option(const char *optarg);
void cpudef_init(void);
int audio_available(void);
void audio_init(ISABus *isa_bus, PCIBus *pci_bus);
void audio_init(qemu_irq *isa_pic, PCIBus *pci_bus);
int tcg_available(void);
int kvm_available(void);
int xen_available(void);
CpuDefinitionInfoList GCC_WEAK_DECL *arch_query_cpu_definitions(Error **errp);
#endif


@@ -1624,7 +1624,7 @@ arm_decode_shift (long given, fprintf_function func, void *stream,
}
/* Print one coprocessor instruction on INFO->STREAM.
Return true if the instruction matched, false if this is not a
Return true if the instuction matched, false if this is not a
recognised coprocessor instruction. */
static bfd_boolean
@@ -2214,7 +2214,7 @@ print_arm_address (bfd_vma pc, struct disassemble_info *info, long given)
}
/* Print one neon instruction on INFO->STREAM.
Return true if the instruction matched, false if this is not a
Return true if the instuction matched, false if this is not a
recognised neon instruction. */
static bfd_boolean
@@ -3927,7 +3927,7 @@ print_insn_arm (bfd_vma pc, struct disassemble_info *info)
n = last_mapping_sym - 1;
/* No mapping symbol found at this address. Look backwards
for a preceding one. */
for a preceeding one. */
for (; n >= 0; n--)
{
if (get_sym_code_type (info, n, &type))


@@ -37,26 +37,26 @@
#include "hw/arm-misc.h"
#endif
#define TARGET_SYS_OPEN 0x01
#define TARGET_SYS_CLOSE 0x02
#define TARGET_SYS_WRITEC 0x03
#define TARGET_SYS_WRITE0 0x04
#define TARGET_SYS_WRITE 0x05
#define TARGET_SYS_READ 0x06
#define TARGET_SYS_READC 0x07
#define TARGET_SYS_ISTTY 0x09
#define TARGET_SYS_SEEK 0x0a
#define TARGET_SYS_FLEN 0x0c
#define TARGET_SYS_TMPNAM 0x0d
#define TARGET_SYS_REMOVE 0x0e
#define TARGET_SYS_RENAME 0x0f
#define TARGET_SYS_CLOCK 0x10
#define TARGET_SYS_TIME 0x11
#define TARGET_SYS_SYSTEM 0x12
#define TARGET_SYS_ERRNO 0x13
#define TARGET_SYS_GET_CMDLINE 0x15
#define TARGET_SYS_HEAPINFO 0x16
#define TARGET_SYS_EXIT 0x18
#define SYS_OPEN 0x01
#define SYS_CLOSE 0x02
#define SYS_WRITEC 0x03
#define SYS_WRITE0 0x04
#define SYS_WRITE 0x05
#define SYS_READ 0x06
#define SYS_READC 0x07
#define SYS_ISTTY 0x09
#define SYS_SEEK 0x0a
#define SYS_FLEN 0x0c
#define SYS_TMPNAM 0x0d
#define SYS_REMOVE 0x0e
#define SYS_RENAME 0x0f
#define SYS_CLOCK 0x10
#define SYS_TIME 0x11
#define SYS_SYSTEM 0x12
#define SYS_ERRNO 0x13
#define SYS_GET_CMDLINE 0x15
#define SYS_HEAPINFO 0x16
#define SYS_EXIT 0x18
#ifndef O_BINARY
#define O_BINARY 0
@@ -108,7 +108,7 @@ static inline uint32_t set_swi_errno(TaskState *ts, uint32_t code)
return code;
}
#else
static inline uint32_t set_swi_errno(CPUARMState *env, uint32_t code)
static inline uint32_t set_swi_errno(CPUState *env, uint32_t code)
{
return code;
}
@@ -122,7 +122,7 @@ static target_ulong arm_semi_syscall_len;
static target_ulong syscall_err;
#endif
static void arm_semi_cb(CPUARMState *env, target_ulong ret, target_ulong err)
static void arm_semi_cb(CPUState *env, target_ulong ret, target_ulong err)
{
#ifdef CONFIG_USER_ONLY
TaskState *ts = env->opaque;
@@ -138,11 +138,11 @@ static void arm_semi_cb(CPUARMState *env, target_ulong ret, target_ulong err)
} else {
/* Fixup syscalls that use nonstardard return conventions. */
switch (env->regs[0]) {
case TARGET_SYS_WRITE:
case TARGET_SYS_READ:
case SYS_WRITE:
case SYS_READ:
env->regs[0] = arm_semi_syscall_len - ret;
break;
case TARGET_SYS_SEEK:
case SYS_SEEK:
env->regs[0] = 0;
break;
default:
@@ -152,7 +152,7 @@ static void arm_semi_cb(CPUARMState *env, target_ulong ret, target_ulong err)
}
}
static void arm_semi_flen_cb(CPUARMState *env, target_ulong ret, target_ulong err)
static void arm_semi_flen_cb(CPUState *env, target_ulong ret, target_ulong err)
{
/* The size is always stored in big-endian order, extract
the value. We assume the size always fit in 32 bits. */
@@ -174,7 +174,7 @@ static void arm_semi_flen_cb(CPUARMState *env, target_ulong ret, target_ulong er
__arg; \
})
#define SET_ARG(n, val) put_user_ual(val, args + (n) * 4)
uint32_t do_arm_semihosting(CPUARMState *env)
uint32_t do_arm_semihosting(CPUState *env)
{
target_ulong args;
char * s;
@@ -184,42 +184,41 @@ uint32_t do_arm_semihosting(CPUARMState *env)
#ifdef CONFIG_USER_ONLY
TaskState *ts = env->opaque;
#else
CPUARMState *ts = env;
CPUState *ts = env;
#endif
nr = env->regs[0];
args = env->regs[1];
switch (nr) {
case TARGET_SYS_OPEN:
case SYS_OPEN:
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
if (ARG(1) >= 12) {
unlock_user(s, ARG(0), 0);
if (ARG(1) >= 12)
return (uint32_t)-1;
}
if (strcmp(s, ":tt") == 0) {
int result_fileno = ARG(1) < 4 ? STDIN_FILENO : STDOUT_FILENO;
unlock_user(s, ARG(0), 0);
return result_fileno;
if (ARG(1) < 4)
return STDIN_FILENO;
else
return STDOUT_FILENO;
}
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "open,%s,%x,1a4", ARG(0),
(int)ARG(2)+1, gdb_open_modeflags[ARG(1)]);
ret = env->regs[0];
return env->regs[0];
} else {
ret = set_swi_errno(ts, open(s, open_modeflags[ARG(1)], 0644));
}
unlock_user(s, ARG(0), 0);
return ret;
case TARGET_SYS_CLOSE:
case SYS_CLOSE:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "close,%x", ARG(0));
return env->regs[0];
} else {
return set_swi_errno(ts, close(ARG(0)));
}
case TARGET_SYS_WRITEC:
case SYS_WRITEC:
{
char c;
@@ -234,7 +233,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return write(STDERR_FILENO, &c, 1);
}
}
case TARGET_SYS_WRITE0:
case SYS_WRITE0:
if (!(s = lock_user_string(args)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
@@ -247,7 +246,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
}
unlock_user(s, args, 0);
return ret;
case TARGET_SYS_WRITE:
case SYS_WRITE:
len = ARG(2);
if (use_gdb_syscalls()) {
arm_semi_syscall_len = len;
@@ -263,7 +262,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return len - ret;
}
case TARGET_SYS_READ:
case SYS_READ:
len = ARG(2);
if (use_gdb_syscalls()) {
arm_semi_syscall_len = len;
@@ -281,17 +280,17 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return len - ret;
}
case TARGET_SYS_READC:
/* XXX: Read from debug console. Not implemented. */
case SYS_READC:
/* XXX: Read from debug cosole. Not implemented. */
return 0;
case TARGET_SYS_ISTTY:
case SYS_ISTTY:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "isatty,%x", ARG(0));
return env->regs[0];
} else {
return isatty(ARG(0));
}
case TARGET_SYS_SEEK:
case SYS_SEEK:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "lseek,%x,%x,0", ARG(0), ARG(1));
return env->regs[0];
@@ -301,7 +300,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return 0;
}
case TARGET_SYS_FLEN:
case SYS_FLEN:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_flen_cb, "fstat,%x,%x",
ARG(0), env->regs[13]-64);
@@ -313,10 +312,10 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return -1;
return buf.st_size;
}
case TARGET_SYS_TMPNAM:
case SYS_TMPNAM:
/* XXX: Not implemented. */
return -1;
case TARGET_SYS_REMOVE:
case SYS_REMOVE:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "unlink,%s", ARG(0), (int)ARG(1)+1);
ret = env->regs[0];
@@ -328,7 +327,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
unlock_user(s, ARG(0), 0);
}
return ret;
case TARGET_SYS_RENAME:
case SYS_RENAME:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "rename,%s,%s",
ARG(0), (int)ARG(1)+1, ARG(2), (int)ARG(3)+1);
@@ -348,11 +347,11 @@ uint32_t do_arm_semihosting(CPUARMState *env)
unlock_user(s, ARG(0), 0);
return ret;
}
case TARGET_SYS_CLOCK:
case SYS_CLOCK:
return clock() / (CLOCKS_PER_SEC / 100);
case TARGET_SYS_TIME:
case SYS_TIME:
return set_swi_errno(ts, time(NULL));
case TARGET_SYS_SYSTEM:
case SYS_SYSTEM:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "system,%s", ARG(0), (int)ARG(1)+1);
return env->regs[0];
@@ -364,13 +363,13 @@ uint32_t do_arm_semihosting(CPUARMState *env)
unlock_user(s, ARG(0), 0);
return ret;
}
case TARGET_SYS_ERRNO:
case SYS_ERRNO:
#ifdef CONFIG_USER_ONLY
return ts->swi_errno;
#else
return syscall_err;
#endif
case TARGET_SYS_GET_CMDLINE:
case SYS_GET_CMDLINE:
{
/* Build a command-line from the original argv.
*
@@ -453,7 +452,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
return status;
}
case TARGET_SYS_HEAPINFO:
case SYS_HEAPINFO:
{
uint32_t *ptr;
uint32_t limit;
@@ -499,7 +498,7 @@ uint32_t do_arm_semihosting(CPUARMState *env)
#endif
return 0;
}
case TARGET_SYS_EXIT:
case SYS_EXIT:
gdb_exit(env, 0);
exit(0);
default:

arm.ld

@@ -71,23 +71,23 @@ SECTIONS
.data1 : { *(.data1) }
.preinit_array :
{
PROVIDE (__preinit_array_start = .);
PROVIDE_HIDDEN (__preinit_array_start = .);
KEEP (*(.preinit_array))
PROVIDE (__preinit_array_end = .);
PROVIDE_HIDDEN (__preinit_array_end = .);
}
.init_array :
{
PROVIDE (__init_array_start = .);
PROVIDE_HIDDEN (__init_array_start = .);
KEEP (*(SORT(.init_array.*)))
KEEP (*(.init_array))
PROVIDE (__init_array_end = .);
PROVIDE_HIDDEN (__init_array_end = .);
}
.fini_array :
{
PROVIDE (__fini_array_start = .);
PROVIDE_HIDDEN (__fini_array_start = .);
KEEP (*(.fini_array))
KEEP (*(SORT(.fini_array.*)))
PROVIDE (__fini_array_end = .);
PROVIDE_HIDDEN (__fini_array_end = .);
}
.ctors :
{

async.c

@@ -24,10 +24,93 @@
#include "qemu-common.h"
#include "qemu-aio.h"
#include "main-loop.h"
/* Anchor of the list of Bottom Halves belonging to the context */
static struct QEMUBH *first_bh;
/*
* An AsyncContext protects the callbacks of AIO requests and Bottom Halves
* against interfering with each other. A typical example is qcow2 that accepts
* asynchronous requests, but relies for manipulation of its metadata on
* synchronous bdrv_read/write that doesn't trigger any callbacks.
*
* However, these functions are often emulated using AIO which means that AIO
* callbacks must be run - but at the same time we must not run callbacks of
* other requests as they might start to modify metadata and corrupt the
* internal state of the caller of bdrv_read/write.
*
* To achieve the desired semantics we switch into a new AsyncContext.
* Callbacks must only be run if they belong to the current AsyncContext.
* Otherwise they need to be queued until their own context is active again.
* This is how you can make qemu_aio_wait() wait only for your own callbacks.
*
* The AsyncContexts form a stack. When you leave a AsyncContexts, you always
* return to the old ("parent") context.
*/
struct AsyncContext {
/* Consecutive number of the AsyncContext (position in the stack) */
int id;
/* Anchor of the list of Bottom Halves belonging to the context */
struct QEMUBH *first_bh;
/* Link to parent context */
struct AsyncContext *parent;
};
/* The currently active AsyncContext */
static struct AsyncContext *async_context = &(struct AsyncContext) { 0 };
/*
* Enter a new AsyncContext. Already scheduled Bottom Halves and AIO callbacks
* won't be called until this context is left again.
*/
void async_context_push(void)
{
struct AsyncContext *new = qemu_mallocz(sizeof(*new));
new->parent = async_context;
new->id = async_context->id + 1;
async_context = new;
}
/* Run queued AIO completions and destroy Bottom Half */
static void bh_run_aio_completions(void *opaque)
{
QEMUBH **bh = opaque;
qemu_bh_delete(*bh);
qemu_free(bh);
qemu_aio_process_queue();
}
/*
* Leave the currently active AsyncContext. All Bottom Halves belonging to the
* old context are executed before changing the context.
*/
void async_context_pop(void)
{
struct AsyncContext *old = async_context;
QEMUBH **bh;
/* Flush the bottom halves, we don't want to lose them */
while (qemu_bh_poll());
/* Switch back to the parent context */
async_context = async_context->parent;
qemu_free(old);
if (async_context == NULL) {
abort();
}
/* Schedule BH to run any queued AIO completions as soon as possible */
bh = qemu_malloc(sizeof(*bh));
*bh = qemu_bh_new(bh_run_aio_completions, bh);
qemu_bh_schedule(*bh);
}
/*
* Returns the ID of the currently active AsyncContext
*/
int get_async_context_id(void)
{
return async_context->id;
}
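/* Illustrative usage sketch of the pattern the comment above describes (not
 * code from this file): a caller doing synchronous metadata I/O while other
 * AIO requests are in flight brackets it with its own context:
 *
 *     async_context_push();
 *     // issue bdrv_read()/bdrv_write() and wait with qemu_aio_wait();
 *     // only callbacks belonging to this context are run meanwhile
 *     async_context_pop();
 *
 * Callbacks queued for the parent context are dispatched after the pop, via
 * the bottom half scheduled in async_context_pop().
 */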
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
@@ -35,20 +118,20 @@ static struct QEMUBH *first_bh;
struct QEMUBH {
QEMUBHFunc *cb;
void *opaque;
int scheduled;
int idle;
int deleted;
QEMUBH *next;
bool scheduled;
bool idle;
bool deleted;
};
QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_malloc0(sizeof(QEMUBH));
bh = qemu_mallocz(sizeof(QEMUBH));
bh->cb = cb;
bh->opaque = opaque;
bh->next = first_bh;
first_bh = bh;
bh->next = async_context->first_bh;
async_context->first_bh = bh;
return bh;
}
@@ -56,12 +139,9 @@ int qemu_bh_poll(void)
{
QEMUBH *bh, **bhp, *next;
int ret;
static int nesting = 0;
nesting++;
ret = 0;
for (bh = first_bh; bh; bh = next) {
for (bh = async_context->first_bh; bh; bh = next) {
next = bh->next;
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
@@ -72,20 +152,15 @@ int qemu_bh_poll(void)
}
}
nesting--;
/* remove deleted bhs */
if (!nesting) {
bhp = &first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
g_free(bh);
} else {
bhp = &bh->next;
}
}
bhp = &async_context->first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
qemu_free(bh);
} else
bhp = &bh->next;
}
return ret;
@@ -120,11 +195,11 @@ void qemu_bh_delete(QEMUBH *bh)
bh->deleted = 1;
}
void qemu_bh_update_timeout(uint32_t *timeout)
void qemu_bh_update_timeout(int *timeout)
{
QEMUBH *bh;
for (bh = first_bh; bh; bh = bh->next) {
for (bh = async_context->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least


@@ -1,14 +0,0 @@
common-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
common-obj-$(CONFIG_SDL) += sdlaudio.o
common-obj-$(CONFIG_OSS) += ossaudio.o
common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o


@@ -136,7 +136,7 @@ static void alsa_fini_poll (struct pollhlp *hlp)
for (i = 0; i < hlp->count; ++i) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
qemu_free (pfds);
}
hlp->pfds = NULL;
hlp->count = 0;
@@ -260,7 +260,7 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
if (err < 0) {
alsa_logerr (err, "Could not initialize poll mode\n"
"Could not obtain poll descriptors\n");
g_free (pfds);
qemu_free (pfds);
return -1;
}
@@ -288,7 +288,7 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
qemu_free (pfds);
return -1;
}
}
@@ -816,7 +816,7 @@ static void alsa_fini_out (HWVoiceOut *hw)
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
if (alsa->pcm_buf) {
g_free (alsa->pcm_buf);
qemu_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}
@@ -979,7 +979,7 @@ static void alsa_fini_in (HWVoiceIn *hw)
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
if (alsa->pcm_buf) {
g_free (alsa->pcm_buf);
qemu_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}


@@ -196,7 +196,7 @@ void *audio_calloc (const char *funcname, int nmemb, size_t size)
return NULL;
}
return g_malloc0 (len);
return qemu_mallocz (len);
}
static char *audio_alloc_prefix (const char *s)
@@ -210,7 +210,7 @@ static char *audio_alloc_prefix (const char *s)
}
len = strlen (s);
r = g_malloc (len + sizeof (qemu_prefix));
r = qemu_malloc (len + sizeof (qemu_prefix));
u = r + sizeof (qemu_prefix) - 1;
@@ -425,7 +425,7 @@ static void audio_print_options (const char *prefix,
printf (" %s\n", opt->descr);
}
g_free (uprefix);
qemu_free (uprefix);
}
static void audio_process_options (const char *prefix,
@@ -462,7 +462,7 @@ static void audio_process_options (const char *prefix,
* (includes trailing zero) + zero + underscore (on behalf of
* sizeof) */
optlen = len + preflen + sizeof (qemu_prefix) + 1;
optname = g_malloc (optlen);
optname = qemu_malloc (optlen);
pstrcpy (optname, optlen, qemu_prefix);
@@ -507,7 +507,7 @@ static void audio_process_options (const char *prefix,
opt->overriddenp = &opt->overridden;
}
*opt->overriddenp = !def;
g_free (optname);
qemu_free (optname);
}
}
@@ -585,20 +585,17 @@ static int audio_pcm_info_eq (struct audio_pcm_info *info, struct audsettings *a
switch (as->fmt) {
case AUD_FMT_S8:
sign = 1;
/* fall through */
case AUD_FMT_U8:
break;
case AUD_FMT_S16:
sign = 1;
/* fall through */
case AUD_FMT_U16:
bits = 16;
break;
case AUD_FMT_S32:
sign = 1;
/* fall through */
case AUD_FMT_U32:
bits = 32;
break;
@@ -781,7 +778,7 @@ static void audio_detach_capture (HWVoiceOut *hw)
QLIST_REMOVE (sw, entries);
QLIST_REMOVE (sc, entries);
g_free (sc);
qemu_free (sc);
if (was_active) {
/* We have removed soft voice from the capture:
this might have changed the overall status of the capture
@@ -818,11 +815,10 @@ static int audio_attach_capture (HWVoiceOut *hw)
sw->active = hw->enabled;
sw->conv = noop_conv;
sw->ratio = ((int64_t) hw_cap->info.freq << 32) / sw->info.freq;
sw->vol = nominal_volume;
sw->rate = st_rate_start (sw->info.freq, hw_cap->info.freq);
if (!sw->rate) {
dolog ("Could not start rate conversion for `%s'\n", SW_NAME (sw));
g_free (sw);
qemu_free (sw);
return -1;
}
QLIST_INSERT_HEAD (&hw_cap->sw_head, sw, entries);
@@ -958,9 +954,7 @@ int audio_pcm_sw_read (SWVoiceIn *sw, void *buf, int size)
total += isamp;
}
if (!(hw->ctl_caps & VOICE_VOLUME_CAP)) {
mixeng_volume (sw->buf, ret, &sw->vol);
}
mixeng_volume (sw->buf, ret, &sw->vol);
sw->clip (buf, sw->buf, ret);
sw->total_hw_samples_acquired += total;
@@ -1044,10 +1038,7 @@ int audio_pcm_sw_write (SWVoiceOut *sw, void *buf, int size)
swlim = audio_MIN (swlim, samples);
if (swlim) {
sw->conv (sw->buf, buf, swlim);
if (!(sw->hw->ctl_caps & VOICE_VOLUME_CAP)) {
mixeng_volume (sw->buf, swlim, &sw->vol);
}
mixeng_volume (sw->buf, swlim, &sw->vol);
}
while (swlim) {
@@ -1674,7 +1665,7 @@ static void audio_pp_nb_voices (const char *typ, int nb)
printf ("Theoretically supports many %s voices\n", typ);
break;
default:
printf ("Theoretically supports up to %d %s voices\n", nb, typ);
printf ("Theoretically supports upto %d %s voices\n", nb, typ);
break;
}
@@ -1752,7 +1743,7 @@ static int audio_driver_init (AudioState *s, struct audio_driver *drv)
}
static void audio_vm_change_state_handler (void *opaque, int running,
RunState state)
int reason)
{
AudioState *s = opaque;
HWVoiceOut *hwo = NULL;
@@ -1776,12 +1767,10 @@ static void audio_atexit (void)
HWVoiceOut *hwo = NULL;
HWVoiceIn *hwi = NULL;
while ((hwo = audio_pcm_hw_find_any_out (hwo))) {
while ((hwo = audio_pcm_hw_find_any_enabled_out (hwo))) {
SWVoiceCap *sc;
if (hwo->enabled) {
hwo->pcm_ops->ctl_out (hwo, VOICE_DISABLE);
}
hwo->pcm_ops->ctl_out (hwo, VOICE_DISABLE);
hwo->pcm_ops->fini_out (hwo);
for (sc = hwo->cap_head.lh_first; sc; sc = sc->entries.le_next) {
@@ -1794,10 +1783,8 @@ static void audio_atexit (void)
}
}
while ((hwi = audio_pcm_hw_find_any_in (hwi))) {
if (hwi->enabled) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
}
while ((hwi = audio_pcm_hw_find_any_enabled_in (hwi))) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
hwi->pcm_ops->fini_in (hwi);
}
@@ -1920,7 +1907,7 @@ static void audio_init (void)
void AUD_register_card (const char *name, QEMUSoundCard *card)
{
audio_init ();
card->name = g_strdup (name);
card->name = qemu_strdup (name);
memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD (&glob_audio_state.card_head, card, entries);
}
@@ -1928,7 +1915,7 @@ void AUD_register_card (const char *name, QEMUSoundCard *card)
void AUD_remove_card (QEMUSoundCard *card)
{
QLIST_REMOVE (card, entries);
g_free (card->name);
qemu_free (card->name);
}
@@ -2013,11 +2000,11 @@ CaptureVoiceOut *AUD_add_capture (
return cap;
err3:
g_free (cap->hw.mix_buf);
qemu_free (cap->hw.mix_buf);
err2:
g_free (cap);
qemu_free (cap);
err1:
g_free (cb);
qemu_free (cb);
err0:
return NULL;
}
@@ -2031,7 +2018,7 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
if (cb->opaque == cb_opaque) {
cb->ops.destroy (cb_opaque);
QLIST_REMOVE (cb, entries);
g_free (cb);
qemu_free (cb);
if (!cap->cb_head.lh_first) {
SWVoiceOut *sw = cap->hw.sw_head.lh_first, *sw1;
@@ -2049,11 +2036,11 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
}
QLIST_REMOVE (sw, entries);
QLIST_REMOVE (sc, entries);
g_free (sc);
qemu_free (sc);
sw = sw1;
}
QLIST_REMOVE (cap, entries);
g_free (cap);
qemu_free (cap);
}
return;
}
@@ -2063,29 +2050,17 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
void AUD_set_volume_out (SWVoiceOut *sw, int mute, uint8_t lvol, uint8_t rvol)
{
if (sw) {
HWVoiceOut *hw = sw->hw;
sw->vol.mute = mute;
sw->vol.l = nominal_volume.l * lvol / 255;
sw->vol.r = nominal_volume.r * rvol / 255;
if (hw->pcm_ops->ctl_out) {
hw->pcm_ops->ctl_out (hw, VOICE_VOLUME, sw);
}
}
}
void AUD_set_volume_in (SWVoiceIn *sw, int mute, uint8_t lvol, uint8_t rvol)
{
if (sw) {
HWVoiceIn *hw = sw->hw;
sw->vol.mute = mute;
sw->vol.l = nominal_volume.l * lvol / 255;
sw->vol.r = nominal_volume.r * rvol / 255;
if (hw->pcm_ops->ctl_in) {
hw->pcm_ops->ctl_in (hw, VOICE_VOLUME, sw);
}
}
}


@@ -82,7 +82,6 @@ typedef struct HWVoiceOut {
int samples;
QLIST_HEAD (sw_out_listhead, SWVoiceOut) sw_head;
QLIST_HEAD (sw_cap_listhead, SWVoiceCap) cap_head;
int ctl_caps;
struct audio_pcm_ops *pcm_ops;
QLIST_ENTRY (HWVoiceOut) entries;
} HWVoiceOut;
@@ -102,7 +101,6 @@ typedef struct HWVoiceIn {
int samples;
QLIST_HEAD (sw_in_listhead, SWVoiceIn) sw_head;
int ctl_caps;
struct audio_pcm_ops *pcm_ops;
QLIST_ENTRY (HWVoiceIn) entries;
} HWVoiceIn;
@@ -152,7 +150,6 @@ struct audio_driver {
int max_voices_in;
int voice_size_out;
int voice_size_in;
int ctl_caps;
};
struct audio_pcm_ops {
@@ -234,9 +231,6 @@ void audio_run (const char *msg);
#define VOICE_ENABLE 1
#define VOICE_DISABLE 2
#define VOICE_VOLUME 3
#define VOICE_VOLUME_CAP (1 << VOICE_VOLUME)
static inline int audio_ring_dist (int dst, int src, int len)
{


@@ -72,7 +72,7 @@ static void glue (audio_init_nb_voices_, TYPE) (struct audio_driver *drv)
static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
{
if (HWBUF) {
g_free (HWBUF);
qemu_free (HWBUF);
}
HWBUF = NULL;
@@ -93,7 +93,7 @@ static int glue (audio_pcm_hw_alloc_resources_, TYPE) (HW *hw)
static void glue (audio_pcm_sw_free_resources_, TYPE) (SW *sw)
{
if (sw->buf) {
g_free (sw->buf);
qemu_free (sw->buf);
}
if (sw->rate) {
@@ -123,7 +123,7 @@ static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
sw->rate = st_rate_start (sw->hw->info.freq, sw->info.freq);
#endif
if (!sw->rate) {
g_free (sw->buf);
qemu_free (sw->buf);
sw->buf = NULL;
return -1;
}
@@ -160,10 +160,10 @@ static int glue (audio_pcm_sw_init_, TYPE) (
[sw->info.swap_endianness]
[audio_bits_to_index (sw->info.bits)];
sw->name = g_strdup (name);
sw->name = qemu_strdup (name);
err = glue (audio_pcm_sw_alloc_resources_, TYPE) (sw);
if (err) {
g_free (sw->name);
qemu_free (sw->name);
sw->name = NULL;
}
return err;
@@ -173,7 +173,7 @@ static void glue (audio_pcm_sw_fini_, TYPE) (SW *sw)
{
glue (audio_pcm_sw_free_resources_, TYPE) (sw);
if (sw->name) {
g_free (sw->name);
qemu_free (sw->name);
sw->name = NULL;
}
}
@@ -201,7 +201,7 @@ static void glue (audio_pcm_hw_gc_, TYPE) (HW **hwp)
glue (s->nb_hw_voices_, TYPE) += 1;
glue (audio_pcm_hw_free_resources_ ,TYPE) (hw);
glue (hw->pcm_ops->fini_, TYPE) (hw);
g_free (hw);
qemu_free (hw);
*hwp = NULL;
}
}
@@ -263,8 +263,6 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
}
hw->pcm_ops = drv->pcm_ops;
hw->ctl_caps = drv->ctl_caps;
QLIST_INIT (&hw->sw_head);
#ifdef DAC
QLIST_INIT (&hw->cap_head);
@@ -302,7 +300,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
err1:
glue (hw->pcm_ops->fini_, TYPE) (hw);
err0:
g_free (hw);
qemu_free (hw);
return NULL;
}
@@ -370,7 +368,7 @@ err3:
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&hw);
err2:
g_free (sw);
qemu_free (sw);
err1:
return NULL;
}
@@ -380,7 +378,7 @@ static void glue (audio_close_, TYPE) (SW *sw)
glue (audio_pcm_sw_fini_, TYPE) (sw);
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&sw->hw);
g_free (sw);
qemu_free (sw);
}
void glue (AUD_close_, TYPE) (QEMUSoundCard *card, SW *sw)
@@ -410,15 +408,15 @@ SW *glue (AUD_open_, TYPE) (
SW *old_sw = NULL;
#endif
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
card, name, callback_fn, as);
goto fail;
}
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug (AUDIO_FUNC, audio_validate_settings (as))) {
audio_print_settings (as);
goto fail;


@@ -201,7 +201,7 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
@@ -246,7 +246,7 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
@@ -270,7 +270,7 @@ static void qesd_fini_out (HWVoiceOut *hw)
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
@@ -453,7 +453,7 @@ static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
esd->fd = -1;
fail1:
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
@@ -477,7 +477,7 @@ static void qesd_fini_in (HWVoiceIn *hw)
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}


@@ -343,7 +343,7 @@ static void fmod_fini_out (HWVoiceOut *hw)
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
int bits16, mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
@@ -374,6 +374,7 @@ static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
bits16 = (mode & FSOUND_16BITS) != 0;
hw->samples = conf.nb_samples;
return 0;
}
@@ -404,7 +405,7 @@ static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
int bits16, mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
@@ -431,6 +432,7 @@ static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
bits16 = (mode & FSOUND_16BITS) != 0;
hw->samples = conf.nb_samples;
return 0;
}


@@ -33,8 +33,7 @@
#define ENDIAN_CONVERT(v) (v)
/* Signed 8 bit */
#define BSIZE 8
#define ITYPE int
#define IN_T int8_t
#define IN_MIN SCHAR_MIN
#define IN_MAX SCHAR_MAX
#define SIGNED
@@ -43,29 +42,25 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 8 bit */
#define BSIZE 8
#define ITYPE uint
#define IN_T uint8_t
#define IN_MIN 0
#define IN_MAX UCHAR_MAX
#define SHIFT 8
#include "mixeng_template.h"
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
#undef ENDIAN_CONVERT
#undef ENDIAN_CONVERSION
/* Signed 16 bit */
#define BSIZE 16
#define ITYPE int
#define IN_T int16_t
#define IN_MIN SHRT_MIN
#define IN_MAX SHRT_MAX
#define SIGNED
@@ -83,13 +78,11 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 16 bit */
#define BSIZE 16
#define ITYPE uint
#define IN_T uint16_t
#define IN_MIN 0
#define IN_MAX USHRT_MAX
#define SHIFT 16
@@ -105,13 +98,11 @@
#undef ENDIAN_CONVERSION
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Signed 32 bit */
#define BSIZE 32
#define ITYPE int
#define IN_T int32_t
#define IN_MIN INT32_MIN
#define IN_MAX INT32_MAX
#define SIGNED
@@ -129,13 +120,11 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 32 bit */
#define BSIZE 32
#define ITYPE uint
#define IN_T uint32_t
#define IN_MIN 0
#define IN_MAX UINT32_MAX
#define SHIFT 32
@@ -151,8 +140,7 @@
#undef ENDIAN_CONVERSION
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
t_sample *mixeng_conv[2][2][2][3] = {
@@ -338,7 +326,7 @@ void *st_rate_start (int inrate, int outrate)
void st_rate_stop (void *opaque)
{
g_free (opaque);
qemu_free (opaque);
}
void mixeng_clear (struct st_sample *buf, int len)


@@ -31,8 +31,7 @@
#define HALF (IN_MAX >> 1)
#endif
#define ET glue (ENDIAN_CONVERSION, glue (glue (glue (_, ITYPE), BSIZE), _t))
#define IN_T glue (glue (ITYPE, BSIZE), _t)
#define ET glue (ENDIAN_CONVERSION, glue (_, IN_T))
#ifdef FLOAT_MIXENG
static mixeng_real inline glue (conv_, ET) (IN_T v)
@@ -151,4 +150,3 @@ static void glue (glue (clip_, ET), _from_mono)
#undef ET
#undef HALF
#undef IN_T


@@ -508,7 +508,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
else {
g_free (oss->pcm_buf);
qemu_free (oss->pcm_buf);
}
oss->pcm_buf = NULL;
}
@@ -741,7 +741,7 @@ static void oss_fini_in (HWVoiceIn *hw)
oss_anal_close (&oss->fd);
if (oss->pcm_buf) {
g_free (oss->pcm_buf);
qemu_free (oss->pcm_buf);
oss->pcm_buf = NULL;
}
}


@@ -2,7 +2,8 @@
#include "qemu-common.h"
#include "audio.h"
#include <pulse/pulseaudio.h>
#include <pulse/simple.h>
#include <pulse/error.h>
#define AUDIO_CAP "pulseaudio"
#include "audio_int.h"
@@ -14,7 +15,7 @@ typedef struct {
int live;
int decr;
int rpos;
pa_stream *stream;
pa_simple *s;
void *pcm_buf;
struct audio_pt pt;
} PAVoiceOut;
@@ -25,23 +26,17 @@ typedef struct {
int dead;
int incr;
int wpos;
pa_stream *stream;
pa_simple *s;
void *pcm_buf;
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
} PAVoiceIn;
typedef struct {
static struct {
int samples;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
} conf = {
.samples = 4096,
};
@@ -56,146 +51,6 @@ static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
AUD_log (AUDIO_CAP, "Reason: %s\n", pa_strerror (err));
}
#ifndef PA_CONTEXT_IS_GOOD
static inline int PA_CONTEXT_IS_GOOD(pa_context_state_t x)
{
return
x == PA_CONTEXT_CONNECTING ||
x == PA_CONTEXT_AUTHORIZING ||
x == PA_CONTEXT_SETTING_NAME ||
x == PA_CONTEXT_READY;
}
#endif
#ifndef PA_STREAM_IS_GOOD
static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
{
return
x == PA_STREAM_CREATING ||
x == PA_STREAM_READY;
}
#endif
#define CHECK_SUCCESS_GOTO(c, rerror, expression, label) \
do { \
if (!(expression)) { \
if (rerror) { \
*(rerror) = pa_context_errno ((c)->context); \
} \
goto label; \
} \
} while (0);
#define CHECK_DEAD_GOTO(c, stream, rerror, label) \
do { \
if (!(c)->context || !PA_CONTEXT_IS_GOOD (pa_context_get_state((c)->context)) || \
!(stream) || !PA_STREAM_IS_GOOD (pa_stream_get_state ((stream)))) { \
if (((c)->context && pa_context_get_state ((c)->context) == PA_CONTEXT_FAILED) || \
((stream) && pa_stream_get_state ((stream)) == PA_STREAM_FAILED)) { \
if (rerror) { \
*(rerror) = pa_context_errno ((c)->context); \
} \
} else { \
if (rerror) { \
*(rerror) = PA_ERR_BADSTATE; \
} \
} \
goto label; \
} \
} while (0);
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
while (length > 0) {
size_t l;
while (!p->read_data) {
int r;
r = pa_stream_peek (p->stream, &p->read_data, &p->read_length);
CHECK_SUCCESS_GOTO (g, rerror, r == 0, unlock_and_fail);
if (!p->read_data) {
pa_threaded_mainloop_wait (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
} else {
p->read_index = 0;
}
}
l = p->read_length < length ? p->read_length : length;
memcpy (data, (const uint8_t *) p->read_data+p->read_index, l);
data = (uint8_t *) data + l;
length -= l;
p->read_index += l;
p->read_length -= l;
if (!p->read_length) {
int r;
r = pa_stream_drop (p->stream);
p->read_data = NULL;
p->read_length = 0;
p->read_index = 0;
CHECK_SUCCESS_GOTO (g, rerror, r == 0, unlock_and_fail);
}
}
pa_threaded_mainloop_unlock (g->mainloop);
return 0;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
return -1;
}
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
while (length > 0) {
size_t l;
int r;
while (!(l = pa_stream_writable_size (p->stream))) {
pa_threaded_mainloop_wait (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
}
CHECK_SUCCESS_GOTO (g, rerror, l != (size_t) -1, unlock_and_fail);
if (l > length) {
l = length;
}
r = pa_stream_write (p->stream, data, l, NULL, 0LL, PA_SEEK_RELATIVE);
CHECK_SUCCESS_GOTO (g, rerror, r >= 0, unlock_and_fail);
data = (const uint8_t *) data + l;
length -= l;
}
pa_threaded_mainloop_unlock (g->mainloop);
return 0;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
return -1;
}
static void *qpa_thread_out (void *arg)
{
PAVoiceOut *pa = arg;
@@ -222,7 +77,7 @@ static void *qpa_thread_out (void *arg)
}
}
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
decr = to_mix = audio_MIN (pa->live, conf.samples >> 2);
rpos = pa->rpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -236,8 +91,8 @@ static void *qpa_thread_out (void *arg)
hw->clip (pa->pcm_buf, src, chunk);
if (qpa_simple_write (pa, pa->pcm_buf,
chunk << hw->info.shift, &error) < 0) {
if (pa_simple_write (pa->s, pa->pcm_buf,
chunk << hw->info.shift, &error) < 0) {
qpa_logerr (error, "pa_simple_write failed\n");
return NULL;
}
@@ -314,7 +169,7 @@ static void *qpa_thread_in (void *arg)
}
}
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
incr = to_grab = audio_MIN (pa->dead, conf.samples >> 2);
wpos = pa->wpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
@@ -326,8 +181,8 @@ static void *qpa_thread_in (void *arg)
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (pa->pcm_buf, wpos);
if (qpa_simple_read (pa, buf,
chunk << hw->info.shift, &error) < 0) {
if (pa_simple_read (pa->s, buf,
chunk << hw->info.shift, &error) < 0) {
qpa_logerr (error, "pa_simple_read failed\n");
return NULL;
}
@@ -428,112 +283,6 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
}
}
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
case PA_CONTEXT_TERMINATED:
case PA_CONTEXT_FAILED:
pa_threaded_mainloop_signal (g->mainloop, 0);
break;
case PA_CONTEXT_UNCONNECTED:
case PA_CONTEXT_CONNECTING:
case PA_CONTEXT_AUTHORIZING:
case PA_CONTEXT_SETTING_NAME:
break;
}
}
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
case PA_STREAM_READY:
case PA_STREAM_FAILED:
case PA_STREAM_TERMINATED:
pa_threaded_mainloop_signal (g->mainloop, 0);
break;
case PA_STREAM_UNCONNECTED:
case PA_STREAM_CREATING:
break;
}
}
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
pa_threaded_mainloop_lock (g->mainloop);
stream = pa_stream_new (g->context, name, ss, map);
if (!stream) {
goto fail;
}
pa_stream_set_state_callback (stream, stream_state_cb, g);
pa_stream_set_read_callback (stream, stream_request_cb, g);
pa_stream_set_write_callback (stream, stream_request_cb, g);
if (dir == PA_STREAM_PLAYBACK) {
r = pa_stream_connect_playback (stream, dev, attr,
PA_STREAM_INTERPOLATE_TIMING
#ifdef PA_STREAM_ADJUST_LATENCY
|PA_STREAM_ADJUST_LATENCY
#endif
|PA_STREAM_AUTO_TIMING_UPDATE, NULL, NULL);
} else {
r = pa_stream_connect_record (stream, dev, attr,
PA_STREAM_INTERPOLATE_TIMING
#ifdef PA_STREAM_ADJUST_LATENCY
|PA_STREAM_ADJUST_LATENCY
#endif
|PA_STREAM_AUTO_TIMING_UPDATE);
}
if (r < 0) {
goto fail;
}
pa_threaded_mainloop_unlock (g->mainloop);
return stream;
fail:
pa_threaded_mainloop_unlock (g->mainloop);
if (stream) {
pa_stream_unref (stream);
}
*rerror = pa_context_errno (g->context);
return NULL;
}
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
@@ -557,24 +306,24 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
glob_paaudio.server,
pa->s = pa_simple_new (
conf.server,
"qemu",
PA_STREAM_PLAYBACK,
glob_paaudio.sink,
conf.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
&error
);
if (!pa->stream) {
if (!pa->s) {
qpa_logerr (error, "pa_simple_new for playback failed\n");
goto fail1;
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = glob_paaudio.samples;
hw->samples = conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
@@ -590,13 +339,11 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
fail3:
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
fail2:
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
pa_simple_free (pa->s);
pa->s = NULL;
fail1:
return -1;
}
@@ -614,24 +361,24 @@ static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
glob_paaudio.server,
pa->s = pa_simple_new (
conf.server,
"qemu",
PA_STREAM_RECORD,
glob_paaudio.source,
conf.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
&error
);
if (!pa->stream) {
if (!pa->s) {
qpa_logerr (error, "pa_simple_new for capture failed\n");
goto fail1;
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = glob_paaudio.samples;
hw->samples = conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
@@ -647,13 +394,11 @@ static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
return 0;
fail3:
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
fail2:
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
pa_simple_free (pa->s);
pa->s = NULL;
fail1:
return -1;
}
@@ -668,13 +413,13 @@ static void qpa_fini_out (HWVoiceOut *hw)
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
if (pa->s) {
pa_simple_free (pa->s);
pa->s = NULL;
}
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -688,225 +433,64 @@ static void qpa_fini_in (HWVoiceIn *hw)
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
if (pa->s) {
pa_simple_free (pa->s);
pa->s = NULL;
}
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
#endif
switch (cmd) {
case VOICE_VOLUME:
{
SWVoiceOut *sw;
va_list ap;
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceOut *);
va_end (ap);
v.channels = 2;
v.values[0] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.l) / UINT32_MAX;
v.values[1] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.r) / UINT32_MAX;
pa_threaded_mainloop_lock (g->mainloop);
op = pa_context_set_sink_input_volume (g->context,
pa_stream_get_index (pa->stream),
&v, NULL, NULL);
if (!op)
qpa_logerr (pa_context_errno (g->context),
"set_sink_input_volume() failed\n");
else
pa_operation_unref (op);
op = pa_context_set_sink_input_mute (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_sink_input_mute() failed\n");
} else {
pa_operation_unref (op);
}
pa_threaded_mainloop_unlock (g->mainloop);
}
}
(void) hw;
(void) cmd;
return 0;
}
static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
#endif
switch (cmd) {
case VOICE_VOLUME:
{
SWVoiceIn *sw;
va_list ap;
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceIn *);
va_end (ap);
v.channels = 2;
v.values[0] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.l) / UINT32_MAX;
v.values[1] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.r) / UINT32_MAX;
pa_threaded_mainloop_lock (g->mainloop);
/* FIXME: use the upcoming "set_source_output_{volume,mute}" */
op = pa_context_set_source_volume_by_index (g->context,
pa_stream_get_device_index (pa->stream),
&v, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_volume() failed\n");
} else {
pa_operation_unref(op);
}
op = pa_context_set_source_mute_by_index (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_mute() failed\n");
} else {
pa_operation_unref (op);
}
pa_threaded_mainloop_unlock (g->mainloop);
}
}
(void) hw;
(void) cmd;
return 0;
}
/* common */
static void *qpa_audio_init (void)
{
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
}
pa_threaded_mainloop_lock (g->mainloop);
if (pa_threaded_mainloop_start (g->mainloop) < 0) {
goto unlock_and_fail;
}
for (;;) {
pa_context_state_t state;
state = pa_context_get_state (g->context);
if (state == PA_CONTEXT_READY) {
break;
}
if (!PA_CONTEXT_IS_GOOD (state)) {
qpa_logerr (pa_context_errno (g->context),
"Wrong context state\n");
goto unlock_and_fail;
}
/* Wait until the context is ready */
pa_threaded_mainloop_wait (g->mainloop);
}
pa_threaded_mainloop_unlock (g->mainloop);
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
return NULL;
return &conf;
}
static void qpa_audio_fini (void *opaque)
{
paaudio *g = opaque;
if (g->mainloop) {
pa_threaded_mainloop_stop (g->mainloop);
}
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g->mainloop = NULL;
(void) opaque;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_paaudio.samples,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.server,
.valp = &conf.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.sink,
.valp = &conf.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.source,
.valp = &conf.source,
.descr = "source device name"
},
{ /* End of list */ }
@@ -937,6 +521,5 @@ struct audio_driver pa_audio_driver = {
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (PAVoiceOut),
.voice_size_in = sizeof (PAVoiceIn),
.ctl_caps = VOICE_VOLUME_CAP
.voice_size_in = sizeof (PAVoiceIn)
};

View File

@@ -202,26 +202,7 @@ static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
}
spice_server_playback_stop (&out->sin);
break;
case VOICE_VOLUME:
{
#if ((SPICE_INTERFACE_PLAYBACK_MAJOR >= 1) && (SPICE_INTERFACE_PLAYBACK_MINOR >= 2))
SWVoiceOut *sw;
va_list ap;
uint16_t vol[2];
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceOut *);
va_end (ap);
vol[0] = sw->vol.l / ((1ULL << 16) + 1);
vol[1] = sw->vol.r / ((1ULL << 16) + 1);
spice_server_playback_set_volume (&out->sin, 2, vol);
spice_server_playback_set_mute (&out->sin, sw->vol.mute);
#endif
break;
}
}
return 0;
}
@@ -323,26 +304,7 @@ static int line_in_ctl (HWVoiceIn *hw, int cmd, ...)
in->active = 0;
spice_server_record_stop (&in->sin);
break;
case VOICE_VOLUME:
{
#if ((SPICE_INTERFACE_RECORD_MAJOR >= 2) && (SPICE_INTERFACE_RECORD_MINOR >= 2))
SWVoiceIn *sw;
va_list ap;
uint16_t vol[2];
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceIn *);
va_end (ap);
vol[0] = sw->vol.l / ((1ULL << 16) + 1);
vol[1] = sw->vol.r / ((1ULL << 16) + 1);
spice_server_record_set_volume (&in->sin, 2, vol);
spice_server_record_set_mute (&in->sin, sw->vol.mute);
#endif
break;
}
}
return 0;
}
@@ -375,9 +337,6 @@ struct audio_driver spice_audio_driver = {
.max_voices_in = 1,
.voice_size_out = sizeof (SpiceVoiceOut),
.voice_size_in = sizeof (SpiceVoiceIn),
#if ((SPICE_INTERFACE_PLAYBACK_MAJOR >= 1) && (SPICE_INTERFACE_PLAYBACK_MINOR >= 2))
.ctl_caps = VOICE_VOLUME_CAP
#endif
};
void qemu_spice_audio_init (void)

View File

@@ -30,7 +30,7 @@
typedef struct WAVVoiceOut {
HWVoiceOut hw;
FILE *f;
QEMUFile *f;
int64_t old_ticks;
void *pcm_buf;
int total_samples;
@@ -76,10 +76,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
dst = advance (wav->pcm_buf, rpos << hw->info.shift);
hw->clip (dst, src, convert_samples);
if (fwrite (dst, convert_samples << hw->info.shift, 1, wav->f) != 1) {
dolog ("wav_run_out: fwrite of %d bytes failed\nReaons: %s\n",
convert_samples << hw->info.shift, strerror (errno));
}
qemu_put_buffer (wav->f, dst, convert_samples << hw->info.shift);
rpos = (rpos + convert_samples) % hw->samples;
samples -= convert_samples;
@@ -155,20 +152,16 @@ static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf.wav_path, "wb");
wav->f = qemu_fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
qemu_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
}
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
dolog ("wav_init_out: failed to write header\nReason: %s\n",
strerror(errno));
return -1;
}
qemu_put_buffer (wav->f, hdr, sizeof (hdr));
return 0;
}
@@ -187,35 +180,16 @@ static void wav_fini_out (HWVoiceOut *hw)
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
dolog ("wav_fini_out: fseek to rlen failed\nReason: %s\n",
strerror(errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
dolog ("wav_fini_out: failed to write rlen\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
dolog ("wav_fini_out: fseek to dlen failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 4, 1, wav->f) != 1) {
dolog ("wav_fini_out: failed to write dlen\nReaons: %s\n",
strerror (errno));
goto doclose;
}
qemu_fseek (wav->f, 4, SEEK_SET);
qemu_put_buffer (wav->f, rlen, 4);
doclose:
if (fclose (wav->f)) {
dolog ("wav_fini_out: fclose %p failed\nReason: %s\n",
wav->f, strerror (errno));
}
qemu_fseek (wav->f, 32, SEEK_CUR);
qemu_put_buffer (wav->f, dlen, 4);
qemu_fclose (wav->f);
wav->f = NULL;
g_free (wav->pcm_buf);
qemu_free (wav->pcm_buf);
wav->pcm_buf = NULL;
}

View File

@@ -3,7 +3,7 @@
#include "audio.h"
typedef struct {
FILE *f;
QEMUFile *f;
int bytes;
char *path;
int freq;
@@ -35,50 +35,27 @@ static void wav_destroy (void *opaque)
uint8_t dlen[4];
uint32_t datalen = wav->bytes;
uint32_t rifflen = datalen + 36;
Monitor *mon = cur_mon;
if (wav->f) {
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
monitor_printf (mon, "wav_destroy: rlen fseek failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
monitor_printf (mon, "wav_destroy: rlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
monitor_printf (mon, "wav_destroy: dlen fseek failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 1, 4, wav->f) != 4) {
monitor_printf (mon, "wav_destroy: dlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
doclose:
if (fclose (wav->f)) {
fprintf (stderr, "wav_destroy: fclose failed: %s",
strerror (errno));
}
qemu_fseek (wav->f, 4, SEEK_SET);
qemu_put_buffer (wav->f, rlen, 4);
qemu_fseek (wav->f, 32, SEEK_CUR);
qemu_put_buffer (wav->f, dlen, 4);
qemu_fclose (wav->f);
}
g_free (wav->path);
qemu_free (wav->path);
}
static void wav_capture (void *opaque, void *buf, int size)
{
WAVState *wav = opaque;
if (fwrite (buf, size, 1, wav->f) != 1) {
monitor_printf (cur_mon, "wav_capture: fwrite error\nReason: %s",
strerror (errno));
}
qemu_put_buffer (wav->f, buf, size);
wav->bytes += size;
}
@@ -94,9 +71,9 @@ static void wav_capture_info (void *opaque)
WAVState *wav = opaque;
char *path = wav->path;
monitor_printf (cur_mon, "Capturing audio(%d,%d,%d) to %s: %d bytes\n",
wav->freq, wav->bits, wav->nchannels,
path ? path : "<not available>", wav->bytes);
monitor_printf(cur_mon, "Capturing audio(%d,%d,%d) to %s: %d bytes\n",
wav->freq, wav->bits, wav->nchannels,
path ? path : "<not available>", wav->bytes);
}
static struct capture_ops wav_capture_ops = {
@@ -121,13 +98,13 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
CaptureVoiceOut *cap;
if (bits != 8 && bits != 16) {
monitor_printf (mon, "incorrect bit count %d, must be 8 or 16\n", bits);
monitor_printf(mon, "incorrect bit count %d, must be 8 or 16\n", bits);
return -1;
}
if (nchannels != 1 && nchannels != 2) {
monitor_printf (mon, "incorrect channel count %d, must be 1 or 2\n",
nchannels);
monitor_printf(mon, "incorrect channel count %d, must be 1 or 2\n",
nchannels);
return -1;
}
@@ -143,7 +120,7 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
ops.capture = wav_capture;
ops.destroy = wav_destroy;
wav = g_malloc0 (sizeof (*wav));
wav = qemu_mallocz (sizeof (*wav));
shift = bits16 + stereo;
hdr[34] = bits16 ? 0x10 : 0x08;
@@ -153,42 +130,32 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
le_store (hdr + 28, freq << shift, 4);
le_store (hdr + 32, 1 << shift, 2);
wav->f = fopen (path, "wb");
wav->f = qemu_fopen (path, "wb");
if (!wav->f) {
monitor_printf (mon, "Failed to open wave file `%s'\nReason: %s\n",
path, strerror (errno));
g_free (wav);
monitor_printf(mon, "Failed to open wave file `%s'\nReason: %s\n",
path, strerror (errno));
qemu_free (wav);
return -1;
}
wav->path = g_strdup (path);
wav->path = qemu_strdup (path);
wav->bits = bits;
wav->nchannels = nchannels;
wav->freq = freq;
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
monitor_printf (mon, "Failed to write header\nReason: %s\n",
strerror (errno));
goto error_free;
}
qemu_put_buffer (wav->f, hdr, sizeof (hdr));
cap = AUD_add_capture (&as, &ops, wav);
if (!cap) {
monitor_printf (mon, "Failed to add audio capture\n");
goto error_free;
monitor_printf(mon, "Failed to add audio capture\n");
qemu_free (wav->path);
qemu_fclose (wav->f);
qemu_free (wav);
return -1;
}
wav->cap = cap;
s->opaque = wav;
s->ops = wav_capture_ops;
return 0;
error_free:
g_free (wav->path);
if (fclose (wav->f)) {
monitor_printf (mon, "Failed to close wave file\nReason: %s\n",
strerror (errno));
}
g_free (wav);
return -1;
}

View File

@@ -72,7 +72,7 @@ static void winwave_log_mmresult (MMRESULT mr)
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
str = "Unable to allocate or locl memory";
break;
case WAVERR_SYNC:
@@ -222,9 +222,9 @@ static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
err4:
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
qemu_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
@@ -310,10 +310,10 @@ static void winwave_fini_out (HWVoiceOut *hw)
wave->event = NULL;
}
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
qemu_free (wave->hdrs);
wave->hdrs = NULL;
}
@@ -349,15 +349,21 @@ static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
else {
hw->poll_mode = 0;
}
wave->paused = 0;
if (wave->paused) {
mr = waveOutRestart (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutRestart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
mr = waveOutPause (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
winwave_logerr (mr, "waveOutPause");
}
else {
wave->paused = 1;
@@ -505,9 +511,9 @@ static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
return 0;
err4:
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
qemu_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
@@ -544,10 +550,10 @@ static void winwave_fini_in (HWVoiceIn *hw)
wave->event = NULL;
}
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
qemu_free (wave->hdrs);
wave->hdrs = NULL;
}

balloon.c
View File

@@ -25,12 +25,12 @@
*/
#include "monitor.h"
#include "qjson.h"
#include "qint.h"
#include "cpu-common.h"
#include "kvm.h"
#include "balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
@@ -52,16 +52,6 @@ int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
return 0;
}
void qemu_remove_balloon_handler(void *opaque)
{
if (balloon_opaque != opaque) {
return;
}
balloon_event_fn = NULL;
balloon_stat_fn = NULL;
balloon_opaque = NULL;
}
static int qemu_balloon(ram_addr_t target)
{
if (!balloon_event_fn) {
@@ -72,61 +62,103 @@ static int qemu_balloon(ram_addr_t target)
return 1;
}
static int qemu_balloon_status(BalloonInfo *info)
static int qemu_balloon_status(MonitorCompletion cb, void *opaque)
{
if (!balloon_stat_fn) {
return 0;
}
balloon_stat_fn(balloon_opaque, info);
balloon_stat_fn(balloon_opaque, cb, opaque);
return 1;
}
void qemu_balloon_changed(int64_t actual)
static void print_balloon_stat(const char *key, QObject *obj, void *opaque)
{
QObject *data;
Monitor *mon = opaque;
data = qobject_from_jsonf("{ 'actual': %" PRId64 " }",
actual);
monitor_protocol_event(QEVENT_BALLOON_CHANGE, data);
qobject_decref(data);
if (strcmp(key, "actual")) {
monitor_printf(mon, ",%s=%" PRId64, key,
qint_get_int(qobject_to_qint(obj)));
}
}
BalloonInfo *qmp_query_balloon(Error **errp)
void monitor_print_balloon(Monitor *mon, const QObject *data)
{
BalloonInfo *info;
QDict *qdict;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return NULL;
}
info = g_malloc0(sizeof(*info));
if (qemu_balloon_status(info) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
qapi_free_BalloonInfo(info);
return NULL;
}
return info;
}
void qmp_balloon(int64_t value, Error **errp)
{
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
qdict = qobject_to_qdict(data);
if (!qdict_haskey(qdict, "actual")) {
return;
}
if (value <= 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}
if (qemu_balloon(value) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
}
monitor_printf(mon, "balloon: actual=%" PRId64,
qdict_get_int(qdict, "actual") >> 20);
qdict_iter(qdict, print_balloon_stat, mon);
monitor_printf(mon, "\n");
}
/**
* do_info_balloon(): Balloon information
*
* Make an asynchronous request for balloon info. When the request completes
* a QDict will be returned according to the following specification:
*
* - "actual": current balloon value in bytes
* The following fields may or may not be present:
* - "mem_swapped_in": Amount of memory swapped in (bytes)
* - "mem_swapped_out": Amount of memory swapped out (bytes)
* - "major_page_faults": Number of major faults
* - "minor_page_faults": Number of minor faults
* - "free_mem": Total amount of free and unused memory (bytes)
* - "total_mem": Total amount of available memory (bytes)
*
* Example:
*
* { "actual": 1073741824, "mem_swapped_in": 0, "mem_swapped_out": 0,
* "major_page_faults": 142, "minor_page_faults": 239245,
* "free_mem": 1014185984, "total_mem": 1044668416 }
*/
int do_info_balloon(Monitor *mon, MonitorCompletion cb, void *opaque)
{
int ret;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
qerror_report(QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return -1;
}
ret = qemu_balloon_status(cb, opaque);
if (!ret) {
qerror_report(QERR_DEVICE_NOT_ACTIVE, "balloon");
return -1;
}
return 0;
}
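For illustration only: a minimal sketch of a device-side status handler that could feed the asynchronous request described above. It follows the QEMUBalloonStatus signature from balloon.h further down in this diff; the function name and the numbers are invented, and reference-count handling is simplified.
#include "monitor.h"
#include "qdict.h"
#include "qint.h"
/* Hypothetical handler; the values are placeholders. */
static void example_balloon_stat(void *opaque, MonitorCompletion cb,
                                 void *cb_data)
{
    QDict *stats = qdict_new();
    /* "actual" is the only key do_info_balloon() strictly needs */
    qdict_put(stats, "actual", qint_from_int(1073741824));
    qdict_put(stats, "free_mem", qint_from_int(1014185984));
    /* hand the result back to the monitor layer */
    cb(cb_data, QOBJECT(stats));
    qobject_decref(QOBJECT(stats));
}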
/**
* do_balloon(): Request VM to change its memory allocation
*/
int do_balloon(Monitor *mon, const QDict *params,
MonitorCompletion cb, void *opaque)
{
int64_t target;
int ret;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
qerror_report(QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return -1;
}
target = qdict_get_int(params, "value");
if (target <= 0) {
qerror_report(QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return -1;
}
ret = qemu_balloon(target);
if (ret == 0) {
qerror_report(QERR_DEVICE_NOT_ACTIVE, "balloon");
return -1;
}
cb(opaque, NULL);
return 0;
}

View File

@@ -15,15 +15,17 @@
#define _QEMU_BALLOON_H
#include "monitor.h"
#include "qapi-types.h"
typedef void (QEMUBalloonEvent)(void *opaque, ram_addr_t target);
typedef void (QEMUBalloonStatus)(void *opaque, BalloonInfo *info);
typedef void (QEMUBalloonStatus)(void *opaque, MonitorCompletion cb,
void *cb_data);
int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque);
void qemu_remove_balloon_handler(void *opaque);
void qemu_balloon_changed(int64_t actual);
void monitor_print_balloon(Monitor *mon, const QObject *data);
int do_info_balloon(Monitor *mon, MonitorCompletion cb, void *opaque);
int do_balloon(Monitor *mon, const QDict *params,
MonitorCompletion cb, void *opaque);
#endif

View File

@@ -91,7 +91,7 @@ int slow_bitmap_intersects(const unsigned long *bitmap1,
static inline unsigned long *bitmap_new(int nbits)
{
int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
return g_malloc0(len);
return qemu_mallocz(len);
}
static inline void bitmap_zero(unsigned long *dst, int nbits)

bitops.h
View File

@@ -114,10 +114,10 @@ static inline unsigned long ffz(unsigned long word)
* @nr: the bit to set
* @addr: the address to start counting from
*/
static inline void set_bit(int nr, unsigned long *addr)
static inline void set_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
*p |= mask;
}
@@ -127,10 +127,10 @@ static inline void set_bit(int nr, unsigned long *addr)
* @nr: Bit to clear
* @addr: Address to start counting from
*/
static inline void clear_bit(int nr, unsigned long *addr)
static inline void clear_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
*p &= ~mask;
}
@@ -140,10 +140,10 @@ static inline void clear_bit(int nr, unsigned long *addr)
* @nr: Bit to change
* @addr: Address to start counting from
*/
static inline void change_bit(int nr, unsigned long *addr)
static inline void change_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
*p ^= mask;
}
@@ -153,10 +153,10 @@ static inline void change_bit(int nr, unsigned long *addr)
* @nr: Bit to set
* @addr: Address to count from
*/
static inline int test_and_set_bit(int nr, unsigned long *addr)
static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old = *p;
*p = old | mask;
@@ -168,10 +168,10 @@ static inline int test_and_set_bit(int nr, unsigned long *addr)
* @nr: Bit to clear
* @addr: Address to count from
*/
static inline int test_and_clear_bit(int nr, unsigned long *addr)
static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old = *p;
*p = old & ~mask;
@@ -183,10 +183,10 @@ static inline int test_and_clear_bit(int nr, unsigned long *addr)
* @nr: Bit to change
* @addr: Address to count from
*/
static inline int test_and_change_bit(int nr, unsigned long *addr)
static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
unsigned long old = *p;
*p = old ^ mask;
@@ -198,7 +198,7 @@ static inline int test_and_change_bit(int nr, unsigned long *addr)
* @nr: bit number to test
* @addr: Address to start counting from
*/
static inline int test_bit(int nr, const unsigned long *addr)
static inline int test_bit(int nr, const volatile unsigned long *addr)
{
return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
}
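A quick, self-contained sketch (illustrative only, not part of this header) of how the bit helpers above are typically used:
static inline void bit_helpers_example(void)
{
    unsigned long map[1] = { 0 };   /* one word of bitmap is enough here */
    set_bit(5, map);                /* mark bit 5 */
    if (test_and_clear_bit(5, map)) {
        /* the bit was set; it has now been cleared again */
    }
}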
@@ -269,94 +269,4 @@ static inline unsigned long hweight_long(unsigned long w)
return count;
}
/**
* extract32:
* @value: the value to extract the bit field from
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
*
* Extract from the 32 bit input @value the bit field specified by the
* @start and @length parameters, and return it. The bit field must
* lie entirely within the 32 bit word. It is valid to request that
* all 32 bits are returned (ie @length 32 and @start 0).
*
* Returns: the value of the bit field extracted from the input value.
*/
static inline uint32_t extract32(uint32_t value, int start, int length)
{
assert(start >= 0 && length > 0 && length <= 32 - start);
return (value >> start) & (~0U >> (32 - length));
}
/**
* extract64:
* @value: the value to extract the bit field from
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
*
* Extract from the 64 bit input @value the bit field specified by the
* @start and @length parameters, and return it. The bit field must
* lie entirely within the 64 bit word. It is valid to request that
* all 64 bits are returned (ie @length 64 and @start 0).
*
* Returns: the value of the bit field extracted from the input value.
*/
static inline uint64_t extract64(uint64_t value, int start, int length)
{
assert(start >= 0 && length > 0 && length <= 64 - start);
return (value >> start) & (~0ULL >> (64 - length));
}
/**
* deposit32:
* @value: initial value to insert bit field into
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
* @fieldval: the value to insert into the bit field
*
* Deposit @fieldval into the 32 bit @value at the bit field specified
* by the @start and @length parameters, and return the modified
* @value. Bits of @value outside the bit field are not modified.
* Bits of @fieldval above the least significant @length bits are
* ignored. The bit field must lie entirely within the 32 bit word.
* It is valid to request that all 32 bits are modified (ie @length
* 32 and @start 0).
*
* Returns: the modified @value.
*/
static inline uint32_t deposit32(uint32_t value, int start, int length,
uint32_t fieldval)
{
uint32_t mask;
assert(start >= 0 && length > 0 && length <= 32 - start);
mask = (~0U >> (32 - length)) << start;
return (value & ~mask) | ((fieldval << start) & mask);
}
/**
* deposit64:
* @value: initial value to insert bit field into
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
* @fieldval: the value to insert into the bit field
*
* Deposit @fieldval into the 64 bit @value at the bit field specified
* by the @start and @length parameters, and return the modified
* @value. Bits of @value outside the bit field are not modified.
* Bits of @fieldval above the least significant @length bits are
* ignored. The bit field must lie entirely within the 64 bit word.
* It is valid to request that all 64 bits are modified (ie @length
* 64 and @start 0).
*
* Returns: the modified @value.
*/
static inline uint64_t deposit64(uint64_t value, int start, int length,
uint64_t fieldval)
{
uint64_t mask;
assert(start >= 0 && length > 0 && length <= 64 - start);
mask = (~0ULL >> (64 - length)) << start;
return (value & ~mask) | ((fieldval << start) & mask);
}
#endif
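To make the extract/deposit semantics documented above concrete, two worked calls (values chosen arbitrarily):
static inline void bitfield_example(void)
{
    uint32_t v = 0x12345678;
    /* bits 4..11 of v: (0x12345678 >> 4) & 0xff == 0x67 */
    assert(extract32(v, 4, 8) == 0x67);
    /* replace those same eight bits with 0xab */
    assert(deposit32(v, 4, 8, 0xab) == 0x12345ab8);
}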

View File

@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
@@ -18,6 +16,7 @@
#include "hw/hw.h"
#include "qemu-queue.h"
#include "qemu-timer.h"
#include "monitor.h"
#include "block-migration.h"
#include "migration.h"
#include "blockdev.h"
@@ -181,7 +180,7 @@ static void alloc_aio_bitmap(BlkMigDevState *bmds)
BDRV_SECTORS_PER_DIRTY_CHUNK * 8 - 1;
bitmap_size /= BDRV_SECTORS_PER_DIRTY_CHUNK * 8;
bmds->aio_bitmap = g_malloc0(bitmap_size);
bmds->aio_bitmap = qemu_mallocz(bitmap_size);
}
static void blk_mig_read_cb(void *opaque, int ret)
@@ -203,7 +202,8 @@ static void blk_mig_read_cb(void *opaque, int ret)
assert(block_mig_state.submitted >= 0);
}
static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
static int mig_save_device_bulk(Monitor *mon, QEMUFile *f,
BlkMigDevState *bmds)
{
int64_t total_sectors = bmds->total_sectors;
int64_t cur_sector = bmds->cur_sector;
@@ -235,8 +235,8 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
nr_sectors = total_sectors - cur_sector;
}
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk = qemu_malloc(sizeof(BlkMigBlock));
blk->buf = qemu_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = cur_sector;
blk->nr_sectors = nr_sectors;
@@ -251,12 +251,22 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
if (!blk->aiocb) {
goto error;
}
block_mig_state.submitted++;
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
bmds->cur_sector = cur_sector + nr_sectors;
return (bmds->cur_sector >= total_sectors);
error:
monitor_printf(mon, "Error reading sector %" PRId64 "\n", cur_sector);
qemu_file_set_error(f);
qemu_free(blk->buf);
qemu_free(blk);
return 0;
}
static void set_dirty_tracking(int enable)
@@ -270,6 +280,7 @@ static void set_dirty_tracking(int enable)
static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
{
Monitor *mon = opaque;
BlkMigDevState *bmds;
int64_t sectors;
@@ -279,7 +290,7 @@ static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
return;
}
bmds = g_malloc0(sizeof(BlkMigDevState));
bmds = qemu_mallocz(sizeof(BlkMigDevState));
bmds->bs = bs;
bmds->bulk_completed = 0;
bmds->total_sectors = sectors;
@@ -292,17 +303,19 @@ static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
block_mig_state.total_sector_sum += sectors;
if (bmds->shared_base) {
DPRINTF("Start migration for %s with shared base image\n",
bs->device_name);
monitor_printf(mon, "Start migration for %s with shared base "
"image\n",
bs->device_name);
} else {
DPRINTF("Start full migration for %s\n", bs->device_name);
monitor_printf(mon, "Start full migration for %s\n",
bs->device_name);
}
QSIMPLEQ_INSERT_TAIL(&block_mig_state.bmds_list, bmds, entry);
}
}
static void init_blk_migration(QEMUFile *f)
static void init_blk_migration(Monitor *mon, QEMUFile *f)
{
block_mig_state.submitted = 0;
block_mig_state.read_done = 0;
@@ -313,10 +326,10 @@ static void init_blk_migration(QEMUFile *f)
block_mig_state.total_time = 0;
block_mig_state.reads = 0;
bdrv_iterate(init_blk_migration_it, NULL);
bdrv_iterate(init_blk_migration_it, mon);
}
static int blk_mig_save_bulked_block(QEMUFile *f)
static int blk_mig_save_bulked_block(Monitor *mon, QEMUFile *f)
{
int64_t completed_sector_sum = 0;
BlkMigDevState *bmds;
@@ -325,7 +338,7 @@ static int blk_mig_save_bulked_block(QEMUFile *f)
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->bulk_completed == 0) {
if (mig_save_device_bulk(f, bmds) == 1) {
if (mig_save_device_bulk(mon, f, bmds) == 1) {
/* completed bulk section for this device */
bmds->bulk_completed = 1;
}
@@ -347,7 +360,8 @@ static int blk_mig_save_bulked_block(QEMUFile *f)
block_mig_state.prev_progress = progress;
qemu_put_be64(f, (progress << BDRV_SECTOR_BITS)
| BLK_MIG_FLAG_PROGRESS);
DPRINTF("Completed %d %%\r", progress);
monitor_printf(mon, "Completed %d %%\r", progress);
monitor_flush(mon);
}
return ret;
@@ -362,18 +376,17 @@ static void blk_mig_reset_dirty_cursor(void)
}
}
static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int is_async)
static int mig_save_device_dirty(Monitor *mon, QEMUFile *f,
BlkMigDevState *bmds, int is_async)
{
BlkMigBlock *blk;
int64_t total_sectors = bmds->total_sectors;
int64_t sector;
int nr_sectors;
int ret = -EIO;
for (sector = bmds->cur_dirty; sector < bmds->total_sectors;) {
if (bmds_aio_inflight(bmds, sector)) {
bdrv_drain_all();
qemu_aio_flush();
}
if (bdrv_get_dirty(bmds->bs, sector)) {
@@ -382,8 +395,8 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
} else {
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
blk = g_malloc(sizeof(BlkMigBlock));
blk->buf = g_malloc(BLOCK_SIZE);
blk = qemu_malloc(sizeof(BlkMigBlock));
blk->buf = qemu_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = sector;
blk->nr_sectors = nr_sectors;
@@ -399,17 +412,20 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
blk->aiocb = bdrv_aio_readv(bmds->bs, sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
if (!blk->aiocb) {
goto error;
}
block_mig_state.submitted++;
bmds_set_aio_inflight(bmds, sector, nr_sectors, 1);
} else {
ret = bdrv_read(bmds->bs, sector, blk->buf, nr_sectors);
if (ret < 0) {
if (bdrv_read(bmds->bs, sector, blk->buf,
nr_sectors) < 0) {
goto error;
}
blk_send(f, blk);
g_free(blk->buf);
g_free(blk);
qemu_free(blk->buf);
qemu_free(blk);
}
bdrv_reset_dirty(bmds->bs, sector, nr_sectors);
@@ -422,20 +438,20 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
return (bmds->cur_dirty >= bmds->total_sectors);
error:
DPRINTF("Error reading sector %" PRId64 "\n", sector);
qemu_file_set_error(f, ret);
g_free(blk->buf);
g_free(blk);
monitor_printf(mon, "Error reading sector %" PRId64 "\n", sector);
qemu_file_set_error(f);
qemu_free(blk->buf);
qemu_free(blk);
return 0;
}
static int blk_mig_save_dirty_block(QEMUFile *f, int is_async)
static int blk_mig_save_dirty_block(Monitor *mon, QEMUFile *f, int is_async)
{
BlkMigDevState *bmds;
int ret = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (mig_save_device_dirty(f, bmds, is_async) == 0) {
if (mig_save_device_dirty(mon, f, bmds, is_async) == 0) {
ret = 1;
break;
}
@@ -457,14 +473,14 @@ static void flush_blks(QEMUFile* f)
break;
}
if (blk->ret < 0) {
qemu_file_set_error(f, blk->ret);
qemu_file_set_error(f);
break;
}
blk_send(f, blk);
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
g_free(blk);
qemu_free(blk->buf);
qemu_free(blk);
block_mig_state.read_done--;
block_mig_state.transferred++;
@@ -504,7 +520,7 @@ static int is_stage2_completed(void)
if ((remaining_dirty / bwidth) <=
migrate_max_downtime()) {
/* finish stage2 because we think that we can finish remaining work
/* finish stage2 because we think that we can finish remaing work
below max_downtime */
return 1;
@@ -514,7 +530,7 @@ static int is_stage2_completed(void)
return 0;
}
static void blk_mig_cleanup(void)
static void blk_mig_cleanup(Monitor *mon)
{
BlkMigDevState *bmds;
BlkMigBlock *blk;
@@ -525,136 +541,99 @@ static void blk_mig_cleanup(void)
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_set_in_use(bmds->bs, 0);
drive_put_ref(drive_get_by_blockdev(bmds->bs));
g_free(bmds->aio_bitmap);
g_free(bmds);
qemu_free(bmds->aio_bitmap);
qemu_free(bmds);
}
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
g_free(blk->buf);
g_free(blk);
qemu_free(blk->buf);
qemu_free(blk);
}
monitor_printf(mon, "\n");
}
static void block_migration_cancel(void *opaque)
static int block_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
{
blk_mig_cleanup();
}
DPRINTF("Enter save live stage %d submitted %d transferred %d\n",
stage, block_mig_state.submitted, block_mig_state.transferred);
static int block_save_setup(QEMUFile *f, void *opaque)
{
int ret;
if (stage < 0) {
blk_mig_cleanup(mon);
return 0;
}
DPRINTF("Enter save live setup submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
if (block_mig_state.blk_enable != 1) {
/* no need to migrate storage */
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 1;
}
init_blk_migration(f);
if (stage == 1) {
init_blk_migration(mon, f);
/* start track dirty blocks */
set_dirty_tracking(1);
/* start track dirty blocks */
set_dirty_tracking(1);
}
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
if (qemu_file_has_error(f)) {
blk_mig_cleanup(mon);
return 0;
}
blk_mig_reset_dirty_cursor();
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 0;
}
static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
DPRINTF("Enter save live iterate submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
blk_mig_reset_dirty_cursor();
/* control the rate of transfer */
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
} else {
if (blk_mig_save_dirty_block(f, 1) == 0) {
/* no more dirty blocks */
break;
if (stage == 2) {
/* control the rate of transfer */
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(mon, f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
} else {
if (blk_mig_save_dirty_block(mon, f, 1) == 0) {
/* no more dirty blocks */
break;
}
}
}
flush_blks(f);
if (qemu_file_has_error(f)) {
blk_mig_cleanup(mon);
return 0;
}
}
flush_blks(f);
if (stage == 3) {
/* we know for sure that save bulk is completed and
all async read completed */
assert(block_mig_state.submitted == 0);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
while (blk_mig_save_dirty_block(mon, f, 0) != 0);
blk_mig_cleanup(mon);
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
if (qemu_file_has_error(f)) {
return 0;
}
monitor_printf(mon, "Block migration completed\n");
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return is_stage2_completed();
}
static int block_save_complete(QEMUFile *f, void *opaque)
{
int ret;
DPRINTF("Enter save live complete submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
flush_blks(f);
ret = qemu_file_get_error(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
blk_mig_reset_dirty_cursor();
/* we know for sure that save bulk is completed and
all async read completed */
assert(block_mig_state.submitted == 0);
while (blk_mig_save_dirty_block(f, 0) != 0) {
/* Do nothing */
}
blk_mig_cleanup();
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
ret = qemu_file_get_error(f);
if (ret) {
return ret;
}
DPRINTF("Block migration completed\n");
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 0;
return ((stage == 2) && is_stage2_completed());
}
static int block_load(QEMUFile *f, void *opaque, int version_id)
@@ -667,7 +646,6 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
uint8_t *buf;
int64_t total_sectors = 0;
int nr_sectors;
int ret;
do {
addr = qemu_get_be64(f);
@@ -676,6 +654,7 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
addr >>= BDRV_SECTOR_BITS;
if (flags & BLK_MIG_FLAG_DEVICE_BLOCK) {
int ret;
/* get device name */
len = qemu_get_byte(f);
qemu_get_buffer(f, (uint8_t *)device_name, len);
@@ -704,12 +683,12 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
buf = g_malloc(BLOCK_SIZE);
buf = qemu_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
qemu_free(buf);
if (ret < 0) {
return ret;
}
@@ -725,44 +704,28 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
fprintf(stderr, "Unknown flags\n");
return -EINVAL;
}
ret = qemu_file_get_error(f);
if (ret != 0) {
return ret;
if (qemu_file_has_error(f)) {
return -EIO;
}
} while (!(flags & BLK_MIG_FLAG_EOS));
return 0;
}
static void block_set_params(const MigrationParams *params, void *opaque)
static void block_set_params(int blk_enable, int shared_base, void *opaque)
{
block_mig_state.blk_enable = params->blk;
block_mig_state.shared_base = params->shared;
block_mig_state.blk_enable = blk_enable;
block_mig_state.shared_base = shared_base;
/* shared base means that blk_enable = 1 */
block_mig_state.blk_enable |= params->shared;
block_mig_state.blk_enable |= shared_base;
}
static bool block_is_active(void *opaque)
{
return block_mig_state.blk_enable == 1;
}
SaveVMHandlers savevm_block_handlers = {
.set_params = block_set_params,
.save_live_setup = block_save_setup,
.save_live_iterate = block_save_iterate,
.save_live_complete = block_save_complete,
.load_state = block_load,
.cancel = block_migration_cancel,
.is_active = block_is_active,
};
void blk_mig_init(void)
{
QSIMPLEQ_INIT(&block_mig_state.bmds_list);
QSIMPLEQ_INIT(&block_mig_state.blk_list);
register_savevm_live(NULL, "block", 0, 1, &savevm_block_handlers,
&block_mig_state);
register_savevm_live(NULL, "block", 0, 1, block_set_params,
block_save_live, NULL, block_load, &block_mig_state);
}

block.c

File diff suppressed because it is too large.

block.h
View File

@@ -4,7 +4,6 @@
#include "qemu-aio.h"
#include "qemu-common.h"
#include "qemu-option.h"
#include "qemu-coroutine.h"
#include "qobject.h"
/* block.c */
@@ -15,61 +14,19 @@ typedef struct BlockDriverInfo {
int cluster_size;
/* offset at which the VM state can be saved (0 if not possible) */
int64_t vm_state_offset;
bool is_dirty;
} BlockDriverInfo;
typedef struct BlockFragInfo {
uint64_t allocated_clusters;
uint64_t total_clusters;
uint64_t fragmented_clusters;
} BlockFragInfo;
typedef struct QEMUSnapshotInfo {
char id_str[128]; /* unique snapshot id */
/* the following fields are informative. They are not needed for
the consistency of the snapshot */
char name[256]; /* user chosen name */
uint64_t vm_state_size; /* VM state info size */
char name[256]; /* user choosen name */
uint32_t vm_state_size; /* VM state info size */
uint32_t date_sec; /* UTC date of the snapshot */
uint32_t date_nsec;
uint64_t vm_clock_nsec; /* VM clock relative to boot */
} QEMUSnapshotInfo;
/* Callbacks for block device models */
typedef struct BlockDevOps {
/*
* Runs when virtual media changed (monitor commands eject, change)
* Argument load is true on load and false on eject.
* Beware: doesn't run when a host device's physical media
* changes. Sure would be useful if it did.
* Device models with removable media must implement this callback.
*/
void (*change_media_cb)(void *opaque, bool load);
/*
* Runs when an eject request is issued from the monitor, the tray
* is closed, and the medium is locked.
* Device models that do not implement is_medium_locked will not need
* this callback. Device models that can lock the medium or tray might
* want to implement the callback and unlock the tray when "force" is
* true, even if they do not support eject requests.
*/
void (*eject_request_cb)(void *opaque, bool force);
/*
* Is the virtual tray open?
* Device models implement this only when the device has a tray.
*/
bool (*is_tray_open)(void *opaque);
/*
* Is the virtual medium locked into the device?
* Device models implement this only when device has such a lock.
*/
bool (*is_medium_locked)(void *opaque);
/*
* Runs when the size changed (e.g. monitor command block_resize)
*/
void (*resize_cb)(void *opaque);
} BlockDevOps;
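As a rough sketch (names invented for illustration), a device model would implement the callbacks described above and register them with bdrv_set_dev_ops(), declared later in this header:
static void mydev_change_media_cb(void *opaque, bool load)
{
    /* react to the medium being loaded (load == true) or ejected */
}
static bool mydev_is_tray_open(void *opaque)
{
    return false;   /* this imaginary device has no tray */
}
static const BlockDevOps mydev_block_ops = {
    .change_media_cb = mydev_change_media_cb,
    .is_tray_open    = mydev_is_tray_open,
};
/* during device init: bdrv_set_dev_ops(bs, &mydev_block_ops, mydev_state); */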
#define BDRV_O_RDWR 0x0002
#define BDRV_O_SNAPSHOT 0x0008 /* open the file read only and save writes in a snapshot */
#define BDRV_O_NOCACHE 0x0020 /* do not use the host page cache */
@@ -77,10 +34,6 @@ typedef struct BlockDevOps {
#define BDRV_O_NATIVE_AIO 0x0080 /* use native AIO instead of the thread pool */
#define BDRV_O_NO_BACKING 0x0100 /* don't open the backing file */
#define BDRV_O_NO_FLUSH 0x0200 /* disable flushing on this disk */
#define BDRV_O_COPY_ON_READ 0x0400 /* copy read backing sectors into image */
#define BDRV_O_INCOMING 0x0800 /* consistency hint for incoming migration */
#define BDRV_O_CHECK 0x1000 /* open solely for consistency check */
#define BDRV_O_ALLOW_RDWR 0x2000 /* allow reopen to change from r/o to r/w */
#define BDRV_O_CACHE_MASK (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)
@@ -95,25 +48,15 @@ typedef enum {
typedef enum {
BDRV_ACTION_REPORT, BDRV_ACTION_IGNORE, BDRV_ACTION_STOP
} BlockQMPEventAction;
} BlockMonEventAction;
void bdrv_iostatus_enable(BlockDriverState *bs);
void bdrv_iostatus_reset(BlockDriverState *bs);
void bdrv_iostatus_disable(BlockDriverState *bs);
bool bdrv_iostatus_is_enabled(const BlockDriverState *bs);
void bdrv_iostatus_set_err(BlockDriverState *bs, int error);
void bdrv_emit_qmp_error_event(const BlockDriverState *bdrv,
BlockQMPEventAction action, int is_read);
void bdrv_mon_event(const BlockDriverState *bdrv,
BlockMonEventAction action, int is_read);
void bdrv_info_print(Monitor *mon, const QObject *data);
void bdrv_info(Monitor *mon, QObject **ret_data);
void bdrv_stats_print(Monitor *mon, const QObject *data);
void bdrv_info_stats(Monitor *mon, QObject **ret_data);
/* disk I/O throttling */
void bdrv_io_limits_enable(BlockDriverState *bs);
void bdrv_io_limits_disable(BlockDriverState *bs);
bool bdrv_io_limits_enabled(BlockDriverState *bs);
void bdrv_init(void);
void bdrv_init_with_whitelist(void);
BlockDriver *bdrv_find_protocol(const char *filename);
@@ -124,28 +67,16 @@ int bdrv_create(BlockDriver *drv, const char* filename,
int bdrv_create_file(const char* filename, QEMUOptionParameter *options);
BlockDriverState *bdrv_new(const char *device_name);
void bdrv_make_anon(BlockDriverState *bs);
void bdrv_swap(BlockDriverState *bs_new, BlockDriverState *bs_old);
void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top);
void bdrv_delete(BlockDriverState *bs);
int bdrv_parse_cache_flags(const char *mode, int *flags);
int bdrv_file_open(BlockDriverState **pbs, const char *filename, int flags);
int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
BlockDriver *drv);
void bdrv_close(BlockDriverState *bs);
int bdrv_attach_dev(BlockDriverState *bs, void *dev);
void bdrv_attach_dev_nofail(BlockDriverState *bs, void *dev);
void bdrv_detach_dev(BlockDriverState *bs, void *dev);
void *bdrv_get_attached_dev(BlockDriverState *bs);
void bdrv_set_dev_ops(BlockDriverState *bs, const BlockDevOps *ops,
void *opaque);
void bdrv_dev_eject_request(BlockDriverState *bs, bool force);
bool bdrv_dev_has_removable_media(BlockDriverState *bs);
bool bdrv_dev_is_tray_open(BlockDriverState *bs);
bool bdrv_dev_is_medium_locked(BlockDriverState *bs);
int bdrv_attach(BlockDriverState *bs, DeviceState *qdev);
void bdrv_detach(BlockDriverState *bs, DeviceState *qdev);
DeviceState *bdrv_get_attached(BlockDriverState *bs);
int bdrv_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_read_unthrottled(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_pread(BlockDriverState *bs, int64_t offset,
@@ -154,35 +85,15 @@ int bdrv_pwrite(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int bdrv_pwrite_sync(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int coroutine_fn bdrv_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int coroutine_fn bdrv_co_copy_on_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
int coroutine_fn bdrv_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
/*
* Efficiently zero a region of the disk image. Note that this is a regular
* I/O request like read or write and should have a reasonable size. This
* function is not suitable for zeroing the entire image in a single request
* because it may allocate memory for the entire region.
*/
int coroutine_fn bdrv_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
int nb_sectors);
int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *top,
BlockDriverState *base,
int64_t sector_num,
int nb_sectors, int *pnum);
BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
const char *backing_file);
int bdrv_get_backing_file_depth(BlockDriverState *bs);
int bdrv_write_sync(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_truncate(BlockDriverState *bs, int64_t offset);
int64_t bdrv_getlength(BlockDriverState *bs);
int64_t bdrv_get_allocated_file_size(BlockDriverState *bs);
void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
void bdrv_guess_geometry(BlockDriverState *bs, int *pcyls, int *pheads, int *psecs);
int bdrv_commit(BlockDriverState *bs);
int bdrv_commit_all(void);
void bdrv_commit_all(void);
int bdrv_change_backing_file(BlockDriverState *bs,
const char *backing_file, const char *backing_fmt);
void bdrv_register(BlockDriver *bdrv);
@@ -192,19 +103,13 @@ typedef struct BdrvCheckResult {
int corruptions;
int leaks;
int check_errors;
int corruptions_fixed;
int leaks_fixed;
BlockFragInfo bfi;
} BdrvCheckResult;
typedef enum {
BDRV_FIX_LEAKS = 1,
BDRV_FIX_ERRORS = 2,
} BdrvCheckMode;
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix);
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res);
/* async block I/O */
typedef struct BlockDriverAIOCB BlockDriverAIOCB;
typedef void BlockDriverCompletionFunc(void *opaque, int ret);
typedef void BlockDriverDirtyHandler(BlockDriverState *bs, int64_t sector,
int sector_num);
BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
@@ -215,9 +120,6 @@ BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *bdrv_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *bdrv_aio_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque);
void bdrv_aio_cancel(BlockDriverAIOCB *acb);
typedef struct BlockRequest {
@@ -241,37 +143,55 @@ BlockDriverAIOCB *bdrv_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb, void *opaque);
/* Invalidate any cached metadata used by image formats */
void bdrv_invalidate_cache(BlockDriverState *bs);
void bdrv_invalidate_cache_all(void);
void bdrv_clear_incoming_migration_all(void);
/* Ensure contents are flushed to disk. */
int bdrv_flush(BlockDriverState *bs);
int coroutine_fn bdrv_co_flush(BlockDriverState *bs);
void bdrv_flush_all(void);
void bdrv_close_all(void);
void bdrv_drain_all(void);
int bdrv_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
int bdrv_co_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
int bdrv_has_zero_init(BlockDriverState *bs);
int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
int *pnum);
#define BIOS_ATA_TRANSLATION_AUTO 0
#define BIOS_ATA_TRANSLATION_NONE 1
#define BIOS_ATA_TRANSLATION_LBA 2
#define BIOS_ATA_TRANSLATION_LARGE 3
#define BIOS_ATA_TRANSLATION_RECHS 4
void bdrv_set_geometry_hint(BlockDriverState *bs,
int cyls, int heads, int secs);
void bdrv_set_translation_hint(BlockDriverState *bs, int translation);
void bdrv_get_geometry_hint(BlockDriverState *bs,
int *pcyls, int *pheads, int *psecs);
typedef enum FDriveType {
FDRIVE_DRV_144 = 0x00, /* 1.44 MB 3"5 drive */
FDRIVE_DRV_288 = 0x01, /* 2.88 MB 3"5 drive */
FDRIVE_DRV_120 = 0x02, /* 1.2 MB 5"25 drive */
FDRIVE_DRV_NONE = 0x03, /* No drive connected */
} FDriveType;
void bdrv_get_floppy_geometry_hint(BlockDriverState *bs, int *nb_heads,
int *max_track, int *last_sect,
FDriveType drive_in, FDriveType *drive);
int bdrv_get_translation_hint(BlockDriverState *bs);
void bdrv_set_on_error(BlockDriverState *bs, BlockErrorAction on_read_error,
BlockErrorAction on_write_error);
BlockErrorAction bdrv_get_on_error(BlockDriverState *bs, int is_read);
void bdrv_set_removable(BlockDriverState *bs, int removable);
int bdrv_is_removable(BlockDriverState *bs);
int bdrv_is_read_only(BlockDriverState *bs);
int bdrv_is_sg(BlockDriverState *bs);
int bdrv_enable_write_cache(BlockDriverState *bs);
void bdrv_set_enable_write_cache(BlockDriverState *bs, bool wce);
int bdrv_is_inserted(BlockDriverState *bs);
int bdrv_media_changed(BlockDriverState *bs);
void bdrv_lock_medium(BlockDriverState *bs, bool locked);
void bdrv_eject(BlockDriverState *bs, bool eject_flag);
const char *bdrv_get_format_name(BlockDriverState *bs);
int bdrv_is_locked(BlockDriverState *bs);
void bdrv_set_locked(BlockDriverState *bs, int locked);
int bdrv_eject(BlockDriverState *bs, int eject_flag);
void bdrv_set_change_cb(BlockDriverState *bs,
void (*change_cb)(void *opaque, int reason),
void *opaque);
void bdrv_get_format(BlockDriverState *bs, char *buf, int buf_size);
BlockDriverState *bdrv_find(const char *name);
BlockDriverState *bdrv_next(BlockDriverState *bs);
void bdrv_iterate(void (*it)(void *opaque, BlockDriverState *bs),
@@ -283,7 +203,6 @@ int bdrv_query_missing_keys(void);
void bdrv_iterate_format(void (*it)(void *opaque, const char *name),
void *opaque);
const char *bdrv_get_device_name(BlockDriverState *bs);
int bdrv_get_flags(BlockDriverState *bs);
int bdrv_write_compressed(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
@@ -291,8 +210,6 @@ int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
const char *bdrv_get_encrypted_filename(BlockDriverState *bs);
void bdrv_get_backing_filename(BlockDriverState *bs,
char *filename, int filename_size);
void bdrv_get_full_backing_filename(BlockDriverState *bs,
char *dest, size_t sz);
int bdrv_can_snapshot(BlockDriverState *bs);
int bdrv_is_snapshot(BlockDriverState *bs);
BlockDriverState *bdrv_snapshots(void);
@@ -323,9 +240,6 @@ int bdrv_img_create(const char *filename, const char *fmt,
const char *base_filename, const char *base_fmt,
char *options, uint64_t img_size, int flags);
void bdrv_set_buffer_alignment(BlockDriverState *bs, int align);
void *qemu_blockalign(BlockDriverState *bs, size_t size);
#define BDRV_SECTORS_PER_DIRTY_CHUNK 2048
void bdrv_set_dirty_tracking(BlockDriverState *bs, int enable);
@@ -334,29 +248,9 @@ void bdrv_reset_dirty(BlockDriverState *bs, int64_t cur_sector,
int nr_sectors);
int64_t bdrv_get_dirty_count(BlockDriverState *bs);
void bdrv_enable_copy_on_read(BlockDriverState *bs);
void bdrv_disable_copy_on_read(BlockDriverState *bs);
void bdrv_set_in_use(BlockDriverState *bs, int in_use);
int bdrv_in_use(BlockDriverState *bs);
enum BlockAcctType {
BDRV_ACCT_READ,
BDRV_ACCT_WRITE,
BDRV_ACCT_FLUSH,
BDRV_MAX_IOTYPE,
};
typedef struct BlockAcctCookie {
int64_t bytes;
int64_t start_time_ns;
enum BlockAcctType type;
} BlockAcctCookie;
void bdrv_acct_start(BlockDriverState *bs, BlockAcctCookie *cookie,
int64_t bytes, enum BlockAcctType type);
void bdrv_acct_done(BlockDriverState *bs, BlockAcctCookie *cookie);
typedef enum {
BLKDBG_L1_UPDATE,
@@ -370,7 +264,9 @@ typedef enum {
BLKDBG_L2_ALLOC_COW_READ,
BLKDBG_L2_ALLOC_WRITE,
BLKDBG_READ,
BLKDBG_READ_AIO,
BLKDBG_READ_BACKING,
BLKDBG_READ_BACKING_AIO,
BLKDBG_READ_COMPRESSED,

View File

@@ -1,11 +0,0 @@
block-obj-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
block-obj-y += stream.o
block-obj-$(CONFIG_WIN32) += raw-win32.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o

View File

@@ -26,10 +26,24 @@
#include "block_int.h"
#include "module.h"
typedef struct BDRVBlkdebugState {
typedef struct BlkdebugVars {
int state;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
/* If inject_errno != 0, an error is injected for requests */
int inject_errno;
/* Decides if all future requests fail (false) or only the next one and
* after the next request inject_errno is reset to 0 (true) */
bool inject_once;
/* Decides if aio_readv/writev fails right away (true) or returns an error
* return value only in the callback (false) */
bool inject_immediately;
} BlkdebugVars;
typedef struct BDRVBlkdebugState {
BlkdebugVars vars;
QLIST_HEAD(list, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
} BDRVBlkdebugState;
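The BlkdebugVars fields above are filled in from the inject-error options of a blkdebug rule file. Assuming QEMU's usual config-file syntax and using one of the event names listed further down in this file, a minimal setup might look like this (file names are placeholders):
# blkdebug.conf -- fail the next read request once with EIO
[inject-error]
event = "read_aio"
errno = "5"
once = "on"
The rule file is then put in front of an image on the command line, e.g. -drive file=blkdebug:blkdebug.conf:test.qcow2.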
typedef struct BlkdebugAIOCB {
@@ -59,14 +73,12 @@ typedef struct BlkdebugRule {
int error;
int immediately;
int once;
int64_t sector;
} inject;
struct {
int new_state;
} set_state;
} options;
QLIST_ENTRY(BlkdebugRule) next;
QSIMPLEQ_ENTRY(BlkdebugRule) active_next;
} BlkdebugRule;
static QemuOptsList inject_error_opts = {
@@ -85,10 +97,6 @@ static QemuOptsList inject_error_opts = {
.name = "errno",
.type = QEMU_OPT_NUMBER,
},
{
.name = "sector",
.type = QEMU_OPT_NUMBER,
},
{
.name = "once",
.type = QEMU_OPT_BOOL,
@@ -139,7 +147,9 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_L2_ALLOC_COW_READ] = "l2_alloc.cow_read",
[BLKDBG_L2_ALLOC_WRITE] = "l2_alloc.write",
[BLKDBG_READ] = "read",
[BLKDBG_READ_AIO] = "read_aio",
[BLKDBG_READ_BACKING] = "read_backing",
[BLKDBG_READ_BACKING_AIO] = "read_backing_aio",
[BLKDBG_READ_COMPRESSED] = "read_compressed",
@@ -204,7 +214,7 @@ static int add_rule(QemuOpts *opts, void *opaque)
}
/* Set attributes common for all actions */
rule = g_malloc0(sizeof(*rule));
rule = qemu_mallocz(sizeof(*rule));
*rule = (struct BlkdebugRule) {
.event = event,
.action = d->action,
@@ -218,7 +228,6 @@ static int add_rule(QemuOpts *opts, void *opaque)
rule->options.inject.once = qemu_opt_get_bool(opts, "once", 0);
rule->options.inject.immediately =
qemu_opt_get_bool(opts, "immediately", 0);
rule->options.inject.sector = qemu_opt_get_number(opts, "sector", -1);
break;
case ACTION_SET_STATE:
@@ -283,17 +292,17 @@ static int blkdebug_open(BlockDriverState *bs, const char *filename, int flags)
return -EINVAL;
}
config = g_strdup(filename);
config = strdup(filename);
config[c - filename] = '\0';
ret = read_config(s, config);
g_free(config);
free(config);
if (ret < 0) {
return ret;
}
filename = c + 1;
/* Set initial state */
s->state = 1;
s->vars.state = 1;
/* Open the backing file */
ret = bdrv_file_open(&bs->file, filename, flags);
@@ -319,18 +328,18 @@ static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb)
}
static BlockDriverAIOCB *inject_error(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque, BlkdebugRule *rule)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
int error = rule->options.inject.error;
int error = s->vars.inject_errno;
struct BlkdebugAIOCB *acb;
QEMUBH *bh;
if (rule->options.inject.once) {
QSIMPLEQ_INIT(&s->active_rules);
if (s->vars.inject_once) {
s->vars.inject_errno = 0;
}
if (rule->options.inject.immediately) {
if (s->vars.inject_immediately) {
return NULL;
}
@@ -349,21 +358,14 @@ static BlockDriverAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
if (s->vars.inject_errno) {
return inject_error(bs, cb, opaque);
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
BlockDriverAIOCB *acb =
bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
return acb;
}
static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
@@ -371,21 +373,14 @@ static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
if (s->vars.inject_errno) {
return inject_error(bs, cb, opaque);
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
BlockDriverAIOCB *acb =
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
return acb;
}
static void blkdebug_close(BlockDriverState *bs)
@@ -397,58 +392,60 @@ static void blkdebug_close(BlockDriverState *bs)
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
QLIST_REMOVE(rule, next);
g_free(rule);
qemu_free(rule);
}
}
}
static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
int old_state, bool injected)
static int blkdebug_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static BlockDriverAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static void process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
BlkdebugVars *old_vars)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugVars *vars = &s->vars;
/* Only process rules for the current state */
if (rule->state && rule->state != old_state) {
return injected;
if (rule->state && rule->state != old_vars->state) {
return;
}
/* Take the action */
switch (rule->action) {
case ACTION_INJECT_ERROR:
if (!injected) {
QSIMPLEQ_INIT(&s->active_rules);
injected = true;
}
QSIMPLEQ_INSERT_HEAD(&s->active_rules, rule, active_next);
vars->inject_errno = rule->options.inject.error;
vars->inject_once = rule->options.inject.once;
vars->inject_immediately = rule->options.inject.immediately;
break;
case ACTION_SET_STATE:
s->state = rule->options.set_state.new_state;
vars->state = rule->options.set_state.new_state;
break;
}
return injected;
}
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
int old_state = s->state;
bool injected;
BlkdebugVars old_vars = s->vars;
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
QLIST_FOREACH(rule, &s->rules[event], next) {
injected = process_rule(bs, rule, old_state, injected);
process_rule(bs, rule, &old_vars);
}
}
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
}
static BlockDriver bdrv_blkdebug = {
.format_name = "blkdebug",
.protocol_name = "blkdebug",
@@ -457,10 +454,11 @@ static BlockDriver bdrv_blkdebug = {
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_getlength = blkdebug_getlength,
.bdrv_flush = blkdebug_flush,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_aio_flush = blkdebug_aio_flush,
.bdrv_debug_event = blkdebug_debug_event,
};
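For reference, the filename parsing and inject_error_opts shown in this blkdebug hunk match the usual invocation, where the image is opened as blkdebug:<config-file>:<image> and the config file holds [inject-error] sections built from those option names. The example below is hypothetical (file names and the errno value are made up), not part of the diff:

    # errors.conf: fail one read_aio request with EIO (errno 5)
    [inject-error]
    event = "read_aio"
    errno = "5"
    once = "on"

    qemu-system-x86_64 -drive file=blkdebug:errors.conf:test.qcow2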


@@ -87,10 +87,10 @@ static int blkverify_open(BlockDriverState *bs, const char *filename, int flags)
return -EINVAL;
}
raw = g_strdup(filename);
raw = strdup(filename);
raw[c - filename] = '\0';
ret = bdrv_file_open(&bs->file, raw, flags);
g_free(raw);
free(raw);
if (ret < 0) {
return ret;
}
@@ -116,6 +116,14 @@ static void blkverify_close(BlockDriverState *bs)
s->test_file = NULL;
}
static int blkverify_flush(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
/* Only flush test file, the raw file is not important */
return bdrv_flush(s->test_file);
}
static int64_t blkverify_getlength(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
@@ -310,10 +318,14 @@ static BlockDriverAIOCB *blkverify_aio_readv(BlockDriverState *bs,
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
blkverify_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_readv(bs->file, sector_num, &acb->raw_qiov, nb_sectors,
blkverify_aio_cb, acb);
if (!bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
if (!bdrv_aio_readv(bs->file, sector_num, &acb->raw_qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
return &acb->common;
}
@@ -325,10 +337,14 @@ static BlockDriverAIOCB *blkverify_aio_writev(BlockDriverState *bs,
BlkverifyAIOCB *acb = blkverify_aio_get(bs, true, sector_num, qiov,
nb_sectors, cb, opaque);
bdrv_aio_writev(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
if (!bdrv_aio_writev(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
if (!bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb)) {
blkverify_aio_cb(acb, -EIO);
}
return &acb->common;
}
@@ -352,6 +368,7 @@ static BlockDriver bdrv_blkverify = {
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_flush = blkverify_flush,
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
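As with blkdebug, the strdup/truncation in blkverify_open above implies a colon-separated filename of the form blkverify:<raw-file>:<test-file>; a hypothetical invocation (paths made up) would be:

    qemu-system-x86_64 -drive file=blkverify:raw.img:test.qcow2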


@@ -80,7 +80,6 @@ struct bochs_header {
};
typedef struct BDRVBochsState {
CoMutex lock;
uint32_t *catalog_bitmap;
int catalog_size;
@@ -137,7 +136,7 @@ static int bochs_open(BlockDriverState *bs, int flags)
}
s->catalog_size = le32_to_cpu(bochs.extra.redolog.catalog);
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
s->catalog_bitmap = qemu_malloc(s->catalog_size * 4);
if (bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4) != s->catalog_size * 4)
goto fail;
@@ -151,7 +150,6 @@ static int bochs_open(BlockDriverState *bs, int flags)
s->extent_size = le32_to_cpu(bochs.extra.redolog.extent);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return -1;
@@ -209,21 +207,10 @@ static int bochs_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int bochs_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVBochsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = bochs_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void bochs_close(BlockDriverState *bs)
{
BDRVBochsState *s = bs->opaque;
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
}
static BlockDriver bdrv_bochs = {
@@ -231,7 +218,7 @@ static BlockDriver bdrv_bochs = {
.instance_size = sizeof(BDRVBochsState),
.bdrv_probe = bochs_probe,
.bdrv_open = bochs_open,
.bdrv_read = bochs_co_read,
.bdrv_read = bochs_read,
.bdrv_close = bochs_close,
};


@@ -27,10 +27,9 @@
#include <zlib.h>
typedef struct BDRVCloopState {
CoMutex lock;
uint32_t block_size;
uint32_t n_blocks;
uint64_t *offsets;
uint64_t* offsets;
uint32_t sectors_per_block;
uint32_t current_block;
uint8_t *compressed_block;
@@ -40,23 +39,21 @@ typedef struct BDRVCloopState {
static int cloop_probe(const uint8_t *buf, int buf_size, const char *filename)
{
const char *magic_version_2_0 = "#!/bin/sh\n"
"#V2.0 Format\n"
"modprobe cloop file=$0 && mount -r -t iso9660 /dev/cloop $1\n";
int length = strlen(magic_version_2_0);
if (length > buf_size) {
length = buf_size;
}
if (!memcmp(magic_version_2_0, buf, length)) {
return 2;
}
const char* magic_version_2_0="#!/bin/sh\n"
"#V2.0 Format\n"
"modprobe cloop file=$0 && mount -r -t iso9660 /dev/cloop $1\n";
int length=strlen(magic_version_2_0);
if(length>buf_size)
length=buf_size;
if(!memcmp(magic_version_2_0,buf,length))
return 2;
return 0;
}
static int cloop_open(BlockDriverState *bs, int flags)
{
BDRVCloopState *s = bs->opaque;
uint32_t offsets_size, max_compressed_block_size = 1, i;
uint32_t offsets_size,max_compressed_block_size=1,i;
bs->read_only = 1;
@@ -73,32 +70,29 @@ static int cloop_open(BlockDriverState *bs, int flags)
/* read offsets */
offsets_size = s->n_blocks * sizeof(uint64_t);
s->offsets = g_malloc(offsets_size);
s->offsets = qemu_malloc(offsets_size);
if (bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size) <
offsets_size) {
goto cloop_close;
goto cloop_close;
}
for(i=0;i<s->n_blocks;i++) {
s->offsets[i] = be64_to_cpu(s->offsets[i]);
if (i > 0) {
uint32_t size = s->offsets[i] - s->offsets[i - 1];
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
}
}
s->offsets[i]=be64_to_cpu(s->offsets[i]);
if(i>0) {
uint32_t size=s->offsets[i]-s->offsets[i-1];
if(size>max_compressed_block_size)
max_compressed_block_size=size;
}
}
/* initialize zlib engine */
s->compressed_block = g_malloc(max_compressed_block_size + 1);
s->uncompressed_block = g_malloc(s->block_size);
if (inflateInit(&s->zstream) != Z_OK) {
goto cloop_close;
}
s->current_block = s->n_blocks;
s->compressed_block = qemu_malloc(max_compressed_block_size+1);
s->uncompressed_block = qemu_malloc(s->block_size);
if(inflateInit(&s->zstream) != Z_OK)
goto cloop_close;
s->current_block=s->n_blocks;
s->sectors_per_block = s->block_size/512;
bs->total_sectors = s->n_blocks * s->sectors_per_block;
qemu_co_mutex_init(&s->lock);
bs->total_sectors = s->n_blocks*s->sectors_per_block;
return 0;
cloop_close:
@@ -109,30 +103,27 @@ static inline int cloop_read_block(BlockDriverState *bs, int block_num)
{
BDRVCloopState *s = bs->opaque;
if (s->current_block != block_num) {
int ret;
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
if(s->current_block != block_num) {
int ret;
uint32_t bytes = s->offsets[block_num+1]-s->offsets[block_num];
ret = bdrv_pread(bs->file, s->offsets[block_num], s->compressed_block,
bytes);
if (ret != bytes) {
if (ret != bytes)
return -1;
}
s->zstream.next_in = s->compressed_block;
s->zstream.avail_in = bytes;
s->zstream.next_out = s->uncompressed_block;
s->zstream.avail_out = s->block_size;
ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
return -1;
}
ret = inflate(&s->zstream, Z_FINISH);
if (ret != Z_STREAM_END || s->zstream.total_out != s->block_size) {
return -1;
}
s->zstream.next_in = s->compressed_block;
s->zstream.avail_in = bytes;
s->zstream.next_out = s->uncompressed_block;
s->zstream.avail_out = s->block_size;
ret = inflateReset(&s->zstream);
if(ret != Z_OK)
return -1;
ret = inflate(&s->zstream, Z_FINISH);
if(ret != Z_STREAM_END || s->zstream.total_out != s->block_size)
return -1;
s->current_block = block_num;
s->current_block = block_num;
}
return 0;
}
@@ -143,48 +134,33 @@ static int cloop_read(BlockDriverState *bs, int64_t sector_num,
BDRVCloopState *s = bs->opaque;
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_block =
((sector_num + i) % s->sectors_per_block),
block_num = (sector_num + i) / s->sectors_per_block;
if (cloop_read_block(bs, block_num) != 0) {
return -1;
}
memcpy(buf + i * 512,
s->uncompressed_block + sector_offset_in_block * 512, 512);
for(i=0;i<nb_sectors;i++) {
uint32_t sector_offset_in_block=((sector_num+i)%s->sectors_per_block),
block_num=(sector_num+i)/s->sectors_per_block;
if(cloop_read_block(bs, block_num) != 0)
return -1;
memcpy(buf+i*512,s->uncompressed_block+sector_offset_in_block*512,512);
}
return 0;
}
static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCloopState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cloop_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void cloop_close(BlockDriverState *bs)
{
BDRVCloopState *s = bs->opaque;
if (s->n_blocks > 0) {
g_free(s->offsets);
}
g_free(s->compressed_block);
g_free(s->uncompressed_block);
if(s->n_blocks>0)
free(s->offsets);
free(s->compressed_block);
free(s->uncompressed_block);
inflateEnd(&s->zstream);
}
static BlockDriver bdrv_cloop = {
.format_name = "cloop",
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_read = cloop_co_read,
.bdrv_close = cloop_close,
.format_name = "cloop",
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_read = cloop_read,
.bdrv_close = cloop_close,
};
static void bdrv_cloop_init(void)


@@ -42,7 +42,6 @@ struct cow_header_v2 {
};
typedef struct BDRVCowState {
CoMutex lock;
int64_t cow_sectors_offset;
} BDRVCowState;
@@ -64,26 +63,15 @@ static int cow_open(BlockDriverState *bs, int flags)
struct cow_header_v2 cow_header;
int bitmap_size;
int64_t size;
int ret;
/* see if it is a cow image */
ret = bdrv_pread(bs->file, 0, &cow_header, sizeof(cow_header));
if (ret < 0) {
if (bdrv_pread(bs->file, 0, &cow_header, sizeof(cow_header)) !=
sizeof(cow_header)) {
goto fail;
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC) {
ret = -EINVAL;
goto fail;
}
if (be32_to_cpu(cow_header.version) != COW_VERSION) {
char version[64];
snprintf(version, sizeof(version),
"COW version %d", cow_header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "cow", version);
ret = -ENOTSUP;
if (be32_to_cpu(cow_header.magic) != COW_MAGIC ||
be32_to_cpu(cow_header.version) != COW_VERSION) {
goto fail;
}
@@ -96,14 +84,13 @@ static int cow_open(BlockDriverState *bs, int flags)
bitmap_size = ((bs->total_sectors + 7) >> 3) + sizeof(cow_header);
s->cow_sectors_offset = (bitmap_size + 511) & ~511;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return ret;
return -1;
}
/*
* XXX(hch): right now these functions are extremely inefficient.
* XXX(hch): right now these functions are extremly ineffcient.
* We should just read the whole bitmap we'll need in one go instead.
*/
static inline int cow_set_bit(BlockDriverState *bs, int64_t bitnum)
@@ -143,8 +130,8 @@ static inline int is_bit_set(BlockDriverState *bs, int64_t bitnum)
/* Return true if first block has been changed (ie. current version is
* in COW file). Set the number of continuous blocks for which that
* is true. */
static int coroutine_fn cow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
static int cow_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *num_same)
{
int changed;
@@ -182,30 +169,28 @@ static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
return error;
}
static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
static int cow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVCowState *s = bs->opaque;
int ret, n;
while (nb_sectors > 0) {
if (bdrv_co_is_allocated(bs, sector_num, nb_sectors, &n)) {
if (cow_is_allocated(bs, sector_num, nb_sectors, &n)) {
ret = bdrv_pread(bs->file,
s->cow_sectors_offset + sector_num * 512,
buf, n * 512);
if (ret < 0) {
return ret;
}
if (ret != n * 512)
return -1;
} else {
if (bs->backing_hd) {
/* read from the base image */
ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
if (ret < 0) {
return ret;
}
if (ret < 0)
return -1;
} else {
memset(buf, 0, n * 512);
}
memset(buf, 0, n * 512);
}
}
nb_sectors -= n;
sector_num += n;
@@ -214,17 +199,6 @@ static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int cow_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCowState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cow_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int cow_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
@@ -233,36 +207,24 @@ static int cow_write(BlockDriverState *bs, int64_t sector_num,
ret = bdrv_pwrite(bs->file, s->cow_sectors_offset + sector_num * 512,
buf, nb_sectors * 512);
if (ret < 0) {
return ret;
}
if (ret != nb_sectors * 512)
return -1;
return cow_update_bitmap(bs, sector_num, nb_sectors);
}
static coroutine_fn int cow_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCowState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cow_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void cow_close(BlockDriverState *bs)
{
}
static int cow_create(const char *filename, QEMUOptionParameter *options)
{
int fd, cow_fd;
struct cow_header_v2 cow_header;
struct stat st;
int64_t image_sectors = 0;
const char *image_filename = NULL;
int ret;
BlockDriverState *cow_bs;
/* Read out options */
while (options && options->name) {
@@ -274,16 +236,10 @@ static int cow_create(const char *filename, QEMUOptionParameter *options)
options++;
}
ret = bdrv_create_file(filename, options);
if (ret < 0) {
return ret;
}
ret = bdrv_file_open(&cow_bs, filename, BDRV_O_RDWR);
if (ret < 0) {
return ret;
}
cow_fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (cow_fd < 0)
return -errno;
memset(&cow_header, 0, sizeof(cow_header));
cow_header.magic = cpu_to_be32(COW_MAGIC);
cow_header.version = cpu_to_be32(COW_VERSION);
@@ -291,9 +247,16 @@ static int cow_create(const char *filename, QEMUOptionParameter *options)
/* Note: if no file, we put a dummy mtime */
cow_header.mtime = cpu_to_be32(0);
if (stat(image_filename, &st) != 0) {
fd = open(image_filename, O_RDONLY | O_BINARY);
if (fd < 0) {
close(cow_fd);
goto mtime_fail;
}
if (fstat(fd, &st) != 0) {
close(fd);
goto mtime_fail;
}
close(fd);
cow_header.mtime = cpu_to_be32(st.st_mtime);
mtime_fail:
pstrcpy(cow_header.backing_file, sizeof(cow_header.backing_file),
@@ -301,23 +264,29 @@ static int cow_create(const char *filename, QEMUOptionParameter *options)
}
cow_header.sectorsize = cpu_to_be32(512);
cow_header.size = cpu_to_be64(image_sectors * 512);
ret = bdrv_pwrite(cow_bs, 0, &cow_header, sizeof(cow_header));
if (ret < 0) {
ret = qemu_write_full(cow_fd, &cow_header, sizeof(cow_header));
if (ret != sizeof(cow_header)) {
ret = -errno;
goto exit;
}
/* resize to include at least all the bitmap */
ret = bdrv_truncate(cow_bs,
sizeof(cow_header) + ((image_sectors + 7) >> 3));
if (ret < 0) {
ret = ftruncate(cow_fd, sizeof(cow_header) + ((image_sectors + 7) >> 3));
if (ret) {
ret = -errno;
goto exit;
}
exit:
bdrv_delete(cow_bs);
close(cow_fd);
return ret;
}
static int cow_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static QEMUOptionParameter cow_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -333,17 +302,16 @@ static QEMUOptionParameter cow_create_options[] = {
};
static BlockDriver bdrv_cow = {
.format_name = "cow",
.instance_size = sizeof(BDRVCowState),
.bdrv_probe = cow_probe,
.bdrv_open = cow_open,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_read = cow_co_read,
.bdrv_write = cow_co_write,
.bdrv_co_is_allocated = cow_co_is_allocated,
.format_name = "cow",
.instance_size = sizeof(BDRVCowState),
.bdrv_probe = cow_probe,
.bdrv_open = cow_open,
.bdrv_read = cow_read,
.bdrv_write = cow_write,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_flush = cow_flush,
.bdrv_is_allocated = cow_is_allocated,
.create_options = cow_create_options,
};
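Since cow_create above only consumes a size and an optional backing file, a cow image is normally produced with qemu-img; the command below is a hypothetical example (file names and size are made up):

    qemu-img create -f cow -b base.raw overlay.cow 1G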


@@ -47,12 +47,7 @@ struct BDRVCURLState;
typedef struct CURLAIOCB {
BlockDriverAIOCB common;
QEMUBH *bh;
QEMUIOVector *qiov;
int64_t sector_num;
int nb_sectors;
size_t start;
size_t end;
} CURLAIOCB;
@@ -81,7 +76,6 @@ typedef struct BDRVCURLState {
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static int curl_aio_flush(void *opaque);
static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
void *s, void *sp)
@@ -89,17 +83,17 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, NULL, s);
break;
case CURL_POLL_OUT:
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, NULL, s);
break;
case CURL_POLL_INOUT:
qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
curl_aio_flush, s);
qemu_aio_set_fd_handler(fd, curl_multi_do,
curl_multi_do, NULL, NULL, s);
break;
case CURL_POLL_REMOVE:
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL, NULL);
break;
}
@@ -140,8 +134,8 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
continue;
if ((s->buf_off >= acb->end)) {
qemu_iovec_from_buf(acb->qiov, 0, s->orig_buf + acb->start,
acb->end - acb->start);
qemu_iovec_from_buffer(acb->qiov, s->orig_buf + acb->start,
acb->end - acb->start);
acb->common.cb(acb->common.opaque, 0);
qemu_aio_release(acb);
s->acb[i] = NULL;
@@ -176,7 +170,7 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
{
char *buf = state->orig_buf + (start - state->buf_start);
qemu_iovec_from_buf(acb->qiov, 0, buf, len);
qemu_iovec_from_buffer(acb->qiov, buf, len);
acb->common.cb(acb->common.opaque, 0);
return FIND_RET_OK;
@@ -235,23 +229,6 @@ static void curl_multi_do(void *arg)
{
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char**)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
break;
}
@@ -280,7 +257,7 @@ static CURLState *curl_init_state(BDRVCURLState *s)
break;
}
if (!state) {
g_usleep(100);
usleep(100);
curl_multi_do(s);
}
} while(!state);
@@ -300,8 +277,7 @@ static CURLState *curl_init_state(BDRVCURLState *s)
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
#ifdef DEBUG_VERBOSE
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
#endif
@@ -334,7 +310,7 @@ static int curl_open(BlockDriverState *bs, const char *filename, int flags)
static int inited = 0;
file = g_strdup(filename);
file = qemu_strdup(filename);
s->readahead_size = READ_AHEAD_SIZE;
/* Parse a trailing ":readahead=#:" param, if present. */
@@ -414,25 +390,10 @@ out:
curl_easy_cleanup(state->curl);
state->curl = NULL;
out_noclean:
g_free(file);
qemu_free(file);
return -EINVAL;
}
static int curl_aio_flush(void *opaque)
{
BDRVCURLState *s = opaque;
int i, j;
for (i=0; i < CURL_NUM_STATES; i++) {
for(j=0; j < CURL_NUM_ACB; j++) {
if (s->states[i].acb[j]) {
return 1;
}
}
}
return 0;
}
static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
{
// Do we have to implement canceling? Seems to work without...
@@ -443,82 +404,61 @@ static AIOPool curl_aio_pool = {
.cancel = curl_aio_cancel,
};
static void curl_readv_bh_cb(void *p)
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVCURLState *s = bs->opaque;
CURLAIOCB *acb;
size_t start = sector_num * SECTOR_SIZE;
size_t end;
CURLState *state;
CURLAIOCB *acb = p;
BDRVCURLState *s = acb->common.bs->opaque;
acb = qemu_aio_get(&curl_aio_pool, bs, cb, opaque);
if (!acb)
return NULL;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
size_t start = acb->sector_num * SECTOR_SIZE;
size_t end;
acb->qiov = qiov;
// In case we have the requested data already (e.g. read-ahead),
// we can just call the callback and be done.
switch (curl_find_buf(s, start, acb->nb_sectors * SECTOR_SIZE, acb)) {
switch (curl_find_buf(s, start, nb_sectors * SECTOR_SIZE, acb)) {
case FIND_RET_OK:
qemu_aio_release(acb);
// fall through
case FIND_RET_WAIT:
return;
return &acb->common;
default:
break;
}
// No cache found, so let's start a new request
state = curl_init_state(s);
if (!state) {
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
return;
}
if (!state)
return NULL;
acb->start = 0;
acb->end = (acb->nb_sectors * SECTOR_SIZE);
acb->end = (nb_sectors * SECTOR_SIZE);
state->buf_off = 0;
if (state->orig_buf)
g_free(state->orig_buf);
qemu_free(state->orig_buf);
state->buf_start = start;
state->buf_len = acb->end + s->readahead_size;
end = MIN(start + state->buf_len, s->len) - 1;
state->orig_buf = g_malloc(state->buf_len);
state->orig_buf = qemu_malloc(state->buf_len);
state->acb[0] = acb;
snprintf(state->range, 127, "%zd-%zd", start, end);
DPRINTF("CURL (AIO): Reading %d at %zd (%s)\n",
(acb->nb_sectors * SECTOR_SIZE), start, state->range);
(nb_sectors * SECTOR_SIZE), start, state->range);
curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
curl_multi_add_handle(s->multi, state->curl);
curl_multi_do(s);
}
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
CURLAIOCB *acb;
acb = qemu_aio_get(&curl_aio_pool, bs, cb, opaque);
acb->qiov = qiov;
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->bh = qemu_bh_new(curl_readv_bh_cb, acb);
if (!acb->bh) {
DPRINTF("CURL: qemu_bh_new failed\n");
return NULL;
}
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -536,13 +476,14 @@ static void curl_close(BlockDriverState *bs)
s->states[i].curl = NULL;
}
if (s->states[i].orig_buf) {
g_free(s->states[i].orig_buf);
qemu_free(s->states[i].orig_buf);
s->states[i].orig_buf = NULL;
}
}
if (s->multi)
curl_multi_cleanup(s->multi);
g_free(s->url);
if (s->url)
free(s->url);
}
static int64_t curl_getlength(BlockDriverState *bs)


@@ -28,7 +28,6 @@
#include <zlib.h>
typedef struct BDRVDMGState {
CoMutex lock;
/* each chunk contains a certain number of sectors,
* offsets[i] is the offset in the .dmg file,
* lengths[i] is the length of the compressed chunk,
@@ -128,11 +127,11 @@ static int dmg_open(BlockDriverState *bs, int flags)
chunk_count = (count-204)/40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size/2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
s->types = qemu_realloc(s->types, new_size/2);
s->offsets = qemu_realloc(s->offsets, new_size);
s->lengths = qemu_realloc(s->lengths, new_size);
s->sectors = qemu_realloc(s->sectors, new_size);
s->sectorcounts = qemu_realloc(s->sectorcounts, new_size);
for(i=s->n_chunks;i<s->n_chunks+chunk_count;i++) {
s->types[i] = read_uint32(bs, offset);
@@ -171,14 +170,13 @@ static int dmg_open(BlockDriverState *bs, int flags)
}
/* initialize zlib engine */
s->compressed_chunk = g_malloc(max_compressed_size+1);
s->uncompressed_chunk = g_malloc(512*max_sectors_per_chunk);
s->compressed_chunk = qemu_malloc(max_compressed_size+1);
s->uncompressed_chunk = qemu_malloc(512*max_sectors_per_chunk);
if(inflateInit(&s->zstream) != Z_OK)
goto fail;
s->current_chunk = s->n_chunks;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
return -1;
@@ -282,17 +280,6 @@ static int dmg_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int dmg_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVDMGState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = dmg_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void dmg_close(BlockDriverState *bs)
{
BDRVDMGState *s = bs->opaque;
@@ -313,7 +300,7 @@ static BlockDriver bdrv_dmg = {
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_read = dmg_read,
.bdrv_close = dmg_close,
};


@@ -1,991 +0,0 @@
/*
* QEMU Block driver for iSCSI images
*
* Copyright (c) 2010-2011 Ronnie Sahlberg <ronniesahlberg@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "config-host.h"
#include <poll.h>
#include <arpa/inet.h>
#include "qemu-common.h"
#include "qemu-error.h"
#include "block_int.h"
#include "trace.h"
#include "hw/scsi-defs.h"
#include <iscsi/iscsi.h>
#include <iscsi/scsi-lowlevel.h>
#ifdef __linux__
#include <scsi/sg.h>
#include <hw/scsi-defs.h>
#endif
typedef struct IscsiLun {
struct iscsi_context *iscsi;
int lun;
enum scsi_inquiry_peripheral_device_type type;
int block_size;
uint64_t num_blocks;
int events;
} IscsiLun;
typedef struct IscsiAIOCB {
BlockDriverAIOCB common;
QEMUIOVector *qiov;
QEMUBH *bh;
IscsiLun *iscsilun;
struct scsi_task *task;
uint8_t *buf;
int status;
int canceled;
size_t read_size;
size_t read_offset;
#ifdef __linux__
sg_io_hdr_t *ioh;
#endif
} IscsiAIOCB;
static void
iscsi_bh_cb(void *p)
{
IscsiAIOCB *acb = p;
qemu_bh_delete(acb->bh);
if (acb->canceled == 0) {
acb->common.cb(acb->common.opaque, acb->status);
}
if (acb->task != NULL) {
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
qemu_aio_release(acb);
}
static void
iscsi_schedule_bh(IscsiAIOCB *acb)
{
if (acb->bh) {
return;
}
acb->bh = qemu_bh_new(iscsi_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
static void
iscsi_abort_task_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *private_data)
{
IscsiAIOCB *acb = private_data;
acb->status = -ECANCELED;
iscsi_schedule_bh(acb);
}
static void
iscsi_aio_cancel(BlockDriverAIOCB *blockacb)
{
IscsiAIOCB *acb = (IscsiAIOCB *)blockacb;
IscsiLun *iscsilun = acb->iscsilun;
if (acb->status != -EINPROGRESS) {
return;
}
acb->canceled = 1;
/* send a task mgmt call to the target to cancel the task on the target */
iscsi_task_mgmt_abort_task_async(iscsilun->iscsi, acb->task,
iscsi_abort_task_cb, acb);
while (acb->status == -EINPROGRESS) {
qemu_aio_wait();
}
}
static AIOPool iscsi_aio_pool = {
.aiocb_size = sizeof(IscsiAIOCB),
.cancel = iscsi_aio_cancel,
};
static void iscsi_process_read(void *arg);
static void iscsi_process_write(void *arg);
static int iscsi_process_flush(void *arg)
{
IscsiLun *iscsilun = arg;
return iscsi_queue_length(iscsilun->iscsi) > 0;
}
static void
iscsi_set_events(IscsiLun *iscsilun)
{
struct iscsi_context *iscsi = iscsilun->iscsi;
int ev;
/* We always register a read handler. */
ev = POLLIN;
ev |= iscsi_which_events(iscsi);
if (ev != iscsilun->events) {
qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
iscsi_process_read,
(ev & POLLOUT) ? iscsi_process_write : NULL,
iscsi_process_flush,
iscsilun);
}
iscsilun->events = ev;
}
static void
iscsi_process_read(void *arg)
{
IscsiLun *iscsilun = arg;
struct iscsi_context *iscsi = iscsilun->iscsi;
iscsi_service(iscsi, POLLIN);
iscsi_set_events(iscsilun);
}
static void
iscsi_process_write(void *arg)
{
IscsiLun *iscsilun = arg;
struct iscsi_context *iscsi = iscsilun->iscsi;
iscsi_service(iscsi, POLLOUT);
iscsi_set_events(iscsilun);
}
static void
iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
trace_iscsi_aio_write16_cb(iscsi, status, acb, acb->canceled);
g_free(acb->buf);
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to write16 data to iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static int64_t sector_qemu2lun(int64_t sector, IscsiLun *iscsilun)
{
return sector * BDRV_SECTOR_SIZE / iscsilun->block_size;
}
static BlockDriverAIOCB *
iscsi_aio_writev(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
size_t size;
uint32_t num_sectors;
uint64_t lba;
struct iscsi_data data;
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
trace_iscsi_aio_writev(iscsi, sector_num, nb_sectors, opaque, acb);
acb->iscsilun = iscsilun;
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
/* XXX we should pass the iovec to write16 to avoid the extra copy */
/* this will allow us to get rid of 'buf' completely */
size = nb_sectors * BDRV_SECTOR_SIZE;
acb->buf = g_malloc(size);
qemu_iovec_to_buf(acb->qiov, 0, acb->buf, size);
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi WRITE16 "
"command. %s", iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
memset(acb->task, 0, sizeof(struct scsi_task));
acb->task->xfer_dir = SCSI_XFER_WRITE;
acb->task->cdb_size = 16;
acb->task->cdb[0] = 0x8a;
if (!(bs->open_flags & BDRV_O_CACHE_WB)) {
/* set FUA on writes when cache mode is write through */
acb->task->cdb[1] |= 0x04;
}
lba = sector_qemu2lun(sector_num, iscsilun);
*(uint32_t *)&acb->task->cdb[2] = htonl(lba >> 32);
*(uint32_t *)&acb->task->cdb[6] = htonl(lba & 0xffffffff);
num_sectors = size / iscsilun->block_size;
*(uint32_t *)&acb->task->cdb[10] = htonl(num_sectors);
acb->task->expxferlen = size;
data.data = acb->buf;
data.size = size;
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_write16_cb,
&data,
acb) != 0) {
scsi_free_scsi_task(acb->task);
g_free(acb->buf);
qemu_aio_release(acb);
return NULL;
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void
iscsi_aio_read16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
trace_iscsi_aio_read16_cb(iscsi, status, acb, acb->canceled);
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status != 0) {
error_report("Failed to read16 data from iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
iscsi_aio_readv(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
size_t qemu_read_size;
int i;
uint64_t lba;
uint32_t num_sectors;
qemu_read_size = BDRV_SECTOR_SIZE * (size_t)nb_sectors;
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
trace_iscsi_aio_readv(iscsi, sector_num, nb_sectors, opaque, acb);
acb->iscsilun = iscsilun;
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->read_size = qemu_read_size;
acb->buf = NULL;
/* If LUN blocksize is bigger than BDRV_BLOCK_SIZE a read from QEMU
* may be misaligned to the LUN, so we may need to read some extra
* data.
*/
acb->read_offset = 0;
if (iscsilun->block_size > BDRV_SECTOR_SIZE) {
uint64_t bdrv_offset = BDRV_SECTOR_SIZE * sector_num;
acb->read_offset = bdrv_offset % iscsilun->block_size;
}
num_sectors = (qemu_read_size + iscsilun->block_size
+ acb->read_offset - 1)
/ iscsilun->block_size;
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi READ16 "
"command. %s", iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
memset(acb->task, 0, sizeof(struct scsi_task));
acb->task->xfer_dir = SCSI_XFER_READ;
lba = sector_qemu2lun(sector_num, iscsilun);
acb->task->expxferlen = qemu_read_size;
switch (iscsilun->type) {
case TYPE_DISK:
acb->task->cdb_size = 16;
acb->task->cdb[0] = 0x88;
*(uint32_t *)&acb->task->cdb[2] = htonl(lba >> 32);
*(uint32_t *)&acb->task->cdb[6] = htonl(lba & 0xffffffff);
*(uint32_t *)&acb->task->cdb[10] = htonl(num_sectors);
break;
default:
acb->task->cdb_size = 10;
acb->task->cdb[0] = 0x28;
*(uint32_t *)&acb->task->cdb[2] = htonl(lba);
*(uint16_t *)&acb->task->cdb[7] = htons(num_sectors);
break;
}
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_read16_cb,
NULL,
acb) != 0) {
scsi_free_scsi_task(acb->task);
qemu_aio_release(acb);
return NULL;
}
for (i = 0; i < acb->qiov->niov; i++) {
scsi_task_add_data_in_buffer(acb->task,
acb->qiov->iov[i].iov_len,
acb->qiov->iov[i].iov_base);
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void
iscsi_synccache10_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to sync10 data on iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
iscsi_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->task = iscsi_synchronizecache10_task(iscsi, iscsilun->lun,
0, 0, 0, 0,
iscsi_synccache10_cb,
acb);
if (acb->task == NULL) {
error_report("iSCSI: Failed to send synchronizecache10 command. %s",
iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void
iscsi_unmap_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to unmap data on iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
iscsi_aio_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
IscsiAIOCB *acb;
struct unmap_list list[1];
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
list[0].lba = sector_qemu2lun(sector_num, iscsilun);
list[0].num = nb_sectors * BDRV_SECTOR_SIZE / iscsilun->block_size;
acb->task = iscsi_unmap_task(iscsi, iscsilun->lun,
0, 0, &list[0], 1,
iscsi_unmap_cb,
acb);
if (acb->task == NULL) {
error_report("iSCSI: Failed to send unmap command. %s",
iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
iscsi_set_events(iscsilun);
return &acb->common;
}
#ifdef __linux__
static void
iscsi_aio_ioctl_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
{
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
return;
}
acb->status = 0;
if (status < 0) {
error_report("Failed to ioctl(SG_IO) to iSCSI lun. %s",
iscsi_get_error(iscsi));
acb->status = -EIO;
}
acb->ioh->driver_status = 0;
acb->ioh->host_status = 0;
acb->ioh->resid = 0;
#define SG_ERR_DRIVER_SENSE 0x08
if (status == SCSI_STATUS_CHECK_CONDITION && acb->task->datain.size >= 2) {
int ss;
acb->ioh->driver_status |= SG_ERR_DRIVER_SENSE;
acb->ioh->sb_len_wr = acb->task->datain.size - 2;
ss = (acb->ioh->mx_sb_len >= acb->ioh->sb_len_wr) ?
acb->ioh->mx_sb_len : acb->ioh->sb_len_wr;
memcpy(acb->ioh->sbp, &acb->task->datain.data[2], ss);
}
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb, void *opaque)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
struct iscsi_data data;
IscsiAIOCB *acb;
assert(req == SG_IO);
acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->buf = NULL;
acb->ioh = buf;
acb->task = malloc(sizeof(struct scsi_task));
if (acb->task == NULL) {
error_report("iSCSI: Failed to allocate task for scsi command. %s",
iscsi_get_error(iscsi));
qemu_aio_release(acb);
return NULL;
}
memset(acb->task, 0, sizeof(struct scsi_task));
switch (acb->ioh->dxfer_direction) {
case SG_DXFER_TO_DEV:
acb->task->xfer_dir = SCSI_XFER_WRITE;
break;
case SG_DXFER_FROM_DEV:
acb->task->xfer_dir = SCSI_XFER_READ;
break;
default:
acb->task->xfer_dir = SCSI_XFER_NONE;
break;
}
acb->task->cdb_size = acb->ioh->cmd_len;
memcpy(&acb->task->cdb[0], acb->ioh->cmdp, acb->ioh->cmd_len);
acb->task->expxferlen = acb->ioh->dxfer_len;
if (acb->task->xfer_dir == SCSI_XFER_WRITE) {
data.data = acb->ioh->dxferp;
data.size = acb->ioh->dxfer_len;
}
if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task,
iscsi_aio_ioctl_cb,
(acb->task->xfer_dir == SCSI_XFER_WRITE) ?
&data : NULL,
acb) != 0) {
scsi_free_scsi_task(acb->task);
qemu_aio_release(acb);
return NULL;
}
/* tell libiscsi to read straight into the buffer we got from ioctl */
if (acb->task->xfer_dir == SCSI_XFER_READ) {
scsi_task_add_data_in_buffer(acb->task,
acb->ioh->dxfer_len,
acb->ioh->dxferp);
}
iscsi_set_events(iscsilun);
return &acb->common;
}
static void ioctl_cb(void *opaque, int status)
{
int *p_status = opaque;
*p_status = status;
}
static int iscsi_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
{
IscsiLun *iscsilun = bs->opaque;
int status;
switch (req) {
case SG_GET_VERSION_NUM:
*(int *)buf = 30000;
break;
case SG_GET_SCSI_ID:
((struct sg_scsi_id *)buf)->scsi_type = iscsilun->type;
break;
case SG_IO:
status = -EINPROGRESS;
iscsi_aio_ioctl(bs, req, buf, ioctl_cb, &status);
while (status == -EINPROGRESS) {
qemu_aio_wait();
}
return 0;
default:
return -1;
}
return 0;
}
#endif
static int64_t
iscsi_getlength(BlockDriverState *bs)
{
IscsiLun *iscsilun = bs->opaque;
int64_t len;
len = iscsilun->num_blocks;
len *= iscsilun->block_size;
return len;
}
static int parse_chap(struct iscsi_context *iscsi, const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *user = NULL;
const char *password = NULL;
list = qemu_find_opts("iscsi");
if (!list) {
return 0;
}
opts = qemu_opts_find(list, target);
if (opts == NULL) {
opts = QTAILQ_FIRST(&list->head);
if (!opts) {
return 0;
}
}
user = qemu_opt_get(opts, "user");
if (!user) {
return 0;
}
password = qemu_opt_get(opts, "password");
if (!password) {
error_report("CHAP username specified but no password was given");
return -1;
}
if (iscsi_set_initiator_username_pwd(iscsi, user, password)) {
error_report("Failed to set initiator username and password");
return -1;
}
return 0;
}
static void parse_header_digest(struct iscsi_context *iscsi, const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *digest = NULL;
list = qemu_find_opts("iscsi");
if (!list) {
return;
}
opts = qemu_opts_find(list, target);
if (opts == NULL) {
opts = QTAILQ_FIRST(&list->head);
if (!opts) {
return;
}
}
digest = qemu_opt_get(opts, "header-digest");
if (!digest) {
return;
}
if (!strcmp(digest, "CRC32C")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_CRC32C);
} else if (!strcmp(digest, "NONE")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE);
} else if (!strcmp(digest, "CRC32C-NONE")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_CRC32C_NONE);
} else if (!strcmp(digest, "NONE-CRC32C")) {
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE_CRC32C);
} else {
error_report("Invalid header-digest setting : %s", digest);
}
}
static char *parse_initiator_name(const char *target)
{
QemuOptsList *list;
QemuOpts *opts;
const char *name = NULL;
const char *iscsi_name = qemu_get_vm_name();
list = qemu_find_opts("iscsi");
if (list) {
opts = qemu_opts_find(list, target);
if (!opts) {
opts = QTAILQ_FIRST(&list->head);
}
if (opts) {
name = qemu_opt_get(opts, "initiator-name");
}
}
if (name) {
return g_strdup(name);
} else {
return g_strdup_printf("iqn.2008-11.org.linux-kvm%s%s",
iscsi_name ? ":" : "",
iscsi_name ? iscsi_name : "");
}
}
/*
* We support iscsi url's on the form
* iscsi://[<username>%<password>@]<host>[:<port>]/<targetname>/<lun>
*/
static int iscsi_open(BlockDriverState *bs, const char *filename, int flags)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = NULL;
struct iscsi_url *iscsi_url = NULL;
struct scsi_task *task = NULL;
struct scsi_inquiry_standard *inq = NULL;
struct scsi_readcapacity10 *rc10 = NULL;
struct scsi_readcapacity16 *rc16 = NULL;
char *initiator_name = NULL;
int ret;
if ((BDRV_SECTOR_SIZE % 512) != 0) {
error_report("iSCSI: Invalid BDRV_SECTOR_SIZE. "
"BDRV_SECTOR_SIZE(%lld) is not a multiple "
"of 512", BDRV_SECTOR_SIZE);
return -EINVAL;
}
iscsi_url = iscsi_parse_full_url(iscsi, filename);
if (iscsi_url == NULL) {
error_report("Failed to parse URL : %s", filename);
ret = -EINVAL;
goto out;
}
memset(iscsilun, 0, sizeof(IscsiLun));
initiator_name = parse_initiator_name(iscsi_url->target);
iscsi = iscsi_create_context(initiator_name);
if (iscsi == NULL) {
error_report("iSCSI: Failed to create iSCSI context.");
ret = -ENOMEM;
goto out;
}
if (iscsi_set_targetname(iscsi, iscsi_url->target)) {
error_report("iSCSI: Failed to set target name.");
ret = -EINVAL;
goto out;
}
if (iscsi_url->user != NULL) {
ret = iscsi_set_initiator_username_pwd(iscsi, iscsi_url->user,
iscsi_url->passwd);
if (ret != 0) {
error_report("Failed to set initiator username and password");
ret = -EINVAL;
goto out;
}
}
/* check if we got CHAP username/password via the options */
if (parse_chap(iscsi, iscsi_url->target) != 0) {
error_report("iSCSI: Failed to set CHAP user/password");
ret = -EINVAL;
goto out;
}
if (iscsi_set_session_type(iscsi, ISCSI_SESSION_NORMAL) != 0) {
error_report("iSCSI: Failed to set session type to normal.");
ret = -EINVAL;
goto out;
}
iscsi_set_header_digest(iscsi, ISCSI_HEADER_DIGEST_NONE_CRC32C);
/* check if we got HEADER_DIGEST via the options */
parse_header_digest(iscsi, iscsi_url->target);
if (iscsi_full_connect_sync(iscsi, iscsi_url->portal, iscsi_url->lun) != 0) {
error_report("iSCSI: Failed to connect to LUN : %s",
iscsi_get_error(iscsi));
ret = -EINVAL;
goto out;
}
iscsilun->iscsi = iscsi;
iscsilun->lun = iscsi_url->lun;
task = iscsi_inquiry_sync(iscsi, iscsilun->lun, 0, 0, 36);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_report("iSCSI: failed to send inquiry command.");
ret = -EINVAL;
goto out;
}
inq = scsi_datain_unmarshall(task);
if (inq == NULL) {
error_report("iSCSI: Failed to unmarshall inquiry data.");
ret = -EINVAL;
goto out;
}
iscsilun->type = inq->periperal_device_type;
scsi_free_scsi_task(task);
switch (iscsilun->type) {
case TYPE_DISK:
task = iscsi_readcapacity16_sync(iscsi, iscsilun->lun);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_report("iSCSI: failed to send readcapacity16 command.");
ret = -EINVAL;
goto out;
}
rc16 = scsi_datain_unmarshall(task);
if (rc16 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity16 data.");
ret = -EINVAL;
goto out;
}
iscsilun->block_size = rc16->block_length;
iscsilun->num_blocks = rc16->returned_lba + 1;
break;
case TYPE_ROM:
task = iscsi_readcapacity10_sync(iscsi, iscsilun->lun, 0, 0);
if (task == NULL || task->status != SCSI_STATUS_GOOD) {
error_report("iSCSI: failed to send readcapacity10 command.");
ret = -EINVAL;
goto out;
}
rc10 = scsi_datain_unmarshall(task);
if (rc10 == NULL) {
error_report("iSCSI: Failed to unmarshall readcapacity10 data.");
ret = -EINVAL;
goto out;
}
iscsilun->block_size = rc10->block_size;
if (rc10->lba == 0) {
/* blank disk loaded */
iscsilun->num_blocks = 0;
} else {
iscsilun->num_blocks = rc10->lba + 1;
}
break;
default:
break;
}
bs->total_sectors = iscsilun->num_blocks *
iscsilun->block_size / BDRV_SECTOR_SIZE ;
/* Medium changer or tape. We dont have any emulation for this so this must
* be sg ioctl compatible. We force it to be sg, otherwise qemu will try
* to read from the device to guess the image format.
*/
if (iscsilun->type == TYPE_MEDIUM_CHANGER ||
iscsilun->type == TYPE_TAPE) {
bs->sg = 1;
}
ret = 0;
out:
if (initiator_name != NULL) {
g_free(initiator_name);
}
if (iscsi_url != NULL) {
iscsi_destroy_url(iscsi_url);
}
if (task != NULL) {
scsi_free_scsi_task(task);
}
if (ret) {
if (iscsi != NULL) {
iscsi_destroy_context(iscsi);
}
memset(iscsilun, 0, sizeof(IscsiLun));
}
return ret;
}
static void iscsi_close(BlockDriverState *bs)
{
IscsiLun *iscsilun = bs->opaque;
struct iscsi_context *iscsi = iscsilun->iscsi;
qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL, NULL);
iscsi_destroy_context(iscsi);
memset(iscsilun, 0, sizeof(IscsiLun));
}
static int iscsi_has_zero_init(BlockDriverState *bs)
{
return 0;
}
static BlockDriver bdrv_iscsi = {
.format_name = "iscsi",
.protocol_name = "iscsi",
.instance_size = sizeof(IscsiLun),
.bdrv_file_open = iscsi_open,
.bdrv_close = iscsi_close,
.bdrv_getlength = iscsi_getlength,
.bdrv_aio_readv = iscsi_aio_readv,
.bdrv_aio_writev = iscsi_aio_writev,
.bdrv_aio_flush = iscsi_aio_flush,
.bdrv_aio_discard = iscsi_aio_discard,
.bdrv_has_zero_init = iscsi_has_zero_init,
#ifdef __linux__
.bdrv_ioctl = iscsi_ioctl,
.bdrv_aio_ioctl = iscsi_aio_ioctl,
#endif
};
static void iscsi_block_init(void)
{
bdrv_register(&bdrv_iscsi);
}
block_init(iscsi_block_init);
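The URL form quoted in the driver comment above, iscsi://[<username>%<password>@]<host>[:<port>]/<targetname>/<lun>, maps directly onto a -drive argument; a hypothetical example (portal address, target IQN and LUN are made up) is:

    qemu-system-x86_64 -drive file=iscsi://192.0.2.1:3260/iqn.2001-04.com.example:storage/1

CHAP credentials and the header digest can also be supplied through the iscsi options group handled by parse_chap() and parse_header_digest() above.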


@@ -28,7 +28,6 @@
#include "qemu-common.h"
#include "nbd.h"
#include "block_int.h"
#include "module.h"
#include "qemu_socket.h"
@@ -46,25 +45,12 @@
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
typedef struct BDRVNBDState {
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
char *export_name; /* An NBD server may export several devices */
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
/* If it begins with '/', this is a UNIX domain socket. Otherwise,
* it's a string of the form <hostname|ip4|\[ip6\]>:port
*/
@@ -79,7 +65,7 @@ static int nbd_config(BDRVNBDState *s, const char *filename, int flags)
const char *unixpath;
int err = -EINVAL;
file = g_strdup(filename);
file = qemu_strdup(filename);
export_name = strstr(file, EN_OPTSTR);
if (export_name) {
@@ -88,7 +74,7 @@ static int nbd_config(BDRVNBDState *s, const char *filename, int flags)
}
export_name[0] = 0; /* truncate 'file' */
export_name += strlen(EN_OPTSTR);
s->export_name = g_strdup(export_name);
s->export_name = qemu_strdup(export_name);
}
/* extract the host_spec - fail if it's not nbd:... */
@@ -101,159 +87,22 @@ static int nbd_config(BDRVNBDState *s, const char *filename, int flags)
if (unixpath[0] != '/') { /* We demand an absolute path*/
goto out;
}
s->host_spec = g_strdup(unixpath);
s->host_spec = qemu_strdup(unixpath);
} else {
s->host_spec = g_strdup(host_spec);
s->host_spec = qemu_strdup(host_spec);
}
err = 0;
out:
g_free(file);
qemu_free(file);
if (err != 0) {
g_free(s->export_name);
g_free(s->host_spec);
qemu_free(s->export_name);
qemu_free(s->host_spec);
}
return err;
}
static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
{
int i;
/* Poor man semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static int nbd_have_request(void *opaque)
{
BDRVNBDState *s = opaque;
return s->in_flight > 0;
}
static void nbd_reply_ready(void *opaque)
{
BDRVNBDState *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_restart_write(void *opaque)
{
BDRVNBDState *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
nbd_have_request, s);
rc = nbd_send_request(s->sock, request);
if (rc >= 0 && qiov) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
return -EIO;
}
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_end(BDRVNBDState *s, struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_establish_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
@@ -261,6 +110,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
int ret;
off_t size;
size_t blocksize;
uint32_t nbdflags;
if (s->host_spec[0] == '/') {
sock = unix_socket_outgoing(s->host_spec);
@@ -269,25 +119,22 @@ static int nbd_establish_connection(BlockDriverState *bs)
}
/* Failed to establish connection */
if (sock < 0) {
if (sock == -1) {
logout("Failed to establish connection to NBD server\n");
return -errno;
}
/* NBD handshake */
ret = nbd_receive_negotiate(sock, s->export_name, &s->nbdflags, &size,
ret = nbd_receive_negotiate(sock, s->export_name, &nbdflags, &size,
&blocksize);
if (ret < 0) {
if (ret == -1) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
return -errno;
}
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
/* Now that we're connected, set the socket to be non-blocking */
socket_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->sock = sock;
s->size = size;
@@ -303,11 +150,11 @@ static void nbd_teardown_connection(BlockDriverState *bs)
struct nbd_request request;
request.type = NBD_CMD_DISC;
request.handle = (uint64_t)(intptr_t)bs;
request.from = 0;
request.len = 0;
nbd_send_request(s->sock, &request);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
closesocket(s->sock);
}
@@ -316,9 +163,6 @@ static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
BDRVNBDState *s = bs->opaque;
int result;
qemu_co_mutex_init(&s->send_mutex);
qemu_co_mutex_init(&s->free_sema);
/* Pop the config into our state object. Exit if invalid. */
result = nbd_config(s, filename, flags);
if (result != 0) {
@@ -333,158 +177,71 @@ static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
return result;
}
static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
static int nbd_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_READ;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, qiov, offset);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_WRITE;
if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
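A quick sanity check of that constant (a sketch only; _Static_assert is C11 and not something this tree uses): 2040 sectors is 1,044,480 bytes, which stays below the roughly 1 MiB qemu-nbd per-request limit while remaining a multiple of 4 KiB.
/* Sketch: why 2040 sectors satisfies both constraints named in the comment above. */
_Static_assert(2040 * 512 < 1024 * 1024, "below the ~1M per-request limit");
_Static_assert((2040 * 512) % 4096 == 0, "requests stay 4K aligned");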
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_flush(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
if (!(s->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
request.type = NBD_CMD_FLUSH;
if (s->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
if (!(s->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.type = NBD_CMD_TRIM;
request.handle = (uint64_t)(intptr_t)bs;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
if (nbd_send_request(s->sock, &request) == -1)
return -errno;
if (nbd_receive_reply(s->sock, &reply) == -1)
return -errno;
if (reply.error != 0)
return -reply.error;
if (reply.handle != request.handle)
return -EIO;
if (nbd_wr_sync(s->sock, buf, request.len, 1) != request.len)
return -EIO;
return 0;
}
static int nbd_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
request.type = NBD_CMD_WRITE;
request.handle = (uint64_t)(intptr_t)bs;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
if (nbd_send_request(s->sock, &request) == -1)
return -errno;
if (nbd_wr_sync(s->sock, (uint8_t*)buf, request.len, 0) != request.len)
return -EIO;
if (nbd_receive_reply(s->sock, &reply) == -1)
return -errno;
if (reply.error != 0)
return -reply.error;
if (reply.handle != request.handle)
return -EIO;
return 0;
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
g_free(s->export_name);
g_free(s->host_spec);
qemu_free(s->export_name);
qemu_free(s->host_spec);
nbd_teardown_connection(bs);
}
@@ -497,16 +254,14 @@ static int64_t nbd_getlength(BlockDriverState *bs)
}
static BlockDriver bdrv_nbd = {
.format_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
.protocol_name = "nbd",
.format_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_file_open = nbd_open,
.bdrv_read = nbd_read,
.bdrv_write = nbd_write,
.bdrv_close = nbd_close,
.bdrv_getlength = nbd_getlength,
.protocol_name = "nbd",
};
static void bdrv_nbd_init(void)


@@ -43,10 +43,9 @@ struct parallels_header {
uint32_t catalog_entries;
uint32_t nb_sectors;
char padding[24];
} QEMU_PACKED;
} __attribute__((packed));
typedef struct BDRVParallelsState {
CoMutex lock;
uint32_t *catalog_bitmap;
int catalog_size;
@@ -89,18 +88,17 @@ static int parallels_open(BlockDriverState *bs, int flags)
s->tracks = le32_to_cpu(ph.tracks);
s->catalog_size = le32_to_cpu(ph.catalog_entries);
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
s->catalog_bitmap = qemu_malloc(s->catalog_size * 4);
if (bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4) !=
s->catalog_size * 4)
goto fail;
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
if (s->catalog_bitmap)
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
return -1;
}
@@ -136,21 +134,10 @@ static int parallels_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int parallels_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVParallelsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = parallels_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void parallels_close(BlockDriverState *bs)
{
BDRVParallelsState *s = bs->opaque;
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
}
static BlockDriver bdrv_parallels = {
@@ -158,7 +145,7 @@ static BlockDriver bdrv_parallels = {
.instance_size = sizeof(BDRVParallelsState),
.bdrv_probe = parallels_probe,
.bdrv_open = parallels_open,
.bdrv_read = parallels_co_read,
.bdrv_read = parallels_read,
.bdrv_close = parallels_close,
};


@@ -26,7 +26,6 @@
#include "module.h"
#include <zlib.h>
#include "aes.h"
#include "migration.h"
/**************************************************************/
/* QEMU COW block driver with compression and encryption support */
@@ -74,8 +73,6 @@ typedef struct BDRVQcowState {
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
AES_KEY aes_decrypt_key;
CoMutex lock;
Error *migration_blocker;
} BDRVQcowState;
static int decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
@@ -95,13 +92,11 @@ static int qcow_probe(const uint8_t *buf, int buf_size, const char *filename)
static int qcow_open(BlockDriverState *bs, int flags)
{
BDRVQcowState *s = bs->opaque;
int len, i, shift, ret;
int len, i, shift;
QCowHeader header;
ret = bdrv_pread(bs->file, 0, &header, sizeof(header));
if (ret < 0) {
if (bdrv_pread(bs->file, 0, &header, sizeof(header)) != sizeof(header))
goto fail;
}
be32_to_cpus(&header.magic);
be32_to_cpus(&header.version);
be64_to_cpus(&header.backing_file_offset);
@@ -111,31 +106,15 @@ static int qcow_open(BlockDriverState *bs, int flags)
be32_to_cpus(&header.crypt_method);
be64_to_cpus(&header.l1_table_offset);
if (header.magic != QCOW_MAGIC) {
ret = -EINVAL;
if (header.magic != QCOW_MAGIC || header.version != QCOW_VERSION)
goto fail;
}
if (header.version != QCOW_VERSION) {
char version[64];
snprintf(version, sizeof(version), "QCOW version %d", header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
ret = -ENOTSUP;
if (header.size <= 1 || header.cluster_bits < 9)
goto fail;
}
if (header.size <= 1 || header.cluster_bits < 9) {
ret = -EINVAL;
if (header.crypt_method > QCOW_CRYPT_AES)
goto fail;
}
if (header.crypt_method > QCOW_CRYPT_AES) {
ret = -EINVAL;
goto fail;
}
s->crypt_method_header = header.crypt_method;
if (s->crypt_method_header) {
if (s->crypt_method_header)
bs->encrypted = 1;
}
s->cluster_bits = header.cluster_bits;
s->cluster_size = 1 << s->cluster_bits;
s->cluster_sectors = 1 << (s->cluster_bits - 9);
@@ -149,52 +128,44 @@ static int qcow_open(BlockDriverState *bs, int flags)
s->l1_size = (header.size + (1LL << shift) - 1) >> shift;
s->l1_table_offset = header.l1_table_offset;
s->l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
ret = bdrv_pread(bs->file, s->l1_table_offset, s->l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
s->l1_table = qemu_malloc(s->l1_size * sizeof(uint64_t));
if (!s->l1_table)
goto fail;
if (bdrv_pread(bs->file, s->l1_table_offset, s->l1_table, s->l1_size * sizeof(uint64_t)) !=
s->l1_size * sizeof(uint64_t))
goto fail;
}
for(i = 0;i < s->l1_size; i++) {
be64_to_cpus(&s->l1_table[i]);
}
/* alloc L2 cache */
s->l2_cache = g_malloc(s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
s->cluster_cache = g_malloc(s->cluster_size);
s->cluster_data = g_malloc(s->cluster_size);
s->l2_cache = qemu_malloc(s->l2_size * L2_CACHE_SIZE * sizeof(uint64_t));
if (!s->l2_cache)
goto fail;
s->cluster_cache = qemu_malloc(s->cluster_size);
if (!s->cluster_cache)
goto fail;
s->cluster_data = qemu_malloc(s->cluster_size);
if (!s->cluster_data)
goto fail;
s->cluster_cache_offset = -1;
/* read the backing file name */
if (header.backing_file_offset != 0) {
len = header.backing_file_size;
if (len > 1023) {
if (len > 1023)
len = 1023;
}
ret = bdrv_pread(bs->file, header.backing_file_offset,
bs->backing_file, len);
if (ret < 0) {
if (bdrv_pread(bs->file, header.backing_file_offset, bs->backing_file, len) != len)
goto fail;
}
bs->backing_file[len] = '\0';
}
/* Disable migration when qcow images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"qcow", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->l1_table);
g_free(s->l2_cache);
g_free(s->cluster_cache);
g_free(s->cluster_data);
return ret;
qemu_free(s->l1_table);
qemu_free(s->l2_cache);
qemu_free(s->cluster_cache);
qemu_free(s->cluster_data);
return -1;
}
static int qcow_set_key(BlockDriverState *bs, const char *key)
@@ -218,6 +189,24 @@ static int qcow_set_key(BlockDriverState *bs, const char *key)
return -1;
if (AES_set_decrypt_key(keybuf, 128, &s->aes_decrypt_key) != 0)
return -1;
#if 0
/* test */
{
uint8_t in[16];
uint8_t out[16];
uint8_t tmp[16];
for(i=0;i<16;i++)
in[i] = i;
AES_encrypt(in, tmp, &s->aes_encrypt_key);
AES_decrypt(tmp, out, &s->aes_decrypt_key);
for(i = 0; i < 16; i++)
printf(" %02x", tmp[i]);
printf("\n");
for(i = 0; i < 16; i++)
printf(" %02x", out[i]);
printf("\n");
}
#endif
return 0;
}
@@ -386,16 +375,14 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
return cluster_offset;
}
static int coroutine_fn qcow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
static int qcow_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster, n;
uint64_t cluster_offset;
qemu_co_mutex_lock(&s->lock);
cluster_offset = get_cluster_offset(bs, sector_num << 9, 0, 0, 0, 0);
qemu_co_mutex_unlock(&s->lock);
index_in_cluster = sector_num & (s->cluster_sectors - 1);
n = s->cluster_sectors - index_in_cluster;
if (n > nb_sectors)
@@ -453,205 +440,375 @@ static int decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
return 0;
}
static coroutine_fn int qcow_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
#if 0
static int qcow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
int ret = 0, n;
int ret, index_in_cluster, n;
uint64_t cluster_offset;
struct iovec hd_iov;
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
if (qiov->niov > 1) {
buf = orig_buf = qemu_blockalign(bs, qiov->size);
} else {
orig_buf = NULL;
buf = (uint8_t *)qiov->iov->iov_base;
}
qemu_co_mutex_lock(&s->lock);
while (nb_sectors != 0) {
/* prepare next request */
cluster_offset = get_cluster_offset(bs, sector_num << 9,
0, 0, 0, 0);
while (nb_sectors > 0) {
cluster_offset = get_cluster_offset(bs, sector_num << 9, 0, 0, 0, 0);
index_in_cluster = sector_num & (s->cluster_sectors - 1);
n = s->cluster_sectors - index_in_cluster;
if (n > nb_sectors) {
if (n > nb_sectors)
n = nb_sectors;
}
if (!cluster_offset) {
if (bs->backing_hd) {
/* read from the base image */
hd_iov.iov_base = (void *)buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->backing_hd, sector_num,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
goto fail;
}
ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
if (ret < 0)
return -1;
} else {
/* Note: in this case, no need to wait */
memset(buf, 0, 512 * n);
}
} else if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
/* add AIO support for compressed blocks ? */
if (decompress_cluster(bs, cluster_offset) < 0) {
goto fail;
}
memcpy(buf,
s->cluster_cache + index_in_cluster * 512, 512 * n);
if (decompress_cluster(bs, cluster_offset) < 0)
return -1;
memcpy(buf, s->cluster_cache + index_in_cluster * 512, 512 * n);
} else {
if ((cluster_offset & 511) != 0) {
goto fail;
}
hd_iov.iov_base = (void *)buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_readv(bs->file,
(cluster_offset >> 9) + index_in_cluster,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
break;
}
ret = bdrv_pread(bs->file, cluster_offset + index_in_cluster * 512, buf, n * 512);
if (ret != n * 512)
return -1;
if (s->crypt_method) {
encrypt_sectors(s, sector_num, buf, buf,
n, 0,
encrypt_sectors(s, sector_num, buf, buf, n, 0,
&s->aes_decrypt_key);
}
}
ret = 0;
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
return 0;
}
#endif
done:
qemu_co_mutex_unlock(&s->lock);
typedef struct QCowAIOCB {
BlockDriverAIOCB common;
int64_t sector_num;
QEMUIOVector *qiov;
uint8_t *buf;
void *orig_buf;
int nb_sectors;
int n;
uint64_t cluster_offset;
uint8_t *cluster_data;
struct iovec hd_iov;
bool is_write;
QEMUBH *bh;
QEMUIOVector hd_qiov;
BlockDriverAIOCB *hd_aiocb;
} QCowAIOCB;
if (qiov->niov > 1) {
qemu_iovec_from_buf(qiov, 0, orig_buf, qiov->size);
qemu_vfree(orig_buf);
}
return ret;
fail:
ret = -EIO;
goto done;
static void qcow_aio_cancel(BlockDriverAIOCB *blockacb)
{
QCowAIOCB *acb = container_of(blockacb, QCowAIOCB, common);
if (acb->hd_aiocb)
bdrv_aio_cancel(acb->hd_aiocb);
qemu_aio_release(acb);
}
static coroutine_fn int qcow_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static AIOPool qcow_aio_pool = {
.aiocb_size = sizeof(QCowAIOCB),
.cancel = qcow_aio_cancel,
};
static QCowAIOCB *qcow_aio_setup(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int is_write)
{
QCowAIOCB *acb;
acb = qemu_aio_get(&qcow_aio_pool, bs, cb, opaque);
if (!acb)
return NULL;
acb->hd_aiocb = NULL;
acb->sector_num = sector_num;
acb->qiov = qiov;
acb->is_write = is_write;
if (qiov->niov > 1) {
acb->buf = acb->orig_buf = qemu_blockalign(bs, qiov->size);
if (is_write)
qemu_iovec_to_buffer(qiov, acb->buf);
} else {
acb->buf = (uint8_t *)qiov->iov->iov_base;
}
acb->nb_sectors = nb_sectors;
acb->n = 0;
acb->cluster_offset = 0;
return acb;
}
static void qcow_aio_read_cb(void *opaque, int ret);
static void qcow_aio_write_cb(void *opaque, int ret);
static void qcow_aio_rw_bh(void *opaque)
{
QCowAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
if (acb->is_write) {
qcow_aio_write_cb(opaque, 0);
} else {
qcow_aio_read_cb(opaque, 0);
}
}
static int qcow_schedule_bh(QEMUBHFunc *cb, QCowAIOCB *acb)
{
if (acb->bh) {
return -EIO;
}
acb->bh = qemu_bh_new(cb, acb);
if (!acb->bh) {
return -EIO;
}
qemu_bh_schedule(acb->bh);
return 0;
}
static void qcow_aio_read_cb(void *opaque, int ret)
{
QCowAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
acb->hd_aiocb = NULL;
if (ret < 0)
goto done;
redo:
/* post process the read buffer */
if (!acb->cluster_offset) {
/* nothing to do */
} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
/* nothing to do */
} else {
if (s->crypt_method) {
encrypt_sectors(s, acb->sector_num, acb->buf, acb->buf,
acb->n, 0,
&s->aes_decrypt_key);
}
}
acb->nb_sectors -= acb->n;
acb->sector_num += acb->n;
acb->buf += acb->n * 512;
if (acb->nb_sectors == 0) {
/* request completed */
ret = 0;
goto done;
}
/* prepare next AIO request */
acb->cluster_offset = get_cluster_offset(bs, acb->sector_num << 9,
0, 0, 0, 0);
index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
acb->n = s->cluster_sectors - index_in_cluster;
if (acb->n > acb->nb_sectors)
acb->n = acb->nb_sectors;
if (!acb->cluster_offset) {
if (bs->backing_hd) {
/* read from the base image */
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = acb->n * 512;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_readv(bs->backing_hd, acb->sector_num,
&acb->hd_qiov, acb->n, qcow_aio_read_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
} else {
/* Note: in this case, no need to wait */
memset(acb->buf, 0, 512 * acb->n);
goto redo;
}
} else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
/* add AIO support for compressed blocks ? */
if (decompress_cluster(bs, acb->cluster_offset) < 0) {
ret = -EIO;
goto done;
}
memcpy(acb->buf,
s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
goto redo;
} else {
if ((acb->cluster_offset & 511) != 0) {
ret = -EIO;
goto done;
}
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = acb->n * 512;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_readv(bs->file,
(acb->cluster_offset >> 9) + index_in_cluster,
&acb->hd_qiov, acb->n, qcow_aio_read_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
}
return;
done:
if (acb->qiov->niov > 1) {
qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
qemu_vfree(acb->orig_buf);
}
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *qcow_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
QCowAIOCB *acb;
int ret;
acb = qcow_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
if (!acb)
return NULL;
ret = qcow_schedule_bh(qcow_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
return &acb->common;
}
static void qcow_aio_write_cb(void *opaque, int ret)
{
QCowAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVQcowState *s = bs->opaque;
int index_in_cluster;
uint64_t cluster_offset;
const uint8_t *src_buf;
int ret = 0, n;
uint8_t *cluster_data = NULL;
struct iovec hd_iov;
QEMUIOVector hd_qiov;
uint8_t *buf;
void *orig_buf;
acb->hd_aiocb = NULL;
if (ret < 0)
goto done;
acb->nb_sectors -= acb->n;
acb->sector_num += acb->n;
acb->buf += acb->n * 512;
if (acb->nb_sectors == 0) {
/* request completed */
ret = 0;
goto done;
}
index_in_cluster = acb->sector_num & (s->cluster_sectors - 1);
acb->n = s->cluster_sectors - index_in_cluster;
if (acb->n > acb->nb_sectors)
acb->n = acb->nb_sectors;
cluster_offset = get_cluster_offset(bs, acb->sector_num << 9, 1, 0,
index_in_cluster,
index_in_cluster + acb->n);
if (!cluster_offset || (cluster_offset & 511) != 0) {
ret = -EIO;
goto done;
}
if (s->crypt_method) {
if (!acb->cluster_data) {
acb->cluster_data = qemu_mallocz(s->cluster_size);
if (!acb->cluster_data) {
ret = -ENOMEM;
goto done;
}
}
encrypt_sectors(s, acb->sector_num, acb->cluster_data, acb->buf,
acb->n, 1, &s->aes_encrypt_key);
src_buf = acb->cluster_data;
} else {
src_buf = acb->buf;
}
acb->hd_iov.iov_base = (void *)src_buf;
acb->hd_iov.iov_len = acb->n * 512;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file,
(cluster_offset >> 9) + index_in_cluster,
&acb->hd_qiov, acb->n,
qcow_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
return;
done:
if (acb->qiov->niov > 1)
qemu_vfree(acb->orig_buf);
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *qcow_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVQcowState *s = bs->opaque;
QCowAIOCB *acb;
int ret;
s->cluster_cache_offset = -1; /* disable compressed cache */
if (qiov->niov > 1) {
buf = orig_buf = qemu_blockalign(bs, qiov->size);
qemu_iovec_to_buf(qiov, 0, buf, qiov->size);
} else {
orig_buf = NULL;
buf = (uint8_t *)qiov->iov->iov_base;
acb = qcow_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
if (!acb)
return NULL;
ret = qcow_schedule_bh(qcow_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
qemu_co_mutex_lock(&s->lock);
while (nb_sectors != 0) {
index_in_cluster = sector_num & (s->cluster_sectors - 1);
n = s->cluster_sectors - index_in_cluster;
if (n > nb_sectors) {
n = nb_sectors;
}
cluster_offset = get_cluster_offset(bs, sector_num << 9, 1, 0,
index_in_cluster,
index_in_cluster + n);
if (!cluster_offset || (cluster_offset & 511) != 0) {
ret = -EIO;
break;
}
if (s->crypt_method) {
if (!cluster_data) {
cluster_data = g_malloc0(s->cluster_size);
}
encrypt_sectors(s, sector_num, cluster_data, buf,
n, 1, &s->aes_encrypt_key);
src_buf = cluster_data;
} else {
src_buf = buf;
}
hd_iov.iov_base = (void *)src_buf;
hd_iov.iov_len = n * 512;
qemu_iovec_init_external(&hd_qiov, &hd_iov, 1);
qemu_co_mutex_unlock(&s->lock);
ret = bdrv_co_writev(bs->file,
(cluster_offset >> 9) + index_in_cluster,
n, &hd_qiov);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
break;
}
ret = 0;
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
qemu_co_mutex_unlock(&s->lock);
if (qiov->niov > 1) {
qemu_vfree(orig_buf);
}
g_free(cluster_data);
return ret;
return &acb->common;
}
static void qcow_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
g_free(s->l1_table);
g_free(s->l2_cache);
g_free(s->cluster_cache);
g_free(s->cluster_data);
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
qemu_free(s->l1_table);
qemu_free(s->l2_cache);
qemu_free(s->cluster_cache);
qemu_free(s->cluster_data);
}
static int qcow_create(const char *filename, QEMUOptionParameter *options)
{
int header_size, backing_filename_len, l1_size, shift, i;
int fd, header_size, backing_filename_len, l1_size, i, shift;
QCowHeader header;
uint8_t *tmp;
uint64_t tmp;
int64_t total_size = 0;
const char *backing_file = NULL;
int flags = 0;
int ret;
BlockDriverState *qcow_bs;
/* Read out options */
while (options && options->name) {
@@ -665,21 +822,9 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options)
options++;
}
ret = bdrv_create_file(filename, options);
if (ret < 0) {
return ret;
}
ret = bdrv_file_open(&qcow_bs, filename, BDRV_O_RDWR);
if (ret < 0) {
return ret;
}
ret = bdrv_truncate(qcow_bs, 0);
if (ret < 0) {
goto exit;
}
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0)
return -errno;
memset(&header, 0, sizeof(header));
header.magic = cpu_to_be32(QCOW_MAGIC);
header.version = cpu_to_be32(QCOW_VERSION);
@@ -715,34 +860,33 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options)
}
/* write all the data */
ret = bdrv_pwrite(qcow_bs, 0, &header, sizeof(header));
ret = qemu_write_full(fd, &header, sizeof(header));
if (ret != sizeof(header)) {
ret = -errno;
goto exit;
}
if (backing_file) {
ret = bdrv_pwrite(qcow_bs, sizeof(header),
backing_file, backing_filename_len);
ret = qemu_write_full(fd, backing_file, backing_filename_len);
if (ret != backing_filename_len) {
ret = -errno;
goto exit;
}
}
lseek(fd, header_size, SEEK_SET);
tmp = 0;
for(i = 0;i < l1_size; i++) {
ret = qemu_write_full(fd, &tmp, sizeof(tmp));
if (ret != sizeof(tmp)) {
ret = -errno;
goto exit;
}
}
tmp = g_malloc0(BDRV_SECTOR_SIZE);
for (i = 0; i < ((sizeof(uint64_t)*l1_size + BDRV_SECTOR_SIZE - 1)/
BDRV_SECTOR_SIZE); i++) {
ret = bdrv_pwrite(qcow_bs, header_size +
BDRV_SECTOR_SIZE*i, tmp, BDRV_SECTOR_SIZE);
if (ret != BDRV_SECTOR_SIZE) {
g_free(tmp);
goto exit;
}
}
g_free(tmp);
ret = 0;
exit:
bdrv_delete(qcow_bs);
close(fd);
return ret;
}
@@ -781,7 +925,9 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
if (nb_sectors != s->cluster_sectors)
return -EINVAL;
out_buf = g_malloc(s->cluster_size + (s->cluster_size / 1000) + 128);
out_buf = qemu_malloc(s->cluster_size + (s->cluster_size / 1000) + 128);
if (!out_buf)
return -1;
/* best compression, small window, no zlib header */
memset(&strm, 0, sizeof(strm));
@@ -789,8 +935,8 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
Z_DEFLATED, -12,
9, Z_DEFAULT_STRATEGY);
if (ret != 0) {
ret = -EINVAL;
goto fail;
qemu_free(out_buf);
return -1;
}
strm.avail_in = s->cluster_size;
@@ -800,9 +946,9 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
ret = deflate(&strm, Z_FINISH);
if (ret != Z_STREAM_END && ret != Z_OK) {
qemu_free(out_buf);
deflateEnd(&strm);
ret = -EINVAL;
goto fail;
return -1;
}
out_len = strm.next_out - out_buf;
@@ -810,29 +956,30 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
if (ret != Z_STREAM_END || out_len >= s->cluster_size) {
/* could not compress: write normal cluster */
ret = bdrv_write(bs, sector_num, buf, s->cluster_sectors);
if (ret < 0) {
goto fail;
}
bdrv_write(bs, sector_num, buf, s->cluster_sectors);
} else {
cluster_offset = get_cluster_offset(bs, sector_num << 9, 2,
out_len, 0, 0);
if (cluster_offset == 0) {
ret = -EIO;
goto fail;
}
cluster_offset &= s->cluster_offset_mask;
ret = bdrv_pwrite(bs->file, cluster_offset, out_buf, out_len);
if (ret < 0) {
goto fail;
if (bdrv_pwrite(bs->file, cluster_offset, out_buf, out_len) != out_len) {
qemu_free(out_buf);
return -1;
}
}
ret = 0;
fail:
g_free(out_buf);
return ret;
qemu_free(out_buf);
return 0;
}
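decompress_cluster() is only referenced in this diff, not shown; as a hedged sketch, the inflate side has to mirror the deflateInit2 parameters used above (raw deflate stream, window bits -12, no zlib header), roughly along these lines (decompress_buffer_sketch is an illustrative name, not the helper actually in the tree):
/* Sketch (assumption): inflate counterpart to the raw-deflate stream written
 * by qcow_write_compressed() above. */
static int decompress_buffer_sketch(uint8_t *out_buf, int out_buf_size,
                                    const uint8_t *buf, int buf_size)
{
    z_stream strm;
    int ret, out_len;
    memset(&strm, 0, sizeof(strm));
    strm.next_in = (uint8_t *)buf;
    strm.avail_in = buf_size;
    strm.next_out = out_buf;
    strm.avail_out = out_buf_size;
    ret = inflateInit2(&strm, -12);   /* raw stream, matches deflateInit2(..., -12, ...) */
    if (ret != Z_OK) {
        return -1;
    }
    ret = inflate(&strm, Z_FINISH);
    out_len = strm.next_out - out_buf;
    if ((ret != Z_STREAM_END && ret != Z_BUF_ERROR) || out_len != out_buf_size) {
        inflateEnd(&strm);
        return -1;
    }
    inflateEnd(&strm);
    return 0;
}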
static int qcow_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static BlockDriverAIOCB *qcow_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int qcow_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
@@ -869,15 +1016,15 @@ static BlockDriver bdrv_qcow = {
.bdrv_open = qcow_open,
.bdrv_close = qcow_close,
.bdrv_create = qcow_create,
.bdrv_co_readv = qcow_co_readv,
.bdrv_co_writev = qcow_co_writev,
.bdrv_co_is_allocated = qcow_co_is_allocated,
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,
.bdrv_write_compressed = qcow_write_compressed,
.bdrv_get_info = qcow_get_info,
.bdrv_flush = qcow_flush,
.bdrv_is_allocated = qcow_is_allocated,
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,
.bdrv_aio_readv = qcow_aio_readv,
.bdrv_aio_writev = qcow_aio_writev,
.bdrv_aio_flush = qcow_aio_flush,
.bdrv_write_compressed = qcow_write_compressed,
.bdrv_get_info = qcow_get_info,
.create_options = qcow_create_options,
};


@@ -25,7 +25,6 @@
#include "block_int.h"
#include "qemu-common.h"
#include "qcow2.h"
#include "trace.h"
typedef struct Qcow2CachedTable {
void* table;
@@ -40,17 +39,20 @@ struct Qcow2Cache {
struct Qcow2Cache* depends;
int size;
bool depends_on_flush;
bool writethrough;
};
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables)
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
bool writethrough)
{
BDRVQcowState *s = bs->opaque;
Qcow2Cache *c;
int i;
c = g_malloc0(sizeof(*c));
c = qemu_mallocz(sizeof(*c));
c->size = num_tables;
c->entries = g_malloc0(sizeof(*c->entries) * num_tables);
c->entries = qemu_mallocz(sizeof(*c->entries) * num_tables);
c->writethrough = writethrough;
for (i = 0; i < c->size; i++) {
c->entries[i].table = qemu_blockalign(bs, s->cluster_size);
@@ -68,8 +70,8 @@ int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c)
qemu_vfree(c->entries[i].table);
}
g_free(c->entries);
g_free(c);
qemu_free(c->entries);
qemu_free(c);
return 0;
}
@@ -98,9 +100,6 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
return 0;
}
trace_qcow2_cache_entry_flush(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (c->depends) {
ret = qcow2_cache_flush_dependency(bs, c);
} else if (c->depends_on_flush) {
@@ -133,13 +132,10 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
{
BDRVQcowState *s = bs->opaque;
int result = 0;
int ret;
int i;
trace_qcow2_cache_flush(qemu_coroutine_self(), c == s->l2_table_cache);
for (i = 0; i < c->size; i++) {
ret = qcow2_cache_entry_flush(bs, c, i);
if (ret < 0 && result != -ENOSPC) {
@@ -222,9 +218,6 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
int i;
int ret;
trace_qcow2_cache_get(qemu_coroutine_self(), c == s->l2_table_cache,
offset, read_from_disk);
/* Check if the table is already cached */
for (i = 0; i < c->size; i++) {
if (c->entries[i].offset == offset) {
@@ -234,8 +227,6 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
/* If not, write a table back and replace it */
i = qcow2_cache_find_entry_to_replace(c);
trace_qcow2_cache_get_replace_entry(qemu_coroutine_self(),
c == s->l2_table_cache, i);
if (i < 0) {
return i;
}
@@ -245,8 +236,6 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
return ret;
}
trace_qcow2_cache_get_read(qemu_coroutine_self(),
c == s->l2_table_cache, i);
c->entries[i].offset = 0;
if (read_from_disk) {
if (c == s->l2_table_cache) {
@@ -269,10 +258,6 @@ found:
c->entries[i].cache_hits++;
c->entries[i].ref++;
*table = c->entries[i].table;
trace_qcow2_cache_get_done(qemu_coroutine_self(),
c == s->l2_table_cache, i);
return 0;
}
@@ -304,7 +289,12 @@ found:
*table = NULL;
assert(c->entries[i].ref >= 0);
return 0;
if (c->writethrough) {
return qcow2_cache_entry_flush(bs, c, i);
} else {
return 0;
}
}
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
@@ -321,3 +311,16 @@ void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table)
found:
c->entries[i].dirty = true;
}
bool qcow2_cache_set_writethrough(BlockDriverState *bs, Qcow2Cache *c,
bool enable)
{
bool old = c->writethrough;
if (!old && enable) {
qcow2_cache_flush(bs, c);
}
c->writethrough = enable;
return old;
}

File diff suppressed because it is too large


@@ -41,7 +41,7 @@ int qcow2_refcount_init(BlockDriverState *bs)
int ret, refcount_table_size2, i;
refcount_table_size2 = s->refcount_table_size * sizeof(uint64_t);
s->refcount_table = g_malloc(refcount_table_size2);
s->refcount_table = qemu_malloc(refcount_table_size2);
if (s->refcount_table_size > 0) {
BLKDBG_EVENT(bs->file, BLKDBG_REFTABLE_LOAD);
ret = bdrv_pread(bs->file, s->refcount_table_offset,
@@ -59,7 +59,7 @@ int qcow2_refcount_init(BlockDriverState *bs)
void qcow2_refcount_close(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
g_free(s->refcount_table);
qemu_free(s->refcount_table);
}
@@ -167,7 +167,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
if (refcount_table_index < s->refcount_table_size) {
uint64_t refcount_block_offset =
s->refcount_table[refcount_table_index] & REFT_OFFSET_MASK;
s->refcount_table[refcount_table_index];
/* If it's already there, we're done */
if (refcount_block_offset) {
@@ -301,8 +301,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
uint64_t last_table_size;
uint64_t blocks_clusters;
do {
uint64_t table_clusters =
size_to_clusters(s, table_size * sizeof(uint64_t));
uint64_t table_clusters = size_to_clusters(s, table_size);
blocks_clusters = 1 +
((table_clusters + refcount_block_clusters - 1)
/ refcount_block_clusters);
@@ -324,8 +323,8 @@ static int alloc_refcount_block(BlockDriverState *bs,
uint64_t meta_offset = (blocks_used * refcount_block_clusters) *
s->cluster_size;
uint64_t table_offset = meta_offset + blocks_clusters * s->cluster_size;
uint16_t *new_blocks = g_malloc0(blocks_clusters * s->cluster_size);
uint64_t *new_table = g_malloc0(table_size * sizeof(uint64_t));
uint16_t *new_blocks = qemu_mallocz(blocks_clusters * s->cluster_size);
uint64_t *new_table = qemu_mallocz(table_size * sizeof(uint64_t));
assert(meta_offset >= (s->free_cluster_index * s->cluster_size));
@@ -350,7 +349,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
BLKDBG_EVENT(bs->file, BLKDBG_REFBLOCK_ALLOC_WRITE_BLOCKS);
ret = bdrv_pwrite_sync(bs->file, meta_offset, new_blocks,
blocks_clusters * s->cluster_size);
g_free(new_blocks);
qemu_free(new_blocks);
if (ret < 0) {
goto fail_table;
}
@@ -368,7 +367,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
}
for(i = 0; i < table_size; i++) {
be64_to_cpus(&new_table[i]);
cpu_to_be64s(&new_table[i]);
}
/* Hook up the new refcount table in the qcow2 header */
@@ -386,7 +385,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
uint64_t old_table_offset = s->refcount_table_offset;
uint64_t old_table_size = s->refcount_table_size;
g_free(s->refcount_table);
qemu_free(s->refcount_table);
s->refcount_table = new_table;
s->refcount_table_size = table_size;
s->refcount_table_offset = table_offset;
@@ -401,10 +400,10 @@ static int alloc_refcount_block(BlockDriverState *bs,
return ret;
}
return 0;
return new_block;
fail_table:
g_free(new_table);
qemu_free(new_table);
fail_block:
if (*refcount_block != NULL) {
qcow2_cache_put(bs, s->refcount_block_cache, (void**) refcount_block);
@@ -423,7 +422,7 @@ static int QEMU_WARN_UNUSED_RESULT update_refcount(BlockDriverState *bs,
int ret;
#ifdef DEBUG_ALLOC2
fprintf(stderr, "update_refcount: offset=%" PRId64 " size=%" PRId64 " addend=%d\n",
printf("update_refcount: offset=%" PRId64 " size=%" PRId64 " addend=%d\n",
offset, length, addend);
#endif
if (length < 0) {
@@ -557,7 +556,7 @@ retry:
}
}
#ifdef DEBUG_ALLOC2
fprintf(stderr, "alloc_clusters: size=%" PRId64 " -> %" PRId64 "\n",
printf("alloc_clusters: size=%" PRId64 " -> %" PRId64 "\n",
size,
(s->free_cluster_index - nb_clusters) << s->cluster_bits);
#endif
@@ -583,40 +582,6 @@ int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size)
return offset;
}
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
uint64_t cluster_index;
uint64_t old_free_cluster_index;
int i, refcount, ret;
/* Check how many clusters there are free */
cluster_index = offset >> s->cluster_bits;
for(i = 0; i < nb_clusters; i++) {
refcount = get_refcount(bs, cluster_index++);
if (refcount < 0) {
return refcount;
} else if (refcount != 0) {
break;
}
}
/* And then allocate them */
old_free_cluster_index = s->free_cluster_index;
s->free_cluster_index = cluster_index + i;
ret = update_refcount(bs, offset, i << s->cluster_bits, 1);
if (ret < 0) {
return ret;
}
s->free_cluster_index = old_free_cluster_index;
return i;
}
/* only used to allocate compressed sectors. We try to allocate
contiguous sectors. size must be <= cluster_size */
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
@@ -628,11 +593,10 @@ int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size)
BLKDBG_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_BYTES);
assert(size > 0 && size <= s->cluster_size);
if (s->free_byte_offset == 0) {
offset = qcow2_alloc_clusters(bs, s->cluster_size);
if (offset < 0) {
return offset;
s->free_byte_offset = qcow2_alloc_clusters(bs, s->cluster_size);
if (s->free_byte_offset < 0) {
return s->free_byte_offset;
}
s->free_byte_offset = offset;
}
redo:
free_in_cluster = s->cluster_size -
@@ -681,35 +645,32 @@ void qcow2_free_clusters(BlockDriverState *bs,
}
/*
* Free a cluster using its L2 entry (handles clusters of all types, e.g.
* normal cluster, compressed cluster, etc.)
* free_any_clusters
*
* free clusters according to its type: compressed or not
*
*/
void qcow2_free_any_clusters(BlockDriverState *bs,
uint64_t l2_entry, int nb_clusters)
uint64_t cluster_offset, int nb_clusters)
{
BDRVQcowState *s = bs->opaque;
switch (qcow2_get_cluster_type(l2_entry)) {
case QCOW2_CLUSTER_COMPRESSED:
{
int nb_csectors;
nb_csectors = ((l2_entry >> s->csize_shift) &
s->csize_mask) + 1;
qcow2_free_clusters(bs,
(l2_entry & s->cluster_offset_mask) & ~511,
nb_csectors * 512);
}
break;
case QCOW2_CLUSTER_NORMAL:
qcow2_free_clusters(bs, l2_entry & L2E_OFFSET_MASK,
nb_clusters << s->cluster_bits);
break;
case QCOW2_CLUSTER_UNALLOCATED:
case QCOW2_CLUSTER_ZERO:
break;
default:
abort();
/* free the cluster */
if (cluster_offset & QCOW_OFLAG_COMPRESSED) {
int nb_csectors;
nb_csectors = ((cluster_offset >> s->csize_shift) &
s->csize_mask) + 1;
qcow2_free_clusters(bs,
(cluster_offset & s->cluster_offset_mask) & ~511,
nb_csectors * 512);
return;
}
qcow2_free_clusters(bs, cluster_offset, nb_clusters << s->cluster_bits);
return;
}
@@ -719,6 +680,24 @@ void qcow2_free_any_clusters(BlockDriverState *bs,
void qcow2_create_refcount_update(QCowCreateState *s, int64_t offset,
int64_t size)
{
int refcount;
int64_t start, last, cluster_offset;
uint16_t *p;
start = offset & ~(s->cluster_size - 1);
last = (offset + size - 1) & ~(s->cluster_size - 1);
for(cluster_offset = start; cluster_offset <= last;
cluster_offset += s->cluster_size) {
p = &s->refcount_block[cluster_offset >> s->cluster_bits];
refcount = be16_to_cpu(*p);
refcount++;
*p = cpu_to_be16(refcount);
}
}
/* update the refcounts of snapshots and the copied flag */
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend)
@@ -728,17 +707,20 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t old_offset, old_l2_offset;
int i, j, l1_modified = 0, nb_csectors, refcount;
int ret;
bool old_l2_writethrough, old_refcount_writethrough;
/* Switch caches to writeback mode during update */
old_l2_writethrough =
qcow2_cache_set_writethrough(bs, s->l2_table_cache, false);
old_refcount_writethrough =
qcow2_cache_set_writethrough(bs, s->refcount_block_cache, false);
l2_table = NULL;
l1_table = NULL;
l1_size2 = l1_size * sizeof(uint64_t);
/* WARNING: qcow2_snapshot_goto relies on this function not using the
* l1_table_offset when it is the current s->l1_table_offset! Be careful
* when changing this! */
if (l1_table_offset != s->l1_table_offset) {
if (l1_size2 != 0) {
l1_table = g_malloc0(align_offset(l1_size2, 512));
l1_table = qemu_mallocz(align_offset(l1_size2, 512));
} else {
l1_table = NULL;
}
@@ -762,7 +744,7 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
l2_offset = l1_table[i];
if (l2_offset) {
old_l2_offset = l2_offset;
l2_offset &= L1E_OFFSET_MASK;
l2_offset &= ~QCOW_OFLAG_COPIED;
ret = qcow2_cache_get(bs, s->l2_table_cache, l2_offset,
(void**) &l2_table);
@@ -794,11 +776,10 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
/* compressed clusters are never modified */
refcount = 2;
} else {
uint64_t cluster_index = (offset & L2E_OFFSET_MASK) >> s->cluster_bits;
if (addend != 0) {
refcount = update_cluster_refcount(bs, cluster_index, addend);
refcount = update_cluster_refcount(bs, offset >> s->cluster_bits, addend);
} else {
refcount = get_refcount(bs, cluster_index);
refcount = get_refcount(bs, offset >> s->cluster_bits);
}
if (refcount < 0) {
@@ -851,8 +832,12 @@ fail:
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
}
/* Update L1 only if it isn't deleted anyway (addend = -1) */
if (addend >= 0 && l1_modified) {
/* Enable writethrough cache mode again */
qcow2_cache_set_writethrough(bs, s->l2_table_cache, old_l2_writethrough);
qcow2_cache_set_writethrough(bs, s->refcount_block_cache,
old_refcount_writethrough);
if (l1_modified) {
for(i = 0; i < l1_size; i++)
cpu_to_be64s(&l1_table[i]);
if (bdrv_pwrite_sync(bs->file, l1_table_offset, l1_table,
@@ -862,7 +847,7 @@ fail:
be64_to_cpus(&l1_table[i]);
}
if (l1_allocated)
g_free(l1_table);
qemu_free(l1_table);
return ret;
}
@@ -931,91 +916,75 @@ static int check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
int check_copied)
{
BDRVQcowState *s = bs->opaque;
uint64_t *l2_table, l2_entry;
uint64_t *l2_table, offset;
int i, l2_size, nb_csectors, refcount;
/* Read L2 table from disk */
l2_size = s->l2_size * sizeof(uint64_t);
l2_table = g_malloc(l2_size);
l2_table = qemu_malloc(l2_size);
if (bdrv_pread(bs->file, l2_offset, l2_table, l2_size) != l2_size)
goto fail;
/* Do the actual checks */
for(i = 0; i < s->l2_size; i++) {
l2_entry = be64_to_cpu(l2_table[i]);
switch (qcow2_get_cluster_type(l2_entry)) {
case QCOW2_CLUSTER_COMPRESSED:
/* Compressed clusters don't have QCOW_OFLAG_COPIED */
if (l2_entry & QCOW_OFLAG_COPIED) {
fprintf(stderr, "ERROR: cluster %" PRId64 ": "
"copied flag must never be set for compressed "
"clusters\n", l2_entry >> s->cluster_bits);
l2_entry &= ~QCOW_OFLAG_COPIED;
res->corruptions++;
}
/* Mark cluster as used */
nb_csectors = ((l2_entry >> s->csize_shift) &
s->csize_mask) + 1;
l2_entry &= s->cluster_offset_mask;
inc_refcounts(bs, res, refcount_table, refcount_table_size,
l2_entry & ~511, nb_csectors * 512);
break;
case QCOW2_CLUSTER_ZERO:
if ((l2_entry & L2E_OFFSET_MASK) == 0) {
break;
}
/* fall through */
case QCOW2_CLUSTER_NORMAL:
{
/* QCOW_OFLAG_COPIED must be set iff refcount == 1 */
uint64_t offset = l2_entry & L2E_OFFSET_MASK;
if (check_copied) {
refcount = get_refcount(bs, offset >> s->cluster_bits);
if (refcount < 0) {
fprintf(stderr, "Can't get refcount for offset %"
PRIx64 ": %s\n", l2_entry, strerror(-refcount));
goto fail;
offset = be64_to_cpu(l2_table[i]);
if (offset != 0) {
if (offset & QCOW_OFLAG_COMPRESSED) {
/* Compressed clusters don't have QCOW_OFLAG_COPIED */
if (offset & QCOW_OFLAG_COPIED) {
fprintf(stderr, "ERROR: cluster %" PRId64 ": "
"copied flag must never be set for compressed "
"clusters\n", offset >> s->cluster_bits);
offset &= ~QCOW_OFLAG_COPIED;
res->corruptions++;
}
if ((refcount == 1) != ((l2_entry & QCOW_OFLAG_COPIED) != 0)) {
fprintf(stderr, "ERROR OFLAG_COPIED: offset=%"
PRIx64 " refcount=%d\n", l2_entry, refcount);
/* Mark cluster as used */
nb_csectors = ((offset >> s->csize_shift) &
s->csize_mask) + 1;
offset &= s->cluster_offset_mask;
inc_refcounts(bs, res, refcount_table, refcount_table_size,
offset & ~511, nb_csectors * 512);
} else {
/* QCOW_OFLAG_COPIED must be set iff refcount == 1 */
if (check_copied) {
uint64_t entry = offset;
offset &= ~QCOW_OFLAG_COPIED;
refcount = get_refcount(bs, offset >> s->cluster_bits);
if (refcount < 0) {
fprintf(stderr, "Can't get refcount for offset %"
PRIx64 ": %s\n", entry, strerror(-refcount));
goto fail;
}
if ((refcount == 1) != ((entry & QCOW_OFLAG_COPIED) != 0)) {
fprintf(stderr, "ERROR OFLAG_COPIED: offset=%"
PRIx64 " refcount=%d\n", entry, refcount);
res->corruptions++;
}
}
/* Mark cluster as used */
offset &= ~QCOW_OFLAG_COPIED;
inc_refcounts(bs, res, refcount_table,refcount_table_size,
offset, s->cluster_size);
/* Correct offsets are cluster aligned */
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR offset=%" PRIx64 ": Cluster is not "
"properly aligned; L2 entry corrupted.\n", offset);
res->corruptions++;
}
}
/* Mark cluster as used */
inc_refcounts(bs, res, refcount_table,refcount_table_size,
offset, s->cluster_size);
/* Correct offsets are cluster aligned */
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR offset=%" PRIx64 ": Cluster is not "
"properly aligned; L2 entry corrupted.\n", offset);
res->corruptions++;
}
break;
}
case QCOW2_CLUSTER_UNALLOCATED:
break;
default:
abort();
}
}
g_free(l2_table);
qemu_free(l2_table);
return 0;
fail:
fprintf(stderr, "ERROR: I/O error in check_refcounts_l2\n");
g_free(l2_table);
qemu_free(l2_table);
return -EIO;
}
@@ -1048,7 +1017,7 @@ static int check_refcounts_l1(BlockDriverState *bs,
if (l1_size2 == 0) {
l1_table = NULL;
} else {
l1_table = g_malloc(l1_size2);
l1_table = qemu_malloc(l1_size2);
if (bdrv_pread(bs->file, l1_table_offset,
l1_table, l1_size2) != l1_size2)
goto fail;
@@ -1077,7 +1046,7 @@ static int check_refcounts_l1(BlockDriverState *bs,
}
/* Mark L2 table as used */
l2_offset &= L1E_OFFSET_MASK;
l2_offset &= ~QCOW_OFLAG_COPIED;
inc_refcounts(bs, res, refcount_table, refcount_table_size,
l2_offset, s->cluster_size);
@@ -1096,13 +1065,13 @@ static int check_refcounts_l1(BlockDriverState *bs,
}
}
}
g_free(l1_table);
qemu_free(l1_table);
return 0;
fail:
fprintf(stderr, "ERROR: I/O error in check_refcounts_l1\n");
res->check_errors++;
g_free(l1_table);
qemu_free(l1_table);
return -EIO;
}
@@ -1112,19 +1081,18 @@ fail:
* Returns 0 if no errors are found, the number of errors in case the image is
* detected as corrupted, and -errno when an internal error occurred.
*/
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res)
{
BDRVQcowState *s = bs->opaque;
int64_t size, i;
int nb_clusters, refcount1, refcount2;
int64_t size;
int nb_clusters, refcount1, refcount2, i;
QCowSnapshot *sn;
uint16_t *refcount_table;
int ret;
size = bdrv_getlength(bs->file);
nb_clusters = size_to_clusters(s, size);
refcount_table = g_malloc0(nb_clusters * sizeof(uint16_t));
refcount_table = qemu_mallocz(nb_clusters * sizeof(uint16_t));
/* header */
inc_refcounts(bs, res, refcount_table, nb_clusters,
@@ -1161,15 +1129,14 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
/* Refcount blocks are cluster aligned */
if (offset & (s->cluster_size - 1)) {
fprintf(stderr, "ERROR refcount block %" PRId64 " is not "
fprintf(stderr, "ERROR refcount block %d is not "
"cluster aligned; refcount table entry corrupted\n", i);
res->corruptions++;
continue;
}
if (cluster >= nb_clusters) {
fprintf(stderr, "ERROR refcount block %" PRId64
" is outside image\n", i);
fprintf(stderr, "ERROR refcount block %d is outside image\n", i);
res->corruptions++;
continue;
}
@@ -1178,8 +1145,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
inc_refcounts(bs, res, refcount_table, nb_clusters,
offset, s->cluster_size);
if (refcount_table[cluster] != 1) {
fprintf(stderr, "ERROR refcount block %" PRId64
" refcount=%d\n",
fprintf(stderr, "ERROR refcount block %d refcount=%d\n",
i, refcount_table[cluster]);
res->corruptions++;
}
@@ -1190,7 +1156,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
for(i = 0; i < nb_clusters; i++) {
refcount1 = get_refcount(bs, i);
if (refcount1 < 0) {
fprintf(stderr, "Can't get refcount for cluster %" PRId64 ": %s\n",
fprintf(stderr, "Can't get refcount for cluster %d: %s\n",
i, strerror(-refcount1));
res->check_errors++;
continue;
@@ -1198,31 +1164,9 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
refcount2 = refcount_table[i];
if (refcount1 != refcount2) {
/* Check if we're allowed to fix the mismatch */
int *num_fixed = NULL;
if (refcount1 > refcount2 && (fix & BDRV_FIX_LEAKS)) {
num_fixed = &res->leaks_fixed;
} else if (refcount1 < refcount2 && (fix & BDRV_FIX_ERRORS)) {
num_fixed = &res->corruptions_fixed;
}
fprintf(stderr, "%s cluster %" PRId64 " refcount=%d reference=%d\n",
num_fixed != NULL ? "Repairing" :
refcount1 < refcount2 ? "ERROR" :
"Leaked",
fprintf(stderr, "%s cluster %d refcount=%d reference=%d\n",
refcount1 < refcount2 ? "ERROR" : "Leaked",
i, refcount1, refcount2);
if (num_fixed) {
ret = update_refcount(bs, i << s->cluster_bits, 1,
refcount2 - refcount1);
if (ret >= 0) {
(*num_fixed)++;
continue;
}
}
/* And if we couldn't, print an error */
if (refcount1 < refcount2) {
res->corruptions++;
} else {
@@ -1234,7 +1178,7 @@ int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
ret = 0;
fail:
g_free(refcount_table);
qemu_free(refcount_table);
return ret;
}


@@ -26,7 +26,7 @@
#include "block_int.h"
#include "block/qcow2.h"
typedef struct QEMU_PACKED QCowSnapshotHeader {
typedef struct __attribute__((packed)) QCowSnapshotHeader {
/* header is 8 byte aligned */
uint64_t l1_table_offset;
@@ -46,21 +46,16 @@ typedef struct QEMU_PACKED QCowSnapshotHeader {
/* name follows */
} QCowSnapshotHeader;
typedef struct QEMU_PACKED QCowSnapshotExtraData {
uint64_t vm_state_size_large;
uint64_t disk_size;
} QCowSnapshotExtraData;
void qcow2_free_snapshots(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
int i;
for(i = 0; i < s->nb_snapshots; i++) {
g_free(s->snapshots[i].name);
g_free(s->snapshots[i].id_str);
qemu_free(s->snapshots[i].name);
qemu_free(s->snapshots[i].id_str);
}
g_free(s->snapshots);
qemu_free(s->snapshots);
s->snapshots = NULL;
s->nb_snapshots = 0;
}
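align_offset(), used by the snapshot reader below and by the refcount code earlier, is a small helper from qcow2.h that is not part of this diff; presumably it just rounds an offset up to the next multiple of a power-of-two alignment, roughly:
/* Sketch (assumption): round offset up to the next multiple of n, n a power of two. */
static inline int64_t align_offset(int64_t offset, int n)
{
    return (offset + n - 1) & ~(int64_t)(n - 1);
}
So align_offset(offset, 8) in qcow2_read_snapshots() keeps each snapshot header 8-byte aligned, which the on-disk format comment in QCowSnapshotHeader requires.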
@@ -69,12 +64,10 @@ int qcow2_read_snapshots(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
QCowSnapshot *sn;
int i, id_str_size, name_size;
int64_t offset;
uint32_t extra_data_size;
int ret;
if (!s->nb_snapshots) {
s->snapshots = NULL;
@@ -83,16 +76,11 @@ int qcow2_read_snapshots(BlockDriverState *bs)
}
offset = s->snapshots_offset;
s->snapshots = g_malloc0(s->nb_snapshots * sizeof(QCowSnapshot));
s->snapshots = qemu_mallocz(s->nb_snapshots * sizeof(QCowSnapshot));
for(i = 0; i < s->nb_snapshots; i++) {
/* Read statically sized part of the snapshot header */
offset = align_offset(offset, 8);
ret = bdrv_pread(bs->file, offset, &h, sizeof(h));
if (ret < 0) {
if (bdrv_pread(bs->file, offset, &h, sizeof(h)) != sizeof(h))
goto fail;
}
offset += sizeof(h);
sn = s->snapshots + i;
sn->l1_table_offset = be64_to_cpu(h.l1_table_offset);
@@ -106,49 +94,25 @@ int qcow2_read_snapshots(BlockDriverState *bs)
id_str_size = be16_to_cpu(h.id_str_size);
name_size = be16_to_cpu(h.name_size);
/* Read extra data */
ret = bdrv_pread(bs->file, offset, &extra,
MIN(sizeof(extra), extra_data_size));
if (ret < 0) {
goto fail;
}
offset += extra_data_size;
if (extra_data_size >= 8) {
sn->vm_state_size = be64_to_cpu(extra.vm_state_size_large);
}
if (extra_data_size >= 16) {
sn->disk_size = be64_to_cpu(extra.disk_size);
} else {
sn->disk_size = bs->total_sectors * BDRV_SECTOR_SIZE;
}
/* Read snapshot ID */
sn->id_str = g_malloc(id_str_size + 1);
ret = bdrv_pread(bs->file, offset, sn->id_str, id_str_size);
if (ret < 0) {
sn->id_str = qemu_malloc(id_str_size + 1);
if (bdrv_pread(bs->file, offset, sn->id_str, id_str_size) != id_str_size)
goto fail;
}
offset += id_str_size;
sn->id_str[id_str_size] = '\0';
/* Read snapshot name */
sn->name = g_malloc(name_size + 1);
ret = bdrv_pread(bs->file, offset, sn->name, name_size);
if (ret < 0) {
sn->name = qemu_malloc(name_size + 1);
if (bdrv_pread(bs->file, offset, sn->name, name_size) != name_size)
goto fail;
}
offset += name_size;
sn->name[name_size] = '\0';
}
s->snapshots_size = offset - s->snapshots_offset;
return 0;
fail:
fail:
qcow2_free_snapshots(bs);
return ret;
return -1;
}
/* add at the end of the file a new list of snapshots */
@@ -157,14 +121,10 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
QCowSnapshotHeader h;
QCowSnapshotExtraData extra;
int i, name_size, id_str_size, snapshots_size;
struct {
uint32_t nb_snapshots;
uint64_t snapshots_offset;
} QEMU_PACKED header_data;
uint64_t data64;
uint32_t data32;
int64_t offset, snapshots_offset;
int ret;
/* compute the size of the snapshots */
offset = 0;
@@ -172,13 +132,11 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
sn = s->snapshots + i;
offset = align_offset(offset, 8);
offset += sizeof(h);
offset += sizeof(extra);
offset += strlen(sn->id_str);
offset += strlen(sn->name);
}
snapshots_size = offset;
/* Allocate space for the new snapshot list */
snapshots_offset = qcow2_alloc_clusters(bs, snapshots_size);
bdrv_flush(bs->file);
offset = snapshots_offset;
@@ -186,86 +144,49 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
return offset;
}
/* Write all snapshots to the new list */
for(i = 0; i < s->nb_snapshots; i++) {
sn = s->snapshots + i;
memset(&h, 0, sizeof(h));
h.l1_table_offset = cpu_to_be64(sn->l1_table_offset);
h.l1_size = cpu_to_be32(sn->l1_size);
/* If it doesn't fit in 32 bit, older implementations should treat it
* as a disk-only snapshot rather than truncate the VM state */
if (sn->vm_state_size <= 0xffffffff) {
h.vm_state_size = cpu_to_be32(sn->vm_state_size);
}
h.vm_state_size = cpu_to_be32(sn->vm_state_size);
h.date_sec = cpu_to_be32(sn->date_sec);
h.date_nsec = cpu_to_be32(sn->date_nsec);
h.vm_clock_nsec = cpu_to_be64(sn->vm_clock_nsec);
h.extra_data_size = cpu_to_be32(sizeof(extra));
memset(&extra, 0, sizeof(extra));
extra.vm_state_size_large = cpu_to_be64(sn->vm_state_size);
extra.disk_size = cpu_to_be64(sn->disk_size);
id_str_size = strlen(sn->id_str);
name_size = strlen(sn->name);
h.id_str_size = cpu_to_be16(id_str_size);
h.name_size = cpu_to_be16(name_size);
offset = align_offset(offset, 8);
ret = bdrv_pwrite(bs->file, offset, &h, sizeof(h));
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, offset, &h, sizeof(h)) < 0)
goto fail;
}
offset += sizeof(h);
ret = bdrv_pwrite(bs->file, offset, &extra, sizeof(extra));
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, offset, sn->id_str, id_str_size) < 0)
goto fail;
}
offset += sizeof(extra);
ret = bdrv_pwrite(bs->file, offset, sn->id_str, id_str_size);
if (ret < 0) {
goto fail;
}
offset += id_str_size;
ret = bdrv_pwrite(bs->file, offset, sn->name, name_size);
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, offset, sn->name, name_size) < 0)
goto fail;
}
offset += name_size;
}
/*
* Update the header to point to the new snapshot table. This requires the
* new table and its refcounts to be stable on disk.
*/
ret = bdrv_flush(bs);
if (ret < 0) {
/* update the various header fields */
data64 = cpu_to_be64(snapshots_offset);
if (bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, snapshots_offset),
&data64, sizeof(data64)) < 0)
goto fail;
}
QEMU_BUILD_BUG_ON(offsetof(QCowHeader, snapshots_offset) !=
offsetof(QCowHeader, nb_snapshots) + sizeof(header_data.nb_snapshots));
header_data.nb_snapshots = cpu_to_be32(s->nb_snapshots);
header_data.snapshots_offset = cpu_to_be64(snapshots_offset);
ret = bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, nb_snapshots),
&header_data, sizeof(header_data));
if (ret < 0) {
data32 = cpu_to_be32(s->nb_snapshots);
if (bdrv_pwrite_sync(bs->file, offsetof(QCowHeader, nb_snapshots),
&data32, sizeof(data32)) < 0)
goto fail;
}
/* free the old snapshot table */
qcow2_free_clusters(bs, s->snapshots_offset, s->snapshots_size);
s->snapshots_offset = snapshots_offset;
s->snapshots_size = snapshots_size;
return 0;
fail:
return ret;
fail:
return -1;
}
static void find_new_snapshot_id(BlockDriverState *bs,
@@ -315,107 +236,80 @@ static int find_snapshot_by_id_or_name(BlockDriverState *bs, const char *name)
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot *new_snapshot_list = NULL;
QCowSnapshot *old_snapshot_list = NULL;
QCowSnapshot sn1, *sn = &sn1;
QCowSnapshot *snapshots1, sn1, *sn = &sn1;
int i, ret;
uint64_t *l1_table = NULL;
int64_t l1_table_offset;
memset(sn, 0, sizeof(*sn));
/* Generate an ID if it wasn't passed */
if (sn_info->id_str[0] == '\0') {
/* compute a new id */
find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));
}
/* Check that the ID is unique */
if (find_snapshot_by_id(bs, sn_info->id_str) >= 0) {
return -EEXIST;
}
/* check that the ID is unique */
if (find_snapshot_by_id(bs, sn_info->id_str) >= 0)
return -ENOENT;
/* Populate sn with passed data */
sn->id_str = g_strdup(sn_info->id_str);
sn->name = g_strdup(sn_info->name);
sn->disk_size = bs->total_sectors * BDRV_SECTOR_SIZE;
sn->id_str = qemu_strdup(sn_info->id_str);
if (!sn->id_str)
goto fail;
sn->name = qemu_strdup(sn_info->name);
if (!sn->name)
goto fail;
sn->vm_state_size = sn_info->vm_state_size;
sn->date_sec = sn_info->date_sec;
sn->date_nsec = sn_info->date_nsec;
sn->vm_clock_nsec = sn_info->vm_clock_nsec;
/* Allocate the L1 table of the snapshot and copy the current one there. */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 1);
if (ret < 0)
goto fail;
/* create the L1 table of the snapshot */
l1_table_offset = qcow2_alloc_clusters(bs, s->l1_size * sizeof(uint64_t));
if (l1_table_offset < 0) {
ret = l1_table_offset;
goto fail;
}
bdrv_flush(bs->file);
sn->l1_table_offset = l1_table_offset;
sn->l1_size = s->l1_size;
l1_table = g_malloc(s->l1_size * sizeof(uint64_t));
if (s->l1_size != 0) {
l1_table = qemu_malloc(s->l1_size * sizeof(uint64_t));
} else {
l1_table = NULL;
}
for(i = 0; i < s->l1_size; i++) {
l1_table[i] = cpu_to_be64(s->l1_table[i]);
}
ret = bdrv_pwrite(bs->file, sn->l1_table_offset, l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, sn->l1_table_offset,
l1_table, s->l1_size * sizeof(uint64_t)) < 0)
goto fail;
}
g_free(l1_table);
qemu_free(l1_table);
l1_table = NULL;
/*
* Increase the refcounts of all clusters and make sure everything is
* stable on disk before updating the snapshot table to contain a pointer
* to the new L1 table.
*/
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 1);
if (ret < 0) {
goto fail;
}
ret = bdrv_flush(bs);
if (ret < 0) {
goto fail;
}
/* Append the new snapshot to the snapshot list */
new_snapshot_list = g_malloc((s->nb_snapshots + 1) * sizeof(QCowSnapshot));
snapshots1 = qemu_malloc((s->nb_snapshots + 1) * sizeof(QCowSnapshot));
if (s->snapshots) {
memcpy(new_snapshot_list, s->snapshots,
s->nb_snapshots * sizeof(QCowSnapshot));
old_snapshot_list = s->snapshots;
memcpy(snapshots1, s->snapshots, s->nb_snapshots * sizeof(QCowSnapshot));
qemu_free(s->snapshots);
}
s->snapshots = new_snapshot_list;
s->snapshots = snapshots1;
s->snapshots[s->nb_snapshots++] = *sn;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
g_free(s->snapshots);
s->snapshots = old_snapshot_list;
if (qcow2_write_snapshots(bs) < 0)
goto fail;
}
g_free(old_snapshot_list);
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
}
qcow2_check_refcounts(bs);
#endif
return 0;
fail:
g_free(sn->id_str);
g_free(sn->name);
g_free(l1_table);
return ret;
fail:
qemu_free(sn->name);
qemu_free(l1_table);
return -1;
}
/* copy the snapshot 'snapshot_name' into the current disk image */
@@ -425,165 +319,78 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
QCowSnapshot *sn;
int i, snapshot_index;
int cur_l1_bytes, sn_l1_bytes;
int ret;
uint64_t *sn_l1_table = NULL;
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
if (snapshot_index < 0) {
if (snapshot_index < 0)
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
if (sn->disk_size != bs->total_sectors * BDRV_SECTOR_SIZE) {
error_report("qcow2: Loading snapshots with different disk "
"size is not implemented");
ret = -ENOTSUP;
if (qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, -1) < 0)
goto fail;
}
/*
* Make sure that the current L1 table is big enough to contain the whole
* L1 table of the snapshot. If the snapshot L1 table is smaller, the
* current one must be padded with zeros.
*/
ret = qcow2_grow_l1_table(bs, sn->l1_size, true);
if (ret < 0) {
if (qcow2_grow_l1_table(bs, sn->l1_size, true) < 0)
goto fail;
}
cur_l1_bytes = s->l1_size * sizeof(uint64_t);
sn_l1_bytes = sn->l1_size * sizeof(uint64_t);
/*
* Copy the snapshot L1 table to the current L1 table.
*
* Before overwriting the old current L1 table on disk, make sure to
* increase all refcounts for the clusters referenced by the new one.
* Decrease the refcount referenced by the old one only when the L1
* table is overwritten.
*/
sn_l1_table = g_malloc0(cur_l1_bytes);
ret = bdrv_pread(bs->file, sn->l1_table_offset, sn_l1_table, sn_l1_bytes);
if (ret < 0) {
goto fail;
if (cur_l1_bytes > sn_l1_bytes) {
memset(s->l1_table + sn->l1_size, 0, cur_l1_bytes - sn_l1_bytes);
}
ret = qcow2_update_snapshot_refcount(bs, sn->l1_table_offset,
sn->l1_size, 1);
if (ret < 0) {
/* copy the snapshot l1 table to the current l1 table */
if (bdrv_pread(bs->file, sn->l1_table_offset,
s->l1_table, sn_l1_bytes) < 0)
goto fail;
}
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset, sn_l1_table,
cur_l1_bytes);
if (ret < 0) {
if (bdrv_pwrite_sync(bs->file, s->l1_table_offset,
s->l1_table, cur_l1_bytes) < 0)
goto fail;
}
/*
* Decrease refcount of clusters of current L1 table.
*
* At this point, the in-memory s->l1_table points to the old L1 table,
* whereas on disk we already have the new one.
*
* qcow2_update_snapshot_refcount special cases the current L1 table to use
* the in-memory data instead of really using the offset to load a new one,
* which is why this works.
*/
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset,
s->l1_size, -1);
/*
* Now update the in-memory L1 table to be in sync with the on-disk one. We
* need to do this even if updating refcounts failed.
*/
for(i = 0;i < s->l1_size; i++) {
s->l1_table[i] = be64_to_cpu(sn_l1_table[i]);
be64_to_cpus(&s->l1_table[i]);
}
if (ret < 0) {
if (qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 1) < 0)
goto fail;
}
g_free(sn_l1_table);
sn_l1_table = NULL;
/*
* Update QCOW_OFLAG_COPIED in the active L1 table (it may have changed
* when we decreased the refcount of the old snapshot).
*/
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
goto fail;
}
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
}
qcow2_check_refcounts(bs);
#endif
return 0;
fail:
g_free(sn_l1_table);
return ret;
fail:
return -EIO;
}
int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot sn;
QCowSnapshot *sn;
int snapshot_index, ret;
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
if (snapshot_index < 0) {
if (snapshot_index < 0)
return -ENOENT;
}
sn = s->snapshots[snapshot_index];
sn = &s->snapshots[snapshot_index];
/* Remove it from the snapshot list */
memmove(s->snapshots + snapshot_index,
s->snapshots + snapshot_index + 1,
(s->nb_snapshots - snapshot_index - 1) * sizeof(sn));
ret = qcow2_update_snapshot_refcount(bs, sn->l1_table_offset, sn->l1_size, -1);
if (ret < 0)
return ret;
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0)
return ret;
qcow2_free_clusters(bs, sn->l1_table_offset, sn->l1_size * sizeof(uint64_t));
qemu_free(sn->id_str);
qemu_free(sn->name);
memmove(sn, sn + 1, (s->nb_snapshots - snapshot_index - 1) * sizeof(*sn));
s->nb_snapshots--;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
/* XXX: restore snapshot if error ? */
return ret;
}
/*
* The snapshot is now unused, clean up. If we fail after this point, we
* won't recover but just leak clusters.
*/
g_free(sn.id_str);
g_free(sn.name);
/*
* Now decrease the refcounts of clusters referenced by the snapshot and
* free the L1 table.
*/
ret = qcow2_update_snapshot_refcount(bs, sn.l1_table_offset,
sn.l1_size, -1);
if (ret < 0) {
return ret;
}
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t));
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
return ret;
}
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
qcow2_check_refcounts(bs, &result, 0);
}
qcow2_check_refcounts(bs);
#endif
return 0;
}
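Both variants of qcow2_snapshot_delete() above drop the entry from the in-memory array with a single memmove() followed by decrementing nb_snapshots; the same idiom in a tiny standalone form (plain ints stand in for QCowSnapshot):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int snaps[] = { 10, 20, 30, 40 };
    int nb = 4, idx = 1;                       /* delete the second element */

    /* shift the tail left over the deleted slot, then shrink the count */
    memmove(&snaps[idx], &snaps[idx + 1], (nb - idx - 1) * sizeof(snaps[0]));
    nb--;

    for (int i = 0; i < nb; i++) {
        printf("%d ", snaps[i]);               /* prints: 10 30 40 */
    }
    printf("\n");
    return 0;
}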
@@ -600,7 +407,7 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
return s->nb_snapshots;
}
sn_tab = g_malloc0(s->nb_snapshots * sizeof(QEMUSnapshotInfo));
sn_tab = qemu_mallocz(s->nb_snapshots * sizeof(QEMUSnapshotInfo));
for(i = 0; i < s->nb_snapshots; i++) {
sn_info = sn_tab + i;
sn = s->snapshots + i;
@@ -619,42 +426,32 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name)
{
int i, snapshot_index;
int i, snapshot_index, l1_size2;
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
uint64_t *new_l1_table;
int new_l1_bytes;
int ret;
assert(bs->read_only);
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_name);
if (snapshot_index < 0) {
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
/* Allocate and read in the snapshot's L1 table */
new_l1_bytes = s->l1_size * sizeof(uint64_t);
new_l1_table = g_malloc0(align_offset(new_l1_bytes, 512));
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
g_free(new_l1_table);
return ret;
s->l1_size = sn->l1_size;
l1_size2 = s->l1_size * sizeof(uint64_t);
if (s->l1_table != NULL) {
qemu_free(s->l1_table);
}
/* Switch the L1 table */
g_free(s->l1_table);
s->l1_size = sn->l1_size;
s->l1_table_offset = sn->l1_table_offset;
s->l1_table = new_l1_table;
s->l1_table = qemu_mallocz(align_offset(l1_size2, 512));
if (bdrv_pread(bs->file, sn->l1_table_offset,
s->l1_table, l1_size2) != l1_size2) {
return -1;
}
for(i = 0;i < s->l1_size; i++) {
be64_to_cpus(&s->l1_table[i]);
}
return 0;
}

File diff suppressed because it is too large


@@ -26,13 +26,13 @@
#define BLOCK_QCOW2_H
#include "aes.h"
#include "qemu-coroutine.h"
//#define DEBUG_ALLOC
//#define DEBUG_ALLOC2
//#define DEBUG_EXT
#define QCOW_MAGIC (('Q' << 24) | ('F' << 16) | ('I' << 8) | 0xfb)
#define QCOW_VERSION 2
#define QCOW_CRYPT_NONE 0
#define QCOW_CRYPT_AES 1
@@ -43,8 +43,6 @@
#define QCOW_OFLAG_COPIED (1LL << 63)
/* indicate that the cluster is compressed (they never have the copied flag) */
#define QCOW_OFLAG_COMPRESSED (1LL << 62)
/* The cluster reads as all zeros */
#define QCOW_OFLAG_ZERO (1LL << 0)
#define REFCOUNT_SHIFT 1 /* refcount size is 2 bytes */
@@ -72,14 +70,6 @@ typedef struct QCowHeader {
uint32_t refcount_table_clusters;
uint32_t nb_snapshots;
uint64_t snapshots_offset;
/* The following fields are only valid for version >= 3 */
uint64_t incompatible_features;
uint64_t compatible_features;
uint64_t autoclear_features;
uint32_t refcount_order;
uint32_t header_length;
} QCowHeader;
typedef struct QCowSnapshot {
@@ -87,8 +77,7 @@ typedef struct QCowSnapshot {
uint32_t l1_size;
char *id_str;
char *name;
uint64_t disk_size;
uint64_t vm_state_size;
uint32_t vm_state_size;
uint32_t date_sec;
uint32_t date_nsec;
uint64_t vm_clock_nsec;
@@ -97,41 +86,6 @@ typedef struct QCowSnapshot {
struct Qcow2Cache;
typedef struct Qcow2Cache Qcow2Cache;
typedef struct Qcow2UnknownHeaderExtension {
uint32_t magic;
uint32_t len;
QLIST_ENTRY(Qcow2UnknownHeaderExtension) next;
uint8_t data[];
} Qcow2UnknownHeaderExtension;
enum {
QCOW2_FEAT_TYPE_INCOMPATIBLE = 0,
QCOW2_FEAT_TYPE_COMPATIBLE = 1,
QCOW2_FEAT_TYPE_AUTOCLEAR = 2,
};
/* Incompatible feature bits */
enum {
QCOW2_INCOMPAT_DIRTY_BITNR = 0,
QCOW2_INCOMPAT_DIRTY = 1 << QCOW2_INCOMPAT_DIRTY_BITNR,
QCOW2_INCOMPAT_MASK = QCOW2_INCOMPAT_DIRTY,
};
/* Compatible feature bits */
enum {
QCOW2_COMPAT_LAZY_REFCOUNTS_BITNR = 0,
QCOW2_COMPAT_LAZY_REFCOUNTS = 1 << QCOW2_COMPAT_LAZY_REFCOUNTS_BITNR,
QCOW2_COMPAT_FEAT_MASK = QCOW2_COMPAT_LAZY_REFCOUNTS,
};
typedef struct Qcow2Feature {
uint8_t type;
uint8_t bit;
char name[46];
} QEMU_PACKED Qcow2Feature;
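The feature-bit enums above partition header flags into incompatible/compatible/autoclear classes. As a rough sketch of how such bitmasks are typically consumed (the refusal path here is illustrative, not the driver's actual error handling): an unknown incompatible bit makes an image unsafe to open, while the dirty bit only signals lazy refcounts.

#include <stdint.h>
#include <stdio.h>

#define QCOW2_INCOMPAT_DIRTY  (1 << 0)
#define QCOW2_INCOMPAT_MASK   QCOW2_INCOMPAT_DIRTY

int main(void)
{
    /* example header value: dirty bit plus a made-up unknown bit 5 */
    uint64_t incompatible_features = QCOW2_INCOMPAT_DIRTY | (1 << 5);

    uint64_t unknown = incompatible_features & ~(uint64_t)QCOW2_INCOMPAT_MASK;
    if (unknown) {
        /* an unknown incompatible bit means the image cannot be handled safely */
        printf("unknown incompatible feature bits: %#llx\n",
               (unsigned long long)unknown);
        return 1;
    }
    if (incompatible_features & QCOW2_INCOMPAT_DIRTY) {
        printf("image is dirty: refcounts are lazy and need a repair pass\n");
    }
    return 0;
}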
typedef struct BDRVQcowState {
int cluster_bits;
int cluster_size;
@@ -160,8 +114,6 @@ typedef struct BDRVQcowState {
int64_t free_cluster_index;
int64_t free_byte_offset;
CoMutex lock;
uint32_t crypt_method; /* current crypt method, 0 if no key yet */
uint32_t crypt_method_header;
AES_KEY aes_encrypt_key;
@@ -170,17 +122,6 @@ typedef struct BDRVQcowState {
int snapshots_size;
int nb_snapshots;
QCowSnapshot *snapshots;
int flags;
int qcow_version;
uint64_t incompatible_features;
uint64_t compatible_features;
uint64_t autoclear_features;
size_t unknown_header_fields_size;
void* unknown_header_fields;
QLIST_HEAD(, Qcow2UnknownHeaderExtension) unknown_header_ext;
} BDRVQcowState;
/* XXX: use std qcow open function ? */
@@ -201,28 +142,15 @@ typedef struct QCowL2Meta
{
uint64_t offset;
uint64_t cluster_offset;
uint64_t alloc_offset;
int n_start;
int nb_available;
int nb_clusters;
CoQueue dependent_requests;
struct QCowL2Meta *depends_on;
QLIST_HEAD(QCowAioDependencies, QCowAIOCB) dependent_requests;
QLIST_ENTRY(QCowL2Meta) next_in_flight;
} QCowL2Meta;
enum {
QCOW2_CLUSTER_UNALLOCATED,
QCOW2_CLUSTER_NORMAL,
QCOW2_CLUSTER_COMPRESSED,
QCOW2_CLUSTER_ZERO
};
#define L1E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_COMPRESSED_OFFSET_SIZE_MASK 0x3fffffffffffffffULL
#define REFT_OFFSET_MASK 0xffffffffffffff00ULL
static inline int size_to_clusters(BDRVQcowState *s, int64_t size)
{
return (size + (s->cluster_size - 1)) >> s->cluster_bits;
@@ -240,50 +168,30 @@ static inline int64_t align_offset(int64_t offset, int n)
return offset;
}
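A quick worked example of the two rounding helpers above, assuming 64 KiB clusters (cluster_bits = 16 is only an example value, and align_offset() is assumed to round up to a multiple of n, as its callers suggest):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int cluster_bits = 16;                          /* example: 64 KiB clusters */
    int64_t cluster_size = 1 << cluster_bits;
    int64_t size = 100000;                          /* 100 kB of guest data */

    /* same arithmetic as size_to_clusters(): round up to whole clusters */
    int clusters = (size + cluster_size - 1) >> cluster_bits;
    printf("%lld bytes -> %d clusters\n", (long long)size, clusters);   /* 2 */

    /* align_offset() rounds an offset up to a multiple of n (here 512) */
    int64_t offset = 1300;
    int n = 512;
    offset = (offset + n - 1) & ~(int64_t)(n - 1);
    printf("aligned offset: %lld\n", (long long)offset);                /* 1536 */
    return 0;
}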
static inline int qcow2_get_cluster_type(uint64_t l2_entry)
{
if (l2_entry & QCOW_OFLAG_COMPRESSED) {
return QCOW2_CLUSTER_COMPRESSED;
} else if (l2_entry & QCOW_OFLAG_ZERO) {
return QCOW2_CLUSTER_ZERO;
} else if (!(l2_entry & L2E_OFFSET_MASK)) {
return QCOW2_CLUSTER_UNALLOCATED;
} else {
return QCOW2_CLUSTER_NORMAL;
}
}
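The same classification in a standalone form, reusing the flag and mask values defined in this header; a minimal sketch for illustration, not the driver code itself:

#include <stdint.h>
#include <stdio.h>

#define QCOW_OFLAG_COPIED     (1ULL << 63)
#define QCOW_OFLAG_COMPRESSED (1ULL << 62)
#define QCOW_OFLAG_ZERO       (1ULL << 0)
#define L2E_OFFSET_MASK       0x00ffffffffffff00ULL

int main(void)
{
    uint64_t entries[] = {
        0,                                 /* never written: unallocated */
        QCOW_OFLAG_ZERO,                   /* reads back as zeros */
        QCOW_OFLAG_COPIED | 0x10000,       /* normal cluster at offset 64 KiB */
        QCOW_OFLAG_COMPRESSED | 0x123,     /* compressed cluster */
    };

    for (int i = 0; i < 4; i++) {
        uint64_t e = entries[i];
        const char *type =
            (e & QCOW_OFLAG_COMPRESSED) ? "compressed" :
            (e & QCOW_OFLAG_ZERO)       ? "zero" :
            !(e & L2E_OFFSET_MASK)      ? "unallocated" : "normal";
        printf("entry %d: %s\n", i, type);
    }
    return 0;
}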
/* Check whether refcounts are eager or lazy */
static inline bool qcow2_need_accurate_refcounts(BDRVQcowState *s)
{
return !(s->incompatible_features & QCOW2_INCOMPAT_DIRTY);
}
// FIXME Need qcow2_ prefix to global functions
/* qcow2.c functions */
int qcow2_backing_read1(BlockDriverState *bs, QEMUIOVector *qiov,
int64_t sector_num, int nb_sectors);
int qcow2_update_header(BlockDriverState *bs);
/* qcow2-refcount.c functions */
int qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size);
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters);
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size);
void qcow2_free_any_clusters(BlockDriverState *bs,
uint64_t cluster_offset, int nb_clusters);
void qcow2_create_refcount_update(QCowCreateState *s, int64_t offset,
int64_t size);
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend);
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res);
/* qcow2-cluster.c functions */
int qcow2_grow_l1_table(BlockDriverState *bs, int min_size, bool exact_size);
@@ -305,7 +213,6 @@ uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors);
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors);
/* qcow2-snapshot.c functions */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);
@@ -318,8 +225,11 @@ void qcow2_free_snapshots(BlockDriverState *bs);
int qcow2_read_snapshots(BlockDriverState *bs);
/* qcow2-cache.c functions */
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables);
Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
bool writethrough);
int qcow2_cache_destroy(BlockDriverState* bs, Qcow2Cache *c);
bool qcow2_cache_set_writethrough(BlockDriverState *bs, Qcow2Cache *c,
bool enable);
void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c);


@@ -68,7 +68,6 @@ static unsigned int qed_check_l2_table(QEDCheck *check, QEDTable *table)
{
BDRVQEDState *s = check->s;
unsigned int i, num_invalid = 0;
uint64_t last_offset = 0;
for (i = 0; i < s->table_nelems; i++) {
uint64_t offset = table->offsets[i];
@@ -77,17 +76,11 @@ static unsigned int qed_check_l2_table(QEDCheck *check, QEDTable *table)
qed_offset_is_zero_cluster(offset)) {
continue;
}
check->result->bfi.allocated_clusters++;
if (last_offset && (last_offset + s->header.cluster_size != offset)) {
check->result->bfi.fragmented_clusters++;
}
last_offset = offset;
/* Detect invalid cluster offset */
if (!qed_check_cluster_offset(s, offset)) {
if (check->fix) {
table->offsets[i] = 0;
check->result->corruptions_fixed++;
} else {
check->result->corruptions++;
}
@@ -128,7 +121,6 @@ static int qed_check_l1_table(QEDCheck *check, QEDTable *table)
/* Clear invalid offset */
if (check->fix) {
table->offsets[i] = 0;
check->result->corruptions_fixed++;
} else {
check->result->corruptions++;
}
@@ -194,28 +186,6 @@ static void qed_check_for_leaks(QEDCheck *check)
}
}
/**
* Mark an image clean once it passes check or has been repaired
*/
static void qed_check_mark_clean(BDRVQEDState *s, BdrvCheckResult *result)
{
/* Skip if there were unfixable corruptions or I/O errors */
if (result->corruptions > 0 || result->check_errors > 0) {
return;
}
/* Skip if image is already marked clean */
if (!(s->header.features & QED_F_NEED_CHECK)) {
return;
}
/* Ensure fixes reach storage before clearing check bit */
bdrv_flush(s->bs);
s->header.features &= ~QED_F_NEED_CHECK;
qed_write_header_sync(s);
}
int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
{
QEDCheck check = {
@@ -227,22 +197,15 @@ int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
};
int ret;
check.used_clusters = g_malloc0(((check.nclusters + 31) / 32) *
check.used_clusters = qemu_mallocz(((check.nclusters + 31) / 32) *
sizeof(check.used_clusters[0]));
check.result->bfi.total_clusters =
(s->header.image_size + s->header.cluster_size - 1) /
s->header.cluster_size;
ret = qed_check_l1_table(&check, s->l1_table);
if (ret == 0) {
/* Only check for leaks if entire image was scanned successfully */
qed_check_for_leaks(&check);
if (fix) {
qed_check_mark_clean(s, result);
}
}
g_free(check.used_clusters);
qemu_free(check.used_clusters);
return ret;
}
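The used_clusters array allocated above is a plain bitmap, one bit per cluster packed into 32-bit words; a minimal sketch of how such a bitmap is sized and indexed (the mark/test helpers are assumptions, since the real ones are not shown in this hunk):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned int nclusters = 100;
    /* one bit per cluster, rounded up to whole 32-bit words */
    uint32_t *used = calloc((nclusters + 31) / 32, sizeof(uint32_t));
    if (!used) {
        return 1;
    }

    unsigned int cluster = 42;
    used[cluster / 32] |= 1u << (cluster % 32);          /* mark in use */

    int in_use = (used[cluster / 32] >> (cluster % 32)) & 1;
    printf("cluster %u in use: %d\n", cluster, in_use);  /* 1 */

    free(used);
    return 0;
}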


@@ -108,7 +108,7 @@ static void qed_find_cluster_cb(void *opaque, int ret)
out:
find_cluster_cb->cb(find_cluster_cb->opaque, ret, offset, len);
g_free(find_cluster_cb);
qemu_free(find_cluster_cb);
}
/**
@@ -152,7 +152,7 @@ void qed_find_cluster(BDRVQEDState *s, QEDRequest *request, uint64_t pos,
return;
}
find_cluster_cb = g_malloc(sizeof(*find_cluster_cb));
find_cluster_cb = qemu_malloc(sizeof(*find_cluster_cb));
find_cluster_cb->s = s;
find_cluster_cb->pos = pos;
find_cluster_cb->len = len;


@@ -15,7 +15,7 @@
void *gencb_alloc(size_t len, BlockDriverCompletionFunc *cb, void *opaque)
{
GenericCB *gencb = g_malloc(len);
GenericCB *gencb = qemu_malloc(len);
gencb->cb = cb;
gencb->opaque = opaque;
return gencb;
@@ -27,6 +27,6 @@ void gencb_complete(void *opaque, int ret)
BlockDriverCompletionFunc *cb = gencb->cb;
void *user_opaque = gencb->opaque;
g_free(gencb);
qemu_free(gencb);
cb(user_opaque, ret);
}


@@ -74,7 +74,7 @@ void qed_free_l2_cache(L2TableCache *l2_cache)
QTAILQ_FOREACH_SAFE(entry, &l2_cache->entries, node, next_entry) {
qemu_vfree(entry->table);
g_free(entry);
qemu_free(entry);
}
}
@@ -89,7 +89,7 @@ CachedL2Table *qed_alloc_l2_cache_entry(L2TableCache *l2_cache)
{
CachedL2Table *entry;
entry = g_malloc0(sizeof(*entry));
entry = qemu_mallocz(sizeof(*entry));
entry->ref++;
trace_qed_alloc_l2_cache_entry(l2_cache, entry);
@@ -111,7 +111,7 @@ void qed_unref_l2_cache_entry(CachedL2Table *entry)
trace_qed_unref_l2_cache_entry(entry, entry->ref);
if (entry->ref == 0) {
qemu_vfree(entry->table);
g_free(entry);
qemu_free(entry);
}
}
@@ -161,25 +161,11 @@ void qed_commit_l2_cache_entry(L2TableCache *l2_cache, CachedL2Table *l2_table)
return;
}
/* Evict an unused cache entry so we have space. If all entries are in use
* we can grow the cache temporarily and we try to shrink back down later.
*/
if (l2_cache->n_entries >= MAX_L2_CACHE_SIZE) {
CachedL2Table *next;
QTAILQ_FOREACH_SAFE(entry, &l2_cache->entries, node, next) {
if (entry->ref > 1) {
continue;
}
QTAILQ_REMOVE(&l2_cache->entries, entry, node);
l2_cache->n_entries--;
qed_unref_l2_cache_entry(entry);
/* Stop evicting when we've shrunk back to max size */
if (l2_cache->n_entries < MAX_L2_CACHE_SIZE) {
break;
}
}
entry = QTAILQ_FIRST(&l2_cache->entries);
QTAILQ_REMOVE(&l2_cache->entries, entry, node);
l2_cache->n_entries--;
qed_unref_l2_cache_entry(entry);
}
l2_cache->n_entries++;


@@ -29,7 +29,7 @@ static void qed_read_table_cb(void *opaque, int ret)
{
QEDReadTableCB *read_table_cb = opaque;
QEDTable *table = read_table_cb->table;
int noffsets = read_table_cb->qiov.size / sizeof(uint64_t);
int noffsets = read_table_cb->iov.iov_len / sizeof(uint64_t);
int i;
/* Handle I/O error */
@@ -54,6 +54,7 @@ static void qed_read_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
QEDReadTableCB *read_table_cb = gencb_alloc(sizeof(*read_table_cb),
cb, opaque);
QEMUIOVector *qiov = &read_table_cb->qiov;
BlockDriverAIOCB *aiocb;
trace_qed_read_table(s, offset, table);
@@ -63,9 +64,12 @@ static void qed_read_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
read_table_cb->iov.iov_len = s->header.cluster_size * s->header.table_size,
qemu_iovec_init_external(qiov, &read_table_cb->iov, 1);
bdrv_aio_readv(s->bs->file, offset / BDRV_SECTOR_SIZE, qiov,
qiov->size / BDRV_SECTOR_SIZE,
qed_read_table_cb, read_table_cb);
aiocb = bdrv_aio_readv(s->bs->file, offset / BDRV_SECTOR_SIZE, qiov,
read_table_cb->iov.iov_len / BDRV_SECTOR_SIZE,
qed_read_table_cb, read_table_cb);
if (!aiocb) {
qed_read_table_cb(read_table_cb, -EIO);
}
}
typedef struct {
@@ -123,6 +127,7 @@ static void qed_write_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
BlockDriverCompletionFunc *cb, void *opaque)
{
QEDWriteTableCB *write_table_cb;
BlockDriverAIOCB *aiocb;
unsigned int sector_mask = BDRV_SECTOR_SIZE / sizeof(uint64_t) - 1;
unsigned int start, end, i;
size_t len_bytes;
@@ -153,10 +158,13 @@ static void qed_write_table(BDRVQEDState *s, uint64_t offset, QEDTable *table,
/* Adjust for offset into table */
offset += start * sizeof(uint64_t);
bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&write_table_cb->qiov,
write_table_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_write_table_cb, write_table_cb);
aiocb = bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&write_table_cb->qiov,
write_table_cb->iov.iov_len / BDRV_SECTOR_SIZE,
qed_write_table_cb, write_table_cb);
if (!aiocb) {
qed_write_table_cb(write_table_cb, -EIO);
}
}
/**
@@ -171,12 +179,16 @@ int qed_read_l1_table_sync(BDRVQEDState *s)
{
int ret = -EINPROGRESS;
async_context_push();
qed_read_table(s, s->header.l1_table_offset,
s->l1_table, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}
@@ -193,11 +205,15 @@ int qed_write_l1_table_sync(BDRVQEDState *s, unsigned int index,
{
int ret = -EINPROGRESS;
async_context_push();
qed_write_l1_table(s, index, n, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}
@@ -214,21 +230,21 @@ static void qed_read_l2_table_cb(void *opaque, int ret)
QEDRequest *request = read_l2_table_cb->request;
BDRVQEDState *s = read_l2_table_cb->s;
CachedL2Table *l2_table = request->l2_table;
uint64_t l2_offset = read_l2_table_cb->l2_offset;
if (ret) {
/* can't trust loaded L2 table anymore */
qed_unref_l2_cache_entry(l2_table);
request->l2_table = NULL;
} else {
l2_table->offset = l2_offset;
l2_table->offset = read_l2_table_cb->l2_offset;
qed_commit_l2_cache_entry(&s->l2_cache, l2_table);
/* This is guaranteed to succeed because we just committed the entry
* to the cache.
*/
request->l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
request->l2_table = qed_find_l2_cache_entry(&s->l2_cache,
l2_table->offset);
assert(request->l2_table != NULL);
}
@@ -266,11 +282,14 @@ int qed_read_l2_table_sync(BDRVQEDState *s, QEDRequest *request, uint64_t offset
{
int ret = -EINPROGRESS;
async_context_push();
qed_read_l2_table(s, request, offset, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}
@@ -288,10 +307,13 @@ int qed_write_l2_table_sync(BDRVQEDState *s, QEDRequest *request,
{
int ret = -EINPROGRESS;
async_context_push();
qed_write_l2_table(s, request, index, n, flush, qed_sync_cb, &ret);
while (ret == -EINPROGRESS) {
qemu_aio_wait();
}
async_context_pop();
return ret;
}


@@ -16,7 +16,6 @@
#include "trace.h"
#include "qed.h"
#include "qerror.h"
#include "migration.h"
static void qed_aio_cancel(BlockDriverAIOCB *blockacb)
{
@@ -89,7 +88,7 @@ static void qed_header_cpu_to_le(const QEDHeader *cpu, QEDHeader *le)
le->backing_filename_size = cpu_to_le32(cpu->backing_filename_size);
}
int qed_write_header_sync(BDRVQEDState *s)
static int qed_write_header_sync(BDRVQEDState *s)
{
QEDHeader le;
int ret;
@@ -123,6 +122,7 @@ static void qed_write_header_read_cb(void *opaque, int ret)
{
QEDWriteHeaderCB *write_header_cb = opaque;
BDRVQEDState *s = write_header_cb->s;
BlockDriverAIOCB *acb;
if (ret) {
qed_write_header_cb(write_header_cb, ret);
@@ -132,9 +132,12 @@ static void qed_write_header_read_cb(void *opaque, int ret)
/* Update header */
qed_header_cpu_to_le(&s->header, (QEDHeader *)write_header_cb->buf);
bdrv_aio_writev(s->bs->file, 0, &write_header_cb->qiov,
write_header_cb->nsectors, qed_write_header_cb,
write_header_cb);
acb = bdrv_aio_writev(s->bs->file, 0, &write_header_cb->qiov,
write_header_cb->nsectors, qed_write_header_cb,
write_header_cb);
if (!acb) {
qed_write_header_cb(write_header_cb, -EIO);
}
}
/**
@@ -152,6 +155,7 @@ static void qed_write_header(BDRVQEDState *s, BlockDriverCompletionFunc cb,
* them, and write back.
*/
BlockDriverAIOCB *acb;
int nsectors = (sizeof(QEDHeader) + BDRV_SECTOR_SIZE - 1) /
BDRV_SECTOR_SIZE;
size_t len = nsectors * BDRV_SECTOR_SIZE;
@@ -165,8 +169,11 @@ static void qed_write_header(BDRVQEDState *s, BlockDriverCompletionFunc cb,
write_header_cb->iov.iov_len = len;
qemu_iovec_init_external(&write_header_cb->qiov, &write_header_cb->iov, 1);
bdrv_aio_readv(s->bs->file, 0, &write_header_cb->qiov, nsectors,
qed_write_header_read_cb, write_header_cb);
acb = bdrv_aio_readv(s->bs->file, 0, &write_header_cb->qiov, nsectors,
qed_write_header_read_cb, write_header_cb);
if (!acb) {
qed_write_header_cb(write_header_cb, -EIO);
}
}
static uint64_t qed_max_image_size(uint32_t cluster_size, uint32_t table_size)
@@ -367,12 +374,6 @@ static void qed_cancel_need_check_timer(BDRVQEDState *s)
qemu_del_timer(s->need_check_timer);
}
static void bdrv_qed_rebind(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;
s->bs = bs;
}
static int bdrv_qed_open(BlockDriverState *bs, int flags)
{
BDRVQEDState *s = bs->opaque;
@@ -387,6 +388,7 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
if (ret < 0) {
return ret;
}
ret = 0; /* ret should always be 0 or -errno */
qed_header_le_to_cpu(&le_header, &s->header);
if (s->header.magic != QED_MAGIC) {
@@ -456,7 +458,7 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
* feature is no longer valid.
*/
if ((s->header.autoclear_features & ~QED_AUTOCLEAR_FEATURE_MASK) != 0 &&
!bdrv_is_read_only(bs->file) && !(flags & BDRV_O_INCOMING)) {
!bdrv_is_read_only(bs->file)) {
s->header.autoclear_features &= QED_AUTOCLEAR_FEATURE_MASK;
ret = qed_write_header_sync(s);
@@ -477,20 +479,26 @@ static int bdrv_qed_open(BlockDriverState *bs, int flags)
}
/* If image was not closed cleanly, check consistency */
if (!(flags & BDRV_O_CHECK) && (s->header.features & QED_F_NEED_CHECK)) {
if (s->header.features & QED_F_NEED_CHECK) {
/* Read-only images cannot be fixed. There is no risk of corruption
* since write operations are not possible. Therefore, allow
* potentially inconsistent images to be opened read-only. This can
* aid data recovery from an otherwise inconsistent image.
*/
if (!bdrv_is_read_only(bs->file) &&
!(flags & BDRV_O_INCOMING)) {
if (!bdrv_is_read_only(bs->file)) {
BdrvCheckResult result = {0};
ret = qed_check(s, &result, true);
if (ret) {
goto out;
}
if (!result.corruptions && !result.check_errors) {
/* Ensure fixes reach storage before clearing check bit */
bdrv_flush(s->bs);
s->header.features &= ~QED_F_NEED_CHECK;
qed_write_header_sync(s);
}
}
}
@@ -525,6 +533,11 @@ static void bdrv_qed_close(BlockDriverState *bs)
qemu_vfree(s->l1_table);
}
static int bdrv_qed_flush(BlockDriverState *bs)
{
return bdrv_flush(bs->file);
}
static int qed_create(const char *filename, uint32_t cluster_size,
uint64_t image_size, uint32_t table_size,
const char *backing_file, const char *backing_fmt)
@@ -582,7 +595,7 @@ static int qed_create(const char *filename, uint32_t cluster_size,
goto out;
}
l1_table = g_malloc0(l1_size);
l1_table = qemu_mallocz(l1_size);
ret = bdrv_pwrite(bs, header.l1_table_offset, l1_table, l1_size);
if (ret < 0) {
goto out;
@@ -590,7 +603,7 @@ static int qed_create(const char *filename, uint32_t cluster_size,
ret = 0; /* success */
out:
g_free(l1_table);
qemu_free(l1_table);
bdrv_delete(bs);
return ret;
}
@@ -644,7 +657,6 @@ static int bdrv_qed_create(const char *filename, QEMUOptionParameter *options)
}
typedef struct {
Coroutine *co;
int is_allocated;
int *pnum;
} QEDIsAllocatedCB;
@@ -654,14 +666,10 @@ static void qed_is_allocated_cb(void *opaque, int ret, uint64_t offset, size_t l
QEDIsAllocatedCB *cb = opaque;
*cb->pnum = len / BDRV_SECTOR_SIZE;
cb->is_allocated = (ret == QED_CLUSTER_FOUND || ret == QED_CLUSTER_ZERO);
if (cb->co) {
qemu_coroutine_enter(cb->co, NULL);
}
}
static int coroutine_fn bdrv_qed_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
static int bdrv_qed_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVQEDState *s = bs->opaque;
uint64_t pos = (uint64_t)sector_num * BDRV_SECTOR_SIZE;
@@ -672,14 +680,16 @@ static int coroutine_fn bdrv_qed_co_is_allocated(BlockDriverState *bs,
};
QEDRequest request = { .l2_table = NULL };
async_context_push();
qed_find_cluster(s, &request, pos, len, qed_is_allocated_cb, &cb);
/* Now sleep if the callback wasn't invoked immediately */
while (cb.is_allocated == -1) {
cb.co = qemu_coroutine_self();
qemu_coroutine_yield();
qemu_aio_wait();
}
async_context_pop();
qed_unref_l2_cache_entry(request.l2_table);
return cb.is_allocated;
@@ -711,6 +721,7 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
QEMUIOVector *qiov,
BlockDriverCompletionFunc *cb, void *opaque)
{
BlockDriverAIOCB *aiocb;
uint64_t backing_length = 0;
size_t size;
@@ -729,7 +740,7 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* Zero all sectors if reading beyond the end of the backing file */
if (pos >= backing_length ||
pos + qiov->size > backing_length) {
qemu_iovec_memset(qiov, 0, 0, qiov->size);
qemu_iovec_memset(qiov, 0, qiov->size);
}
/* Complete now if there are no backing file sectors to read */
@@ -741,9 +752,12 @@ static void qed_read_backing_file(BDRVQEDState *s, uint64_t pos,
/* If the read straddles the end of the backing file, shorten it */
size = MIN((uint64_t)backing_length - pos, qiov->size);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING_AIO);
bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING);
aiocb = bdrv_aio_readv(s->bs->backing_hd, pos / BDRV_SECTOR_SIZE,
qiov, size / BDRV_SECTOR_SIZE, cb, opaque);
if (!aiocb) {
cb(opaque, -EIO);
}
}
typedef struct {
@@ -765,6 +779,7 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
{
CopyFromBackingFileCB *copy_cb = opaque;
BDRVQEDState *s = copy_cb->s;
BlockDriverAIOCB *aiocb;
if (ret) {
qed_copy_from_backing_file_cb(copy_cb, ret);
@@ -772,9 +787,13 @@ static void qed_copy_from_backing_file_write(void *opaque, int ret)
}
BLKDBG_EVENT(s->bs->file, BLKDBG_COW_WRITE);
bdrv_aio_writev(s->bs->file, copy_cb->offset / BDRV_SECTOR_SIZE,
&copy_cb->qiov, copy_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_copy_from_backing_file_cb, copy_cb);
aiocb = bdrv_aio_writev(s->bs->file, copy_cb->offset / BDRV_SECTOR_SIZE,
&copy_cb->qiov,
copy_cb->qiov.size / BDRV_SECTOR_SIZE,
qed_copy_from_backing_file_cb, copy_cb);
if (!aiocb) {
qed_copy_from_backing_file_cb(copy_cb, -EIO);
}
}
/**
@@ -866,12 +885,6 @@ static void qed_aio_complete(QEDAIOCB *acb, int ret)
qemu_iovec_destroy(&acb->cur_qiov);
qed_unref_l2_cache_entry(acb->request.l2_table);
/* Free the buffer we may have allocated for zero writes */
if (acb->flags & QED_AIOCB_ZERO) {
qemu_vfree(acb->qiov->iov[0].iov_base);
acb->qiov->iov[0].iov_base = NULL;
}
/* Arrange for a bh to invoke the completion function */
acb->bh_ret = ret;
acb->bh = qemu_bh_new(qed_aio_complete_bh, acb);
@@ -902,14 +915,14 @@ static void qed_commit_l2_update(void *opaque, int ret)
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
CachedL2Table *l2_table = acb->request.l2_table;
uint64_t l2_offset = l2_table->offset;
qed_commit_l2_cache_entry(&s->l2_cache, l2_table);
/* This is guaranteed to succeed because we just committed the entry to the
* cache.
*/
acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache,
l2_table->offset);
assert(acb->request.l2_table != NULL);
qed_aio_next_io(opaque, ret);
@@ -938,8 +951,9 @@ static void qed_aio_write_l1_update(void *opaque, int ret)
/**
* Update L2 table with new cluster offsets and write them out
*/
static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset)
static void qed_aio_write_l2_update(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
bool need_alloc = acb->find_cluster_ret == QED_CLUSTER_L1;
int index;
@@ -955,7 +969,7 @@ static void qed_aio_write_l2_update(QEDAIOCB *acb, int ret, uint64_t offset)
index = qed_l2_index(s, acb->cur_pos);
qed_update_l2_table(s, acb->request.l2_table->table, index, acb->cur_nclusters,
offset);
acb->cur_cluster);
if (need_alloc) {
/* Write out the whole new L2 table */
@@ -972,12 +986,6 @@ err:
qed_aio_complete(acb, ret);
}
static void qed_aio_write_l2_update_cb(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
qed_aio_write_l2_update(acb, ret, acb->cur_cluster);
}
/**
* Flush new data clusters before updating the L2 table
*
@@ -992,7 +1000,7 @@ static void qed_aio_write_flush_before_l2_update(void *opaque, int ret)
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
if (!bdrv_aio_flush(s->bs->file, qed_aio_write_l2_update_cb, opaque)) {
if (!bdrv_aio_flush(s->bs->file, qed_aio_write_l2_update, opaque)) {
qed_aio_complete(acb, -EIO);
}
}
@@ -1007,6 +1015,7 @@ static void qed_aio_write_main(void *opaque, int ret)
uint64_t offset = acb->cur_cluster +
qed_offset_into_cluster(s, acb->cur_pos);
BlockDriverCompletionFunc *next_fn;
BlockDriverAIOCB *file_acb;
trace_qed_aio_write_main(s, acb, ret, offset, acb->cur_qiov.size);
@@ -1021,14 +1030,18 @@ static void qed_aio_write_main(void *opaque, int ret)
if (s->bs->backing_hd) {
next_fn = qed_aio_write_flush_before_l2_update;
} else {
next_fn = qed_aio_write_l2_update_cb;
next_fn = qed_aio_write_l2_update;
}
}
BLKDBG_EVENT(s->bs->file, BLKDBG_WRITE_AIO);
bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
next_fn, acb);
file_acb = bdrv_aio_writev(s->bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov,
acb->cur_qiov.size / BDRV_SECTOR_SIZE,
next_fn, acb);
if (!file_acb) {
qed_aio_complete(acb, -EIO);
}
}
/**
@@ -1083,18 +1096,6 @@ static bool qed_should_set_need_check(BDRVQEDState *s)
return !(s->header.features & QED_F_NEED_CHECK);
}
static void qed_aio_write_zero_cluster(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
if (ret) {
qed_aio_complete(acb, ret);
return;
}
qed_aio_write_l2_update(acb, 0, 1);
}
/**
* Write new data cluster
*
@@ -1106,7 +1107,6 @@ static void qed_aio_write_zero_cluster(void *opaque, int ret)
static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
{
BDRVQEDState *s = acb_to_s(acb);
BlockDriverCompletionFunc *cb;
/* Cancel timer when the first allocating request comes in */
if (QSIMPLEQ_EMPTY(&s->allocating_write_reqs)) {
@@ -1124,26 +1124,14 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
acb->cur_nclusters = qed_bytes_to_clusters(s,
qed_offset_into_cluster(s, acb->cur_pos) + len);
qemu_iovec_concat(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
if (acb->flags & QED_AIOCB_ZERO) {
/* Skip ahead if the clusters are already zero */
if (acb->find_cluster_ret == QED_CLUSTER_ZERO) {
qed_aio_next_io(acb, 0);
return;
}
cb = qed_aio_write_zero_cluster;
} else {
cb = qed_aio_write_prefill;
acb->cur_cluster = qed_alloc_clusters(s, acb->cur_nclusters);
}
acb->cur_cluster = qed_alloc_clusters(s, acb->cur_nclusters);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
if (qed_should_set_need_check(s)) {
s->header.features |= QED_F_NEED_CHECK;
qed_write_header(s, cb, acb);
qed_write_header(s, qed_aio_write_prefill, acb);
} else {
cb(acb, 0);
qed_aio_write_prefill(acb, 0);
}
}
@@ -1158,19 +1146,9 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
*/
static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
{
/* Allocate buffer for zero writes */
if (acb->flags & QED_AIOCB_ZERO) {
struct iovec *iov = acb->qiov->iov;
if (!iov->iov_base) {
iov->iov_base = qemu_blockalign(acb->common.bs, iov->iov_len);
memset(iov->iov_base, 0, iov->iov_len);
}
}
/* Calculate the I/O vector */
acb->cur_cluster = offset;
qemu_iovec_concat(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
/* Do the actual write */
qed_aio_write_main(acb, 0);
@@ -1230,6 +1208,7 @@ static void qed_aio_read_data(void *opaque, int ret,
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
BlockDriverState *bs = acb->common.bs;
BlockDriverAIOCB *file_acb;
/* Adjust offset into cluster */
offset += qed_offset_into_cluster(s, acb->cur_pos);
@@ -1240,11 +1219,11 @@ static void qed_aio_read_data(void *opaque, int ret,
goto err;
}
qemu_iovec_concat(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
/* Handle zero cluster and backing file reads */
if (ret == QED_CLUSTER_ZERO) {
qemu_iovec_memset(&acb->cur_qiov, 0, 0, acb->cur_qiov.size);
qemu_iovec_memset(&acb->cur_qiov, 0, acb->cur_qiov.size);
qed_aio_next_io(acb, 0);
return;
} else if (ret != QED_CLUSTER_FOUND) {
@@ -1254,9 +1233,14 @@ static void qed_aio_read_data(void *opaque, int ret,
}
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov, acb->cur_qiov.size / BDRV_SECTOR_SIZE,
qed_aio_next_io, acb);
file_acb = bdrv_aio_readv(bs->file, offset / BDRV_SECTOR_SIZE,
&acb->cur_qiov,
acb->cur_qiov.size / BDRV_SECTOR_SIZE,
qed_aio_next_io, acb);
if (!file_acb) {
ret = -EIO;
goto err;
}
return;
err:
@@ -1270,8 +1254,8 @@ static void qed_aio_next_io(void *opaque, int ret)
{
QEDAIOCB *acb = opaque;
BDRVQEDState *s = acb_to_s(acb);
QEDFindClusterFunc *io_fn = (acb->flags & QED_AIOCB_WRITE) ?
qed_aio_write_data : qed_aio_read_data;
QEDFindClusterFunc *io_fn =
acb->is_write ? qed_aio_write_data : qed_aio_read_data;
trace_qed_aio_next_io(s, acb, ret, acb->cur_pos + acb->cur_qiov.size);
@@ -1301,14 +1285,14 @@ static BlockDriverAIOCB *qed_aio_setup(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque, int flags)
void *opaque, bool is_write)
{
QEDAIOCB *acb = qemu_aio_get(&qed_aio_pool, bs, cb, opaque);
trace_qed_aio_setup(bs->opaque, acb, sector_num, nb_sectors,
opaque, flags);
opaque, is_write);
acb->flags = flags;
acb->is_write = is_write;
acb->finished = NULL;
acb->qiov = qiov;
acb->qiov_offset = 0;
@@ -1328,7 +1312,7 @@ static BlockDriverAIOCB *bdrv_qed_aio_readv(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, false);
}
static BlockDriverAIOCB *bdrv_qed_aio_writev(BlockDriverState *bs,
@@ -1337,66 +1321,14 @@ static BlockDriverAIOCB *bdrv_qed_aio_writev(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb,
opaque, QED_AIOCB_WRITE);
return qed_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, true);
}
typedef struct {
Coroutine *co;
int ret;
bool done;
} QEDWriteZeroesCB;
static void coroutine_fn qed_co_write_zeroes_cb(void *opaque, int ret)
static BlockDriverAIOCB *bdrv_qed_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
QEDWriteZeroesCB *cb = opaque;
cb->done = true;
cb->ret = ret;
if (cb->co) {
qemu_coroutine_enter(cb->co, NULL);
}
}
static int coroutine_fn bdrv_qed_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors)
{
BlockDriverAIOCB *blockacb;
BDRVQEDState *s = bs->opaque;
QEDWriteZeroesCB cb = { .done = false };
QEMUIOVector qiov;
struct iovec iov;
/* Refuse if there are untouched backing file sectors */
if (bs->backing_hd) {
if (qed_offset_into_cluster(s, sector_num * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
if (qed_offset_into_cluster(s, nb_sectors * BDRV_SECTOR_SIZE) != 0) {
return -ENOTSUP;
}
}
/* Zero writes start without an I/O buffer. If a buffer becomes necessary
* then it will be allocated during request processing.
*/
iov.iov_base = NULL,
iov.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
qemu_iovec_init_external(&qiov, &iov, 1);
blockacb = qed_aio_setup(bs, sector_num, &qiov, nb_sectors,
qed_co_write_zeroes_cb, &cb,
QED_AIOCB_WRITE | QED_AIOCB_ZERO);
if (!blockacb) {
return -EIO;
}
if (!cb.done) {
cb.co = qemu_coroutine_self();
qemu_coroutine_yield();
}
assert(cb.done);
return cb.ret;
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int bdrv_qed_truncate(BlockDriverState *bs, int64_t offset)
@@ -1436,7 +1368,6 @@ static int bdrv_qed_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
memset(bdi, 0, sizeof(*bdi));
bdi->cluster_size = s->header.cluster_size;
bdi->is_dirty = s->header.features & QED_F_NEED_CHECK;
return 0;
}
@@ -1492,41 +1423,29 @@ static int bdrv_qed_change_backing_file(BlockDriverState *bs,
}
/* Prepare new header */
buffer = g_malloc(buffer_len);
buffer = qemu_malloc(buffer_len);
qed_header_cpu_to_le(&new_header, &le_header);
memcpy(buffer, &le_header, sizeof(le_header));
buffer_len = sizeof(le_header);
if (backing_file) {
memcpy(buffer + buffer_len, backing_file, backing_file_len);
buffer_len += backing_file_len;
}
memcpy(buffer + buffer_len, backing_file, backing_file_len);
buffer_len += backing_file_len;
/* Write new header */
ret = bdrv_pwrite_sync(bs->file, 0, buffer, buffer_len);
g_free(buffer);
qemu_free(buffer);
if (ret == 0) {
memcpy(&s->header, &new_header, sizeof(new_header));
}
return ret;
}
static void bdrv_qed_invalidate_cache(BlockDriverState *bs)
static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result)
{
BDRVQEDState *s = bs->opaque;
bdrv_qed_close(bs);
memset(s, 0, sizeof(BDRVQEDState));
bdrv_qed_open(bs, bs->open_flags);
}
static int bdrv_qed_check(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix)
{
BDRVQEDState *s = bs->opaque;
return qed_check(s, result, !!fix);
return qed_check(s, result, false);
}
static QEMUOptionParameter qed_create_options[] = {
@@ -1561,20 +1480,19 @@ static BlockDriver bdrv_qed = {
.create_options = qed_create_options,
.bdrv_probe = bdrv_qed_probe,
.bdrv_rebind = bdrv_qed_rebind,
.bdrv_open = bdrv_qed_open,
.bdrv_close = bdrv_qed_close,
.bdrv_create = bdrv_qed_create,
.bdrv_co_is_allocated = bdrv_qed_co_is_allocated,
.bdrv_flush = bdrv_qed_flush,
.bdrv_is_allocated = bdrv_qed_is_allocated,
.bdrv_make_empty = bdrv_qed_make_empty,
.bdrv_aio_readv = bdrv_qed_aio_readv,
.bdrv_aio_writev = bdrv_qed_aio_writev,
.bdrv_co_write_zeroes = bdrv_qed_co_write_zeroes,
.bdrv_aio_flush = bdrv_qed_aio_flush,
.bdrv_truncate = bdrv_qed_truncate,
.bdrv_getlength = bdrv_qed_getlength,
.bdrv_get_info = bdrv_qed_get_info,
.bdrv_change_backing_file = bdrv_qed_change_backing_file,
.bdrv_invalidate_cache = bdrv_qed_invalidate_cache,
.bdrv_check = bdrv_qed_check,
};


@@ -123,17 +123,12 @@ typedef struct QEDRequest {
CachedL2Table *l2_table;
} QEDRequest;
enum {
QED_AIOCB_WRITE = 0x0001, /* read or write? */
QED_AIOCB_ZERO = 0x0002, /* zero write, used with QED_AIOCB_WRITE */
};
typedef struct QEDAIOCB {
BlockDriverAIOCB common;
QEMUBH *bh;
int bh_ret; /* final return status for completion bh */
QSIMPLEQ_ENTRY(QEDAIOCB) next; /* next request */
int flags; /* QED_AIOCB_* bits ORed together */
bool is_write; /* false - read, true - write */
bool *finished; /* signal for cancel completion */
uint64_t end_pos; /* request end on block device, in bytes */
@@ -210,11 +205,6 @@ typedef struct {
void *gencb_alloc(size_t len, BlockDriverCompletionFunc *cb, void *opaque);
void gencb_complete(void *opaque, int ret);
/**
* Header functions
*/
int qed_write_header_sync(BDRVQEDState *s);
/**
* L2 cache functions
*/


@@ -9,8 +9,6 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#ifndef QEMU_RAW_POSIX_AIO_H
#define QEMU_RAW_POSIX_AIO_H


@@ -29,7 +29,7 @@
#include "module.h"
#include "block/raw-posix-aio.h"
#if defined(__APPLE__) && (__MACH__)
#ifdef CONFIG_COCOA
#include <paths.h>
#include <sys/param.h>
#include <IOKit/IOKitLib.h>
@@ -52,10 +52,6 @@
#include <sys/param.h>
#include <linux/cdrom.h>
#include <linux/fd.h>
#include <linux/fs.h>
#endif
#ifdef CONFIG_FIEMAP
#include <linux/fiemap.h>
#endif
#if defined (__FreeBSD__) || defined(__FreeBSD_kernel__)
#include <sys/disk.h>
@@ -234,19 +230,13 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
}
}
/* We're falling back to POSIX AIO in some cases so init always */
if (paio_init() < 0) {
goto out_free_buf;
}
#ifdef CONFIG_LINUX_AIO
/*
* Currently Linux does AIO only for files opened with O_DIRECT
* specified so check NOCACHE flag too
*/
if ((bdrv_flags & (BDRV_O_NOCACHE|BDRV_O_NATIVE_AIO)) ==
(BDRV_O_NOCACHE|BDRV_O_NATIVE_AIO)) {
/* We're falling back to POSIX AIO in some cases */
paio_init();
s->aio_ctx = laio_init();
if (!s->aio_ctx) {
goto out_free_buf;
@@ -255,6 +245,9 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
} else
#endif
{
if (paio_init() < 0) {
goto out_free_buf;
}
#ifdef CONFIG_LINUX_AIO
s->use_aio = 0;
#endif
@@ -271,7 +264,7 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
out_free_buf:
qemu_vfree(s->aligned_buf);
out_close:
qemu_close(fd);
close(fd);
return -errno;
}
@@ -300,6 +293,273 @@ static int raw_open(BlockDriverState *bs, const char *filename, int flags)
#endif
*/
/*
* offset and count are in bytes, but must be multiples of 512 for files
* opened with O_DIRECT. buf must be aligned to 512 bytes then.
*
* This function may be called without alignment if the caller ensures
* that O_DIRECT is not in effect.
*/
static int raw_pread_aligned(BlockDriverState *bs, int64_t offset,
uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
int ret;
ret = fd_open(bs);
if (ret < 0)
return ret;
ret = pread(s->fd, buf, count, offset);
if (ret == count)
return ret;
/* Allow reads beyond the end (needed for pwrite) */
if ((ret == 0) && bs->growable) {
int64_t size = raw_getlength(bs);
if (offset >= size) {
memset(buf, 0, count);
return count;
}
}
DEBUG_BLOCK_PRINT("raw_pread(%d:%s, %" PRId64 ", %p, %d) [%" PRId64
"] read failed %d : %d = %s\n",
s->fd, bs->filename, offset, buf, count,
bs->total_sectors, ret, errno, strerror(errno));
/* Try harder for CDrom. */
if (s->type != FTYPE_FILE) {
ret = pread(s->fd, buf, count, offset);
if (ret == count)
return ret;
ret = pread(s->fd, buf, count, offset);
if (ret == count)
return ret;
DEBUG_BLOCK_PRINT("raw_pread(%d:%s, %" PRId64 ", %p, %d) [%" PRId64
"] retry read failed %d : %d = %s\n",
s->fd, bs->filename, offset, buf, count,
bs->total_sectors, ret, errno, strerror(errno));
}
return (ret < 0) ? -errno : ret;
}
/*
* offset and count are in bytes, but must be multiples of the sector size
* for files opened with O_DIRECT. buf must be aligned to sector size bytes
* then.
*
* This function may be called without alignment if the caller ensures
* that O_DIRECT is not in effect.
*/
static int raw_pwrite_aligned(BlockDriverState *bs, int64_t offset,
const uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
int ret;
ret = fd_open(bs);
if (ret < 0)
return -errno;
ret = pwrite(s->fd, buf, count, offset);
if (ret == count)
return ret;
DEBUG_BLOCK_PRINT("raw_pwrite(%d:%s, %" PRId64 ", %p, %d) [%" PRId64
"] write failed %d : %d = %s\n",
s->fd, bs->filename, offset, buf, count,
bs->total_sectors, ret, errno, strerror(errno));
return (ret < 0) ? -errno : ret;
}
/*
* offset and count are in bytes and possibly not aligned. For files opened
* with O_DIRECT, necessary alignments are ensured before calling
* raw_pread_aligned to do the actual read.
*/
static int raw_pread(BlockDriverState *bs, int64_t offset,
uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
unsigned sector_mask = bs->buffer_alignment - 1;
int size, ret, shift, sum;
sum = 0;
if (s->aligned_buf != NULL) {
if (offset & sector_mask) {
/* align offset on a sector size bytes boundary */
shift = offset & sector_mask;
size = (shift + count + sector_mask) & ~sector_mask;
if (size > s->aligned_buf_size)
size = s->aligned_buf_size;
ret = raw_pread_aligned(bs, offset - shift, s->aligned_buf, size);
if (ret < 0)
return ret;
size = bs->buffer_alignment - shift;
if (size > count)
size = count;
memcpy(buf, s->aligned_buf + shift, size);
buf += size;
offset += size;
count -= size;
sum += size;
if (count == 0)
return sum;
}
if (count & sector_mask || (uintptr_t) buf & sector_mask) {
/* read on aligned buffer */
while (count) {
size = (count + sector_mask) & ~sector_mask;
if (size > s->aligned_buf_size)
size = s->aligned_buf_size;
ret = raw_pread_aligned(bs, offset, s->aligned_buf, size);
if (ret < 0) {
return ret;
} else if (ret == 0) {
fprintf(stderr, "raw_pread: read beyond end of file\n");
abort();
}
size = ret;
if (size > count)
size = count;
memcpy(buf, s->aligned_buf, size);
buf += size;
offset += size;
count -= size;
sum += size;
}
return sum;
}
}
return raw_pread_aligned(bs, offset, buf, count) + sum;
}
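A small worked example of the unaligned-head arithmetic used in raw_pread() above (512 stands in for bs->buffer_alignment; the bounce buffer and the actual pread are omitted):

#include <stdio.h>

int main(void)
{
    unsigned alignment = 512;                 /* bs->buffer_alignment */
    unsigned sector_mask = alignment - 1;
    long long offset = 700;                   /* unaligned request start */
    int count = 3000;                         /* request length in bytes */

    int shift = offset & sector_mask;         /* 188: distance into the sector */
    long long aligned_off = offset - shift;   /* 512: where the bounce read starts */
    int head = alignment - shift;             /* 324: bytes served from the bounce buffer */
    if (head > count) {
        head = count;
    }

    printf("bounce read at %lld, copy %d bytes, %d bytes remain\n",
           aligned_off, head, count - head);  /* 512, 324, 2676 */
    return 0;
}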
static int raw_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
ret = raw_pread(bs, sector_num * BDRV_SECTOR_SIZE, buf,
nb_sectors * BDRV_SECTOR_SIZE);
if (ret == (nb_sectors * BDRV_SECTOR_SIZE))
ret = 0;
return ret;
}
/*
* offset and count are in bytes and possibly not aligned. For files opened
* with O_DIRECT, necessary alignments are ensured before calling
* raw_pwrite_aligned to do the actual write.
*/
static int raw_pwrite(BlockDriverState *bs, int64_t offset,
const uint8_t *buf, int count)
{
BDRVRawState *s = bs->opaque;
unsigned sector_mask = bs->buffer_alignment - 1;
int size, ret, shift, sum;
sum = 0;
if (s->aligned_buf != NULL) {
if (offset & sector_mask) {
/* align offset on a sector size bytes boundary */
shift = offset & sector_mask;
ret = raw_pread_aligned(bs, offset - shift, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
size = bs->buffer_alignment - shift;
if (size > count)
size = count;
memcpy(s->aligned_buf + shift, buf, size);
ret = raw_pwrite_aligned(bs, offset - shift, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
buf += size;
offset += size;
count -= size;
sum += size;
if (count == 0)
return sum;
}
if (count & sector_mask || (uintptr_t) buf & sector_mask) {
while ((size = (count & ~sector_mask)) != 0) {
if (size > s->aligned_buf_size)
size = s->aligned_buf_size;
memcpy(s->aligned_buf, buf, size);
ret = raw_pwrite_aligned(bs, offset, s->aligned_buf, size);
if (ret < 0)
return ret;
buf += ret;
offset += ret;
count -= ret;
sum += ret;
}
/* here, count < sector_size because (count & ~sector_mask) == 0 */
if (count) {
ret = raw_pread_aligned(bs, offset, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
memcpy(s->aligned_buf, buf, count);
ret = raw_pwrite_aligned(bs, offset, s->aligned_buf,
bs->buffer_alignment);
if (ret < 0)
return ret;
if (count < ret)
ret = count;
sum += ret;
}
return sum;
}
}
return raw_pwrite_aligned(bs, offset, buf, count) + sum;
}
static int raw_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
ret = raw_pwrite(bs, sector_num * BDRV_SECTOR_SIZE, buf,
nb_sectors * BDRV_SECTOR_SIZE);
if (ret == (nb_sectors * BDRV_SECTOR_SIZE))
ret = 0;
return ret;
}
/*
* Check if all memory in this vector is sector aligned.
*/
@@ -327,7 +587,7 @@ static BlockDriverAIOCB *raw_aio_submit(BlockDriverState *bs,
/*
* If O_DIRECT is used the buffer needs to be aligned on a sector
* boundary. Check if this is the case or tell the low-level
* boundary. Check if this is the case or telll the low-level
* driver that it needs to copy the buffer.
*/
if (s->aligned_buf) {
@@ -376,7 +636,7 @@ static void raw_close(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
if (s->fd >= 0) {
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
if (s->aligned_buf != NULL)
qemu_vfree(s->aligned_buf);
@@ -386,24 +646,10 @@ static void raw_close(BlockDriverState *bs)
static int raw_truncate(BlockDriverState *bs, int64_t offset)
{
BDRVRawState *s = bs->opaque;
struct stat st;
if (fstat(s->fd, &st)) {
return -errno;
}
if (S_ISREG(st.st_mode)) {
if (ftruncate(s->fd, offset) < 0) {
return -errno;
}
} else if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode)) {
if (offset > raw_getlength(bs)) {
return -EINVAL;
}
} else {
if (s->type != FTYPE_FILE)
return -ENOTSUP;
}
if (ftruncate(s->fd, offset) < 0)
return -errno;
return 0;
}
@@ -509,7 +755,7 @@ again:
}
if (size == 0)
#endif
#if defined(__APPLE__) && defined(__MACH__)
#ifdef CONFIG_COCOA
size = LONG_LONG_MAX;
#else
size = lseek(fd, 0LL, SEEK_END);
@@ -572,119 +818,25 @@ static int raw_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (fd < 0) {
result = -errno;
} else {
if (ftruncate(fd, total_size * BDRV_SECTOR_SIZE) != 0) {
result = -errno;
}
if (qemu_close(fd) != 0) {
if (close(fd) != 0) {
result = -errno;
}
}
return result;
}
/*
* Returns true iff the specified sector is present in the disk image. Drivers
* not implementing the functionality are assumed to not support backing files,
* hence all their sectors are reported as allocated.
*
* If 'sector_num' is beyond the end of the disk image the return value is 0
* and 'pnum' is set to 0.
*
* 'pnum' is set to the number of sectors (including and immediately following
* the specified sector) that are known to be in the same
* allocated/unallocated state.
*
* 'nb_sectors' is the max value 'pnum' should be set to. If nb_sectors goes
* beyond the end of the disk image it will be clamped.
*/
static int coroutine_fn raw_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
static int raw_flush(BlockDriverState *bs)
{
off_t start, data, hole;
int ret;
ret = fd_open(bs);
if (ret < 0) {
return ret;
}
start = sector_num * BDRV_SECTOR_SIZE;
#ifdef CONFIG_FIEMAP
BDRVRawState *s = bs->opaque;
struct {
struct fiemap fm;
struct fiemap_extent fe;
} f;
f.fm.fm_start = start;
f.fm.fm_length = (int64_t)nb_sectors * BDRV_SECTOR_SIZE;
f.fm.fm_flags = 0;
f.fm.fm_extent_count = 1;
f.fm.fm_reserved = 0;
if (ioctl(s->fd, FS_IOC_FIEMAP, &f) == -1) {
/* Assume everything is allocated. */
*pnum = nb_sectors;
return 1;
}
if (f.fm.fm_mapped_extents == 0) {
/* No extents found, data is beyond f.fm.fm_start + f.fm.fm_length.
* f.fm.fm_start + f.fm.fm_length must be clamped to the file size!
*/
off_t length = lseek(s->fd, 0, SEEK_END);
hole = f.fm.fm_start;
data = MIN(f.fm.fm_start + f.fm.fm_length, length);
} else {
data = f.fe.fe_logical;
hole = f.fe.fe_logical + f.fe.fe_length;
}
#elif defined SEEK_HOLE && defined SEEK_DATA
BDRVRawState *s = bs->opaque;
hole = lseek(s->fd, start, SEEK_HOLE);
if (hole == -1) {
/* -ENXIO indicates that sector_num was past the end of the file.
* There is a virtual hole there. */
assert(errno != -ENXIO);
/* Most likely EINVAL. Assume everything is allocated. */
*pnum = nb_sectors;
return 1;
}
if (hole > start) {
data = start;
} else {
/* On a hole. We need another syscall to find its end. */
data = lseek(s->fd, start, SEEK_DATA);
if (data == -1) {
data = lseek(s->fd, 0, SEEK_END);
}
}
#else
*pnum = nb_sectors;
return 1;
#endif
if (data <= start) {
/* On a data extent, compute sectors to the end of the extent. */
*pnum = MIN(nb_sectors, (hole - start) / BDRV_SECTOR_SIZE);
return 1;
} else {
/* On a hole, compute sectors to the beginning of the next extent. */
*pnum = MIN(nb_sectors, (data - start) / BDRV_SECTOR_SIZE);
return 0;
}
return qemu_fdatasync(s->fd);
}
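The allocation-query contract spelled out in the comment further up (the return value says whether a sector is allocated in this layer, and *pnum reports how many consecutive sectors, clamped to nb_sectors, share that state) lends itself to a simple scanning loop. A hedged sketch of such a caller, assuming the synchronous bdrv_is_allocated() wrapper with the same contract and the usual stdio/inttypes/limits includes:

    /* Print each run of allocated or unallocated sectors in an image. */
    static void scan_allocation(BlockDriverState *bs, int64_t total_sectors)
    {
        int64_t sector = 0;

        while (sector < total_sectors) {
            int pnum;
            int64_t left = total_sectors - sector;
            int chunk = left > INT_MAX ? INT_MAX : (int)left;
            int allocated = bdrv_is_allocated(bs, sector, chunk, &pnum);

            /* 'pnum' sectors starting at 'sector' are all in one state. */
            printf("%" PRId64 "..%" PRId64 ": %s\n",
                   sector, sector + pnum - 1,
                   allocated ? "allocated" : "unallocated");
            sector += pnum;
        }
    }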
#ifdef CONFIG_XFS
@@ -706,8 +858,7 @@ static int xfs_discard(BDRVRawState *s, int64_t sector_num, int nb_sectors)
}
#endif
static coroutine_fn int raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
static int raw_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors)
{
#ifdef CONFIG_XFS
BDRVRawState *s = bs->opaque;
@@ -735,10 +886,12 @@ static BlockDriver bdrv_file = {
.instance_size = sizeof(BDRVRawState),
.bdrv_probe = NULL, /* no probe for protocols */
.bdrv_file_open = raw_open,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_close = raw_close,
.bdrv_create = raw_create,
.bdrv_co_discard = raw_co_discard,
.bdrv_co_is_allocated = raw_co_is_allocated,
.bdrv_flush = raw_flush,
.bdrv_discard = raw_discard,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
@@ -755,7 +908,7 @@ static BlockDriver bdrv_file = {
/***********************************************/
/* host device */
#if defined(__APPLE__) && defined(__MACH__)
#ifdef CONFIG_COCOA
static kern_return_t FindEjectableCDMedia( io_iterator_t *mediaIterator );
static kern_return_t GetBSDPath( io_iterator_t mediaIterator, char *bsdPath, CFIndex maxPathSize );
@@ -833,7 +986,7 @@ static int hdev_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVRawState *s = bs->opaque;
#if defined(__APPLE__) && defined(__MACH__)
#ifdef CONFIG_COCOA
if (strstart(filename, "/dev/cdrom", NULL)) {
kern_return_t kernResult;
io_iterator_t mediaIterator;
@@ -846,11 +999,11 @@ static int hdev_open(BlockDriverState *bs, const char *filename, int flags)
if ( bsdPath[ 0 ] != '\0' ) {
strcat(bsdPath,"s0");
/* some CDs don't have a partition 0 */
fd = qemu_open(bsdPath, O_RDONLY | O_BINARY | O_LARGEFILE);
fd = open(bsdPath, O_RDONLY | O_BINARY | O_LARGEFILE);
if (fd < 0) {
bsdPath[strlen(bsdPath)-1] = '1';
} else {
qemu_close(fd);
close(fd);
}
filename = bsdPath;
}
@@ -889,7 +1042,7 @@ static int fd_open(BlockDriverState *bs)
last_media_present = (s->fd >= 0);
if (s->fd >= 0 &&
(get_clock() - s->fd_open_time) >= FD_OPEN_TIMEOUT) {
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
#ifdef DEBUG_FLOPPY
printf("Floppy closed\n");
@@ -903,7 +1056,7 @@ static int fd_open(BlockDriverState *bs)
#endif
return -EIO;
}
s->fd = qemu_open(bs->filename, s->open_flags & ~O_NONBLOCK);
s->fd = open(bs->filename, s->open_flags & ~O_NONBLOCK);
if (s->fd < 0) {
s->fd_error_time = get_clock();
s->fd_got_error = 1;
@@ -977,7 +1130,7 @@ static int hdev_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename, O_WRONLY | O_BINARY);
fd = open(filename, O_WRONLY | O_BINARY);
if (fd < 0)
return -errno;
@@ -988,7 +1141,7 @@ static int hdev_create(const char *filename, QEMUOptionParameter *options)
else if (lseek(fd, 0, SEEK_END) < total_size * BDRV_SECTOR_SIZE)
ret = -ENOSPC;
qemu_close(fd);
close(fd);
return ret;
}
@@ -1007,12 +1160,14 @@ static BlockDriver bdrv_host_device = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1038,7 +1193,7 @@ static int floppy_open(BlockDriverState *bs, const char *filename, int flags)
return ret;
/* close fd so that we can reopen it as needed */
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
s->fd_media_changed = 1;
@@ -1052,12 +1207,10 @@ static int floppy_probe_device(const char *filename)
struct floppy_struct fdparam;
struct stat st;
if (strstart(filename, "/dev/fd", NULL) &&
!strstart(filename, "/dev/fdset/", NULL)) {
if (strstart(filename, "/dev/fd", NULL))
prio = 50;
}
fd = qemu_open(filename, O_RDONLY | O_NONBLOCK);
fd = open(filename, O_RDONLY | O_NONBLOCK);
if (fd < 0) {
goto out;
}
@@ -1072,7 +1225,7 @@ static int floppy_probe_device(const char *filename)
prio = 100;
outc:
qemu_close(fd);
close(fd);
out:
return prio;
}
@@ -1101,21 +1254,23 @@ static int floppy_media_changed(BlockDriverState *bs)
return ret;
}
static void floppy_eject(BlockDriverState *bs, bool eject_flag)
static int floppy_eject(BlockDriverState *bs, int eject_flag)
{
BDRVRawState *s = bs->opaque;
int fd;
if (s->fd >= 0) {
qemu_close(s->fd);
close(s->fd);
s->fd = -1;
}
fd = qemu_open(bs->filename, s->open_flags | O_NONBLOCK);
fd = open(bs->filename, s->open_flags | O_NONBLOCK);
if (fd >= 0) {
if (ioctl(fd, FDEJECT, 0) < 0)
perror("FDEJECT");
qemu_close(fd);
close(fd);
}
return 0;
}
static BlockDriver bdrv_host_floppy = {
@@ -1128,12 +1283,14 @@ static BlockDriver bdrv_host_floppy = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1160,7 +1317,7 @@ static int cdrom_probe_device(const char *filename)
int prio = 0;
struct stat st;
fd = qemu_open(filename, O_RDONLY | O_NONBLOCK);
fd = open(filename, O_RDONLY | O_NONBLOCK);
if (fd < 0) {
goto out;
}
@@ -1175,7 +1332,7 @@ static int cdrom_probe_device(const char *filename)
prio = 100;
outc:
qemu_close(fd);
close(fd);
out:
return prio;
}
@@ -1191,7 +1348,7 @@ static int cdrom_is_inserted(BlockDriverState *bs)
return 0;
}
static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
static int cdrom_eject(BlockDriverState *bs, int eject_flag)
{
BDRVRawState *s = bs->opaque;
@@ -1202,9 +1359,11 @@ static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
if (ioctl(s->fd, CDROMCLOSETRAY, NULL) < 0)
perror("CDROMEJECT");
}
return 0;
}
static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
static int cdrom_set_locked(BlockDriverState *bs, int locked)
{
BDRVRawState *s = bs->opaque;
@@ -1215,6 +1374,8 @@ static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
*/
/* perror("CDROM_LOCKDOOR"); */
}
return 0;
}
static BlockDriver bdrv_host_cdrom = {
@@ -1227,12 +1388,14 @@ static BlockDriver bdrv_host_cdrom = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1240,7 +1403,7 @@ static BlockDriver bdrv_host_cdrom = {
/* removable device support */
.bdrv_is_inserted = cdrom_is_inserted,
.bdrv_eject = cdrom_eject,
.bdrv_lock_medium = cdrom_lock_medium,
.bdrv_set_locked = cdrom_set_locked,
/* generic scsi device */
.bdrv_ioctl = hdev_ioctl,
@@ -1260,7 +1423,7 @@ static int cdrom_open(BlockDriverState *bs, const char *filename, int flags)
if (ret)
return ret;
/* make sure the door isn't locked at this time */
/* make sure the door isnt locked at this time */
ioctl(s->fd, CDIOCALLOW);
return 0;
}
@@ -1283,15 +1446,15 @@ static int cdrom_reopen(BlockDriverState *bs)
* FreeBSD seems to not notice sometimes...
*/
if (s->fd >= 0)
qemu_close(s->fd);
fd = qemu_open(bs->filename, s->open_flags, 0644);
close(s->fd);
fd = open(bs->filename, s->open_flags, 0644);
if (fd < 0) {
s->fd = -1;
return -EIO;
}
s->fd = fd;
/* make sure the door isn't locked at this time */
/* make sure the door isnt locked at this time */
ioctl(s->fd, CDIOCALLOW);
return 0;
}
@@ -1301,12 +1464,12 @@ static int cdrom_is_inserted(BlockDriverState *bs)
return raw_getlength(bs) > 0;
}
static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
static int cdrom_eject(BlockDriverState *bs, int eject_flag)
{
BDRVRawState *s = bs->opaque;
if (s->fd < 0)
return;
return -ENOTSUP;
(void) ioctl(s->fd, CDIOCALLOW);
@@ -1318,15 +1481,17 @@ static void cdrom_eject(BlockDriverState *bs, bool eject_flag)
perror("CDIOCCLOSE");
}
cdrom_reopen(bs);
if (cdrom_reopen(bs) < 0)
return -ENOTSUP;
return 0;
}
static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
static int cdrom_set_locked(BlockDriverState *bs, int locked)
{
BDRVRawState *s = bs->opaque;
if (s->fd < 0)
return;
return -ENOTSUP;
if (ioctl(s->fd, (locked ? CDIOCPREVENT : CDIOCALLOW)) < 0) {
/*
* Note: an error can happen if the distribution automatically
@@ -1334,6 +1499,8 @@ static void cdrom_lock_medium(BlockDriverState *bs, bool locked)
*/
/* perror("CDROM_LOCKDOOR"); */
}
return 0;
}
static BlockDriver bdrv_host_cdrom = {
@@ -1346,12 +1513,14 @@ static BlockDriver bdrv_host_cdrom = {
.bdrv_create = hdev_create,
.create_options = raw_create_options,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_flush = raw_flush,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_truncate = raw_truncate,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,
@@ -1359,7 +1528,7 @@ static BlockDriver bdrv_host_cdrom = {
/* removable device support */
.bdrv_is_inserted = cdrom_is_inserted,
.bdrv_eject = cdrom_eject,
.bdrv_lock_medium = cdrom_lock_medium,
.bdrv_set_locked = cdrom_set_locked,
};
#endif /* __FreeBSD__ */


@@ -41,7 +41,6 @@ typedef struct BDRVRawState {
int qemu_ftruncate64(int fd, int64_t length)
{
LARGE_INTEGER li;
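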
DWORD dw;
LONG high;
HANDLE h;
BOOL res;
@@ -54,15 +53,12 @@ int qemu_ftruncate64(int fd, int64_t length)
/* get current position, ftruncate do not change position */
li.HighPart = 0;
li.LowPart = SetFilePointer (h, 0, &li.HighPart, FILE_CURRENT);
if (li.LowPart == INVALID_SET_FILE_POINTER && GetLastError() != NO_ERROR) {
if (li.LowPart == 0xffffffffUL && GetLastError() != NO_ERROR)
return -1;
}
high = length >> 32;
dw = SetFilePointer(h, (DWORD) length, &high, FILE_BEGIN);
if (dw == INVALID_SET_FILE_POINTER && GetLastError() != NO_ERROR) {
if (!SetFilePointer(h, (DWORD) length, &high, FILE_BEGIN))
return -1;
}
res = SetEndOfFile(h);
/* back to old position */
@@ -255,13 +251,13 @@ static int raw_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (fd < 0)
return -EIO;
set_sparse(fd);
ftruncate(fd, total_size * 512);
qemu_close(fd);
close(fd);
return 0;
}
@@ -281,11 +277,9 @@ static BlockDriver bdrv_file = {
.bdrv_file_open = raw_open,
.bdrv_close = raw_close,
.bdrv_create = raw_create,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_co_flush_to_disk = raw_flush,
.bdrv_flush = raw_flush,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_truncate = raw_truncate,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
@@ -399,6 +393,41 @@ static int hdev_open(BlockDriverState *bs, const char *filename, int flags)
return 0;
}
#if 0
/***********************************************/
/* removable device additional commands */
static int raw_is_inserted(BlockDriverState *bs)
{
return 1;
}
static int raw_media_changed(BlockDriverState *bs)
{
return -ENOTSUP;
}
static int raw_eject(BlockDriverState *bs, int eject_flag)
{
DWORD ret_count;
if (s->type == FTYPE_FILE)
return -ENOTSUP;
if (eject_flag) {
DeviceIoControl(s->hfile, IOCTL_STORAGE_EJECT_MEDIA,
NULL, 0, NULL, 0, &lpBytesReturned, NULL);
} else {
DeviceIoControl(s->hfile, IOCTL_STORAGE_LOAD_MEDIA,
NULL, 0, NULL, 0, &lpBytesReturned, NULL);
}
}
static int raw_set_locked(BlockDriverState *bs, int locked)
{
return -ENOTSUP;
}
#endif
static int hdev_has_zero_init(BlockDriverState *bs)
{
return 0;
@@ -411,12 +440,11 @@ static BlockDriver bdrv_host_device = {
.bdrv_probe_device = hdev_probe_device,
.bdrv_file_open = hdev_open,
.bdrv_close = raw_close,
.bdrv_flush = raw_flush,
.bdrv_has_zero_init = hdev_has_zero_init,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_co_flush_to_disk = raw_flush,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_getlength = raw_getlength,
.bdrv_get_allocated_file_size
= raw_get_allocated_file_size,


@@ -9,29 +9,45 @@ static int raw_open(BlockDriverState *bs, int flags)
return 0;
}
static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static int raw_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
return bdrv_read(bs->file, sector_num, buf, nb_sectors);
}
static int coroutine_fn raw_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static int raw_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
return bdrv_write(bs->file, sector_num, buf, nb_sectors);
}
static BlockDriverAIOCB *raw_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static BlockDriverAIOCB *raw_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static void raw_close(BlockDriverState *bs)
{
}
static int coroutine_fn raw_co_is_allocated(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors, int *pnum)
static int raw_flush(BlockDriverState *bs)
{
return bdrv_co_is_allocated(bs->file, sector_num, nb_sectors, pnum);
return bdrv_flush(bs->file);
}
static BlockDriverAIOCB *raw_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static int64_t raw_getlength(BlockDriverState *bs)
@@ -49,10 +65,9 @@ static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
return 1; /* everything can be opened as raw image */
}
static int coroutine_fn raw_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
static int raw_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors)
{
return bdrv_co_discard(bs->file, sector_num, nb_sectors);
return bdrv_discard(bs->file, sector_num, nb_sectors);
}
static int raw_is_inserted(BlockDriverState *bs)
@@ -60,19 +75,15 @@ static int raw_is_inserted(BlockDriverState *bs)
return bdrv_is_inserted(bs->file);
}
static int raw_media_changed(BlockDriverState *bs)
static int raw_eject(BlockDriverState *bs, int eject_flag)
{
return bdrv_media_changed(bs->file);
return bdrv_eject(bs->file, eject_flag);
}
static void raw_eject(BlockDriverState *bs, bool eject_flag)
static int raw_set_locked(BlockDriverState *bs, int locked)
{
bdrv_eject(bs->file, eject_flag);
}
static void raw_lock_medium(BlockDriverState *bs, bool locked)
{
bdrv_lock_medium(bs->file, locked);
bdrv_set_locked(bs->file, locked);
return 0;
}
static int raw_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
@@ -109,26 +120,26 @@ static int raw_has_zero_init(BlockDriverState *bs)
static BlockDriver bdrv_raw = {
.format_name = "raw",
/* It's really 0, but we need to make g_malloc() happy */
/* It's really 0, but we need to make qemu_malloc() happy */
.instance_size = 1,
.bdrv_open = raw_open,
.bdrv_close = raw_close,
.bdrv_co_readv = raw_co_readv,
.bdrv_co_writev = raw_co_writev,
.bdrv_co_is_allocated = raw_co_is_allocated,
.bdrv_co_discard = raw_co_discard,
.bdrv_read = raw_read,
.bdrv_write = raw_write,
.bdrv_flush = raw_flush,
.bdrv_probe = raw_probe,
.bdrv_getlength = raw_getlength,
.bdrv_truncate = raw_truncate,
.bdrv_is_inserted = raw_is_inserted,
.bdrv_media_changed = raw_media_changed,
.bdrv_eject = raw_eject,
.bdrv_lock_medium = raw_lock_medium,
.bdrv_aio_readv = raw_aio_readv,
.bdrv_aio_writev = raw_aio_writev,
.bdrv_aio_flush = raw_aio_flush,
.bdrv_discard = raw_discard,
.bdrv_is_inserted = raw_is_inserted,
.bdrv_eject = raw_eject,
.bdrv_set_locked = raw_set_locked,
.bdrv_ioctl = raw_ioctl,
.bdrv_aio_ioctl = raw_aio_ioctl,


@@ -7,50 +7,43 @@
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include <inttypes.h>
#include "qemu-common.h"
#include "qemu-error.h"
#include "block_int.h"
#include <rbd/librbd.h>
/*
* When specifying the image filename use:
*
* rbd:poolname/devicename[@snapshotname][:option1=value1[:option2=value2...]]
*
* poolname must be the name of an existing rados pool.
* poolname must be the name of an existing rados pool
*
* devicename is the name of the rbd image.
* devicename is the basename for all objects used to
* emulate the raw device.
*
* Each option given is used to configure rados, and may be any valid
* Ceph option, "id", or "conf".
* Each option given is used to configure rados, and may be
* any Ceph option, or "conf". The "conf" option specifies
* a Ceph configuration file to read.
*
* The "id" option indicates what user we should authenticate as to
* the Ceph cluster. If it is excluded we will use the Ceph default
* (normally 'admin').
* Metadata information (image size, ...) is stored in an
* object with the name "devicename.rbd".
*
* The "conf" option specifies a Ceph configuration file to read. If
* it is not specified, we will read from the default Ceph locations
* (e.g., /etc/ceph/ceph.conf). To avoid reading _any_ configuration
* file, specify conf=/dev/null.
* The raw device is split into 4MB sized objects by default.
* The sequencenumber is encoded in a 12 byte long hex-string,
* and is attached to the devicename, separated by a dot.
* e.g. "devicename.1234567890ab"
*
* Configuration values containing :, @, or = can be escaped with a
* leading "\".
*/
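For illustration of the syntax documented in the longer version of the comment above (pool, image and snapshot names here are made up), a filename selecting a snapshot, a client id and an explicit configuration file would look like

    rbd:mypool/myimage@snap1:id=qemu:conf=/etc/ceph/ceph.conf

while an open that must not read any configuration file at all would use

    rbd:mypool/myimage:conf=/dev/null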
/* rbd_aio_discard added in 0.1.2 */
#if LIBRBD_VERSION_CODE >= LIBRBD_VERSION(0, 1, 2)
#define LIBRBD_SUPPORTS_DISCARD
#else
#undef LIBRBD_SUPPORTS_DISCARD
#endif
#define OBJ_MAX_SIZE (1UL << OBJ_DEFAULT_OBJ_ORDER)
#define RBD_MAX_CONF_NAME_SIZE 128
@@ -60,19 +53,13 @@
#define RBD_MAX_SNAP_NAME_SIZE 128
#define RBD_MAX_SNAPS 100
typedef enum {
RBD_AIO_READ,
RBD_AIO_WRITE,
RBD_AIO_DISCARD
} RBDAIOCmd;
typedef struct RBDAIOCB {
BlockDriverAIOCB common;
QEMUBH *bh;
int ret;
QEMUIOVector *qiov;
char *bounce;
RBDAIOCmd cmd;
int write;
int64_t sector_num;
int error;
struct BDRVRBDState *s;
@@ -117,15 +104,8 @@ static int qemu_rbd_next_tok(char *dst, int dst_len,
*p = NULL;
if (delim != '\0') {
for (end = src; *end; ++end) {
if (*end == delim) {
break;
}
if (*end == '\\' && end[1] != '\0') {
end++;
}
}
if (*end == delim) {
end = strchr(src, delim);
if (end) {
*p = end + 1;
*end = '\0';
}
@@ -144,19 +124,6 @@ static int qemu_rbd_next_tok(char *dst, int dst_len,
return 0;
}
static void qemu_rbd_unescape(char *src)
{
char *p;
for (p = src; *src; ++src, ++p) {
if (*src == '\\' && src[1] != '\0') {
src++;
}
*p = *src;
}
*p = '\0';
}
static int qemu_rbd_parsename(const char *filename,
char *pool, int pool_len,
char *snap, int snap_len,
@@ -171,7 +138,7 @@ static int qemu_rbd_parsename(const char *filename,
return -EINVAL;
}
buf = g_strdup(start);
buf = qemu_strdup(start);
p = buf;
*snap = '\0';
*conf = '\0';
@@ -181,7 +148,6 @@ static int qemu_rbd_parsename(const char *filename,
ret = -EINVAL;
goto done;
}
qemu_rbd_unescape(pool);
if (strchr(p, '@')) {
ret = qemu_rbd_next_tok(name, name_len, p, '@', "object name", &p);
@@ -189,11 +155,9 @@ static int qemu_rbd_parsename(const char *filename,
goto done;
}
ret = qemu_rbd_next_tok(snap, snap_len, p, ':', "snap name", &p);
qemu_rbd_unescape(snap);
} else {
ret = qemu_rbd_next_tok(name, name_len, p, ':', "object name", &p);
}
qemu_rbd_unescape(name);
if (ret < 0 || !p) {
goto done;
}
@@ -201,38 +165,10 @@ static int qemu_rbd_parsename(const char *filename,
ret = qemu_rbd_next_tok(conf, conf_len, p, '\0', "configuration", &p);
done:
g_free(buf);
qemu_free(buf);
return ret;
}
static char *qemu_rbd_parse_clientname(const char *conf, char *clientname)
{
const char *p = conf;
while (*p) {
int len;
const char *end = strchr(p, ':');
if (end) {
len = end - p;
} else {
len = strlen(p);
}
if (strncmp(p, "id=", 3) == 0) {
len -= 3;
strncpy(clientname, p + 3, len);
clientname[len] = '\0';
return clientname;
}
if (end == NULL) {
break;
}
p = end + 1;
}
return NULL;
}
static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
{
char *p, *buf;
@@ -240,7 +176,7 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
char value[RBD_MAX_CONF_VAL_SIZE];
int ret = 0;
buf = g_strdup(conf);
buf = qemu_strdup(conf);
p = buf;
while (p) {
@@ -249,7 +185,6 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
if (ret < 0) {
break;
}
qemu_rbd_unescape(name);
if (!p) {
error_report("conf option %s has no value", name);
@@ -262,27 +197,24 @@ static int qemu_rbd_set_conf(rados_t cluster, const char *conf)
if (ret < 0) {
break;
}
qemu_rbd_unescape(value);
if (strcmp(name, "conf") == 0) {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_report("error reading conf file %s", value);
break;
}
} else if (strcmp(name, "id") == 0) {
/* ignore, this is parsed by qemu_rbd_parse_clientname() */
} else {
if (strcmp(name, "conf")) {
ret = rados_conf_set(cluster, name, value);
if (ret < 0) {
error_report("invalid conf option %s", name);
ret = -EINVAL;
break;
}
} else {
ret = rados_conf_read_file(cluster, value);
if (ret < 0) {
error_report("error reading conf file %s", value);
break;
}
}
}
g_free(buf);
qemu_free(buf);
return ret;
}
@@ -295,8 +227,6 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options)
char name[RBD_MAX_IMAGE_NAME_SIZE];
char snap_buf[RBD_MAX_SNAP_NAME_SIZE];
char conf[RBD_MAX_CONF_SIZE];
char clientname_buf[RBD_MAX_CONF_SIZE];
char *clientname;
rados_t cluster;
rados_ioctx_t io_ctx;
int ret;
@@ -329,15 +259,17 @@ static int qemu_rbd_create(const char *filename, QEMUOptionParameter *options)
options++;
}
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
if (rados_create(&cluster, clientname) < 0) {
if (rados_create(&cluster, NULL) < 0) {
error_report("error initializing");
return -EIO;
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(cluster, NULL);
if (rados_conf_read_file(cluster, NULL) < 0) {
error_report("error reading config file");
rados_shutdown(cluster);
return -EIO;
}
}
if (conf[0] != '\0' &&
@@ -384,8 +316,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
r = rcb->ret;
if (acb->cmd == RBD_AIO_WRITE ||
acb->cmd == RBD_AIO_DISCARD) {
if (acb->write) {
if (r < 0) {
acb->ret = r;
acb->error = 1;
@@ -410,7 +341,7 @@ static void qemu_rbd_complete_aio(RADOSCB *rcb)
acb->bh = qemu_bh_new(rbd_aio_bh_cb, acb);
qemu_bh_schedule(acb->bh);
done:
g_free(rcb);
qemu_free(rcb);
}
/*
@@ -427,14 +358,15 @@ static void qemu_rbd_aio_event_reader(void *opaque)
char *p = (char *)&s->event_rcb;
/* now read the rcb pointer that was sent from a non qemu thread */
ret = read(s->fds[RBD_FD_READ], p + s->event_reader_pos,
sizeof(s->event_rcb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_rcb)) {
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
s->qemu_aio_count--;
if ((ret = read(s->fds[RBD_FD_READ], p + s->event_reader_pos,
sizeof(s->event_rcb) - s->event_reader_pos)) > 0) {
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_rcb)) {
s->event_reader_pos = 0;
qemu_rbd_complete_aio(s->event_rcb);
s->qemu_aio_count--;
}
}
}
} while (ret < 0 && errno == EINTR);
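The comment at the top of this reader explains the hand-off: the librbd completion callback runs in a thread owned by librados, so instead of touching QEMU state directly it writes the RADOSCB pointer into a pipe, and this fd handler reads the pointer back in the QEMU thread and completes the request. A hedged sketch of the sending half of that pattern (names invented; only plain POSIX write() is assumed):

    /* Push a completion pointer through the pipe from the callback thread. */
    static int send_completion(int write_fd, void *rcb)
    {
        const char *p = (const char *)&rcb;
        size_t left = sizeof(rcb);

        while (left > 0) {
            ssize_t n = write(write_fd, p, left);
            if (n < 0 && errno == EINTR) {
                continue;           /* interrupted, retry */
            }
            if (n < 0) {
                return -errno;      /* hard error */
            }
            p += n;
            left -= n;
        }
        return 0;                   /* reader reassembles sizeof(void *) bytes */
    }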
@@ -453,8 +385,6 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
char pool[RBD_MAX_POOL_NAME_SIZE];
char snap_buf[RBD_MAX_SNAP_NAME_SIZE];
char conf[RBD_MAX_CONF_SIZE];
char clientname_buf[RBD_MAX_CONF_SIZE];
char *clientname;
int r;
if (qemu_rbd_parsename(filename, pool, sizeof(pool),
@@ -463,67 +393,55 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
conf, sizeof(conf)) < 0) {
return -EINVAL;
}
s->snap = NULL;
if (snap_buf[0] != '\0') {
s->snap = qemu_strdup(snap_buf);
}
clientname = qemu_rbd_parse_clientname(conf, clientname_buf);
r = rados_create(&s->cluster, clientname);
r = rados_create(&s->cluster, NULL);
if (r < 0) {
error_report("error initializing");
return r;
}
s->snap = NULL;
if (snap_buf[0] != '\0') {
s->snap = g_strdup(snap_buf);
}
/*
* Fallback to more conservative semantics if setting cache
* options fails. Ignore errors from setting rbd_cache because the
* only possible error is that the option does not exist, and
* librbd defaults to no caching. If write through caching cannot
* be set up, fall back to no caching.
*/
if (flags & BDRV_O_NOCACHE) {
rados_conf_set(s->cluster, "rbd_cache", "false");
} else {
rados_conf_set(s->cluster, "rbd_cache", "true");
if (!(flags & BDRV_O_CACHE_WB)) {
r = rados_conf_set(s->cluster, "rbd_cache_max_dirty", "0");
if (r < 0) {
rados_conf_set(s->cluster, "rbd_cache", "false");
}
}
}
if (strstr(conf, "conf=") == NULL) {
/* try default location, but ignore failure */
rados_conf_read_file(s->cluster, NULL);
r = rados_conf_read_file(s->cluster, NULL);
if (r < 0) {
error_report("error reading config file");
rados_shutdown(s->cluster);
return r;
}
}
if (conf[0] != '\0') {
r = qemu_rbd_set_conf(s->cluster, conf);
if (r < 0) {
error_report("error setting config options");
goto failed_shutdown;
rados_shutdown(s->cluster);
return r;
}
}
r = rados_connect(s->cluster);
if (r < 0) {
error_report("error connecting");
goto failed_shutdown;
rados_shutdown(s->cluster);
return r;
}
r = rados_ioctx_create(s->cluster, pool, &s->io_ctx);
if (r < 0) {
error_report("error opening pool %s", pool);
goto failed_shutdown;
rados_shutdown(s->cluster);
return r;
}
r = rbd_open(s->io_ctx, s->name, &s->image, s->snap);
if (r < 0) {
error_report("error reading header from %s", s->name);
goto failed_open;
rados_ioctx_destroy(s->io_ctx);
rados_shutdown(s->cluster);
return r;
}
bs->read_only = (s->snap != NULL);
@@ -537,18 +455,15 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename, int flags)
fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
NULL, qemu_rbd_aio_flush_cb, s);
NULL, qemu_rbd_aio_flush_cb, NULL, s);
return 0;
failed:
rbd_close(s->image);
failed_open:
rados_ioctx_destroy(s->io_ctx);
failed_shutdown:
rados_shutdown(s->cluster);
g_free(s->snap);
return r;
}
@@ -558,11 +473,12 @@ static void qemu_rbd_close(BlockDriverState *bs)
close(s->fds[0]);
close(s->fds[1]);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL , NULL, NULL, NULL,
NULL);
rbd_close(s->image);
rados_ioctx_destroy(s->io_ctx);
g_free(s->snap);
qemu_free(s->snap);
rados_shutdown(s->cluster);
}
@@ -628,7 +544,7 @@ static void rbd_finish_aiocb(rbd_completion_t c, RADOSCB *rcb)
ret = qemu_rbd_send_pipe(rcb->s, rcb);
if (ret < 0) {
error_report("failed writing to acb->s->fds");
g_free(rcb);
qemu_free(rcb);
}
}
@@ -638,8 +554,8 @@ static void rbd_aio_bh_cb(void *opaque)
{
RBDAIOCB *acb = opaque;
if (acb->cmd == RBD_AIO_READ) {
qemu_iovec_from_buf(acb->qiov, 0, acb->bounce, acb->qiov->size);
if (!acb->write) {
qemu_iovec_from_buffer(acb->qiov, acb->bounce, acb->qiov->size);
}
qemu_vfree(acb->bounce);
acb->common.cb(acb->common.opaque, (acb->ret > 0 ? 0 : acb->ret));
@@ -649,25 +565,12 @@ static void rbd_aio_bh_cb(void *opaque)
qemu_aio_release(acb);
}
static int rbd_aio_discard_wrapper(rbd_image_t image,
uint64_t off,
uint64_t len,
rbd_completion_t comp)
{
#ifdef LIBRBD_SUPPORTS_DISCARD
return rbd_aio_discard(image, off, len, comp);
#else
return -ENOTSUP;
#endif
}
static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque,
RBDAIOCmd cmd)
static BlockDriverAIOCB *rbd_aio_rw_vector(BlockDriverState *bs,
int64_t sector_num,
QEMUIOVector *qiov,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque, int write)
{
RBDAIOCB *acb;
RADOSCB *rcb;
@@ -679,21 +582,20 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
BDRVRBDState *s = bs->opaque;
acb = qemu_aio_get(&rbd_aio_pool, bs, cb, opaque);
acb->cmd = cmd;
acb->qiov = qiov;
if (cmd == RBD_AIO_DISCARD) {
acb->bounce = NULL;
} else {
acb->bounce = qemu_blockalign(bs, qiov->size);
if (!acb) {
return NULL;
}
acb->write = write;
acb->qiov = qiov;
acb->bounce = qemu_blockalign(bs, qiov->size);
acb->ret = 0;
acb->error = 0;
acb->s = s;
acb->cancelled = 0;
acb->bh = NULL;
if (cmd == RBD_AIO_WRITE) {
qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
if (write) {
qemu_iovec_to_buffer(acb->qiov, acb->bounce);
}
buf = acb->bounce;
@@ -703,7 +605,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
s->qemu_aio_count++; /* All the RADOSCB */
rcb = g_malloc(sizeof(RADOSCB));
rcb = qemu_malloc(sizeof(RADOSCB));
rcb->done = 0;
rcb->acb = acb;
rcb->buf = buf;
@@ -714,18 +616,10 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
goto failed;
}
switch (cmd) {
case RBD_AIO_WRITE:
if (write) {
r = rbd_aio_write(s->image, off, size, buf, c);
break;
case RBD_AIO_READ:
} else {
r = rbd_aio_read(s->image, off, size, buf, c);
break;
case RBD_AIO_DISCARD:
r = rbd_aio_discard_wrapper(s->image, off, size, c);
break;
default:
r = -EINVAL;
}
if (r < 0) {
@@ -735,7 +629,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
return &acb->common;
failed:
g_free(rcb);
qemu_free(rcb);
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
@@ -748,8 +642,7 @@ static BlockDriverAIOCB *qemu_rbd_aio_readv(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return rbd_start_aio(bs, sector_num, qiov, nb_sectors, cb, opaque,
RBD_AIO_READ);
return rbd_aio_rw_vector(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
}
static BlockDriverAIOCB *qemu_rbd_aio_writev(BlockDriverState *bs,
@@ -759,19 +652,7 @@ static BlockDriverAIOCB *qemu_rbd_aio_writev(BlockDriverState *bs,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return rbd_start_aio(bs, sector_num, qiov, nb_sectors, cb, opaque,
RBD_AIO_WRITE);
}
static int qemu_rbd_co_flush(BlockDriverState *bs)
{
#if LIBRBD_VERSION_CODE >= LIBRBD_VERSION(0, 1, 1)
/* rbd_flush added in 0.1.1 */
BDRVRBDState *s = bs->opaque;
return rbd_flush(s->image);
#else
return 0;
#endif
return rbd_aio_rw_vector(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
}
static int qemu_rbd_getinfo(BlockDriverState *bs, BlockDriverInfo *bdi)
@@ -848,26 +729,6 @@ static int qemu_rbd_snap_create(BlockDriverState *bs,
return 0;
}
static int qemu_rbd_snap_remove(BlockDriverState *bs,
const char *snapshot_name)
{
BDRVRBDState *s = bs->opaque;
int r;
r = rbd_snap_remove(s->image, snapshot_name);
return r;
}
static int qemu_rbd_snap_rollback(BlockDriverState *bs,
const char *snapshot_name)
{
BDRVRBDState *s = bs->opaque;
int r;
r = rbd_snap_rollback(s->image, snapshot_name);
return r;
}
static int qemu_rbd_snap_list(BlockDriverState *bs,
QEMUSnapshotInfo **psn_tab)
{
@@ -878,18 +739,18 @@ static int qemu_rbd_snap_list(BlockDriverState *bs,
int max_snaps = RBD_MAX_SNAPS;
do {
snaps = g_malloc(sizeof(*snaps) * max_snaps);
snaps = qemu_malloc(sizeof(*snaps) * max_snaps);
snap_count = rbd_snap_list(s->image, snaps, &max_snaps);
if (snap_count < 0) {
g_free(snaps);
qemu_free(snaps);
}
} while (snap_count == -ERANGE);
if (snap_count <= 0) {
goto done;
return snap_count;
}
sn_tab = g_malloc0(snap_count * sizeof(QEMUSnapshotInfo));
sn_tab = qemu_mallocz(snap_count * sizeof(QEMUSnapshotInfo));
for (i = 0; i < snap_count; i++) {
const char *snap_name = snaps[i].name;
@@ -905,23 +766,10 @@ static int qemu_rbd_snap_list(BlockDriverState *bs,
}
rbd_snap_list_end(snaps);
done:
*psn_tab = sn_tab;
return snap_count;
}
#ifdef LIBRBD_SUPPORTS_DISCARD
static BlockDriverAIOCB* qemu_rbd_aio_discard(BlockDriverState *bs,
int64_t sector_num,
int nb_sectors,
BlockDriverCompletionFunc *cb,
void *opaque)
{
return rbd_start_aio(bs, sector_num, NULL, nb_sectors, cb, opaque,
RBD_AIO_DISCARD);
}
#endif
static QEMUOptionParameter qemu_rbd_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -948,18 +796,11 @@ static BlockDriver bdrv_rbd = {
.bdrv_truncate = qemu_rbd_truncate,
.protocol_name = "rbd",
.bdrv_aio_readv = qemu_rbd_aio_readv,
.bdrv_aio_writev = qemu_rbd_aio_writev,
.bdrv_co_flush_to_disk = qemu_rbd_co_flush,
.bdrv_aio_readv = qemu_rbd_aio_readv,
.bdrv_aio_writev = qemu_rbd_aio_writev,
#ifdef LIBRBD_SUPPORTS_DISCARD
.bdrv_aio_discard = qemu_rbd_aio_discard,
#endif
.bdrv_snapshot_create = qemu_rbd_snap_create,
.bdrv_snapshot_delete = qemu_rbd_snap_remove,
.bdrv_snapshot_list = qemu_rbd_snap_list,
.bdrv_snapshot_goto = qemu_rbd_snap_rollback,
.bdrv_snapshot_create = qemu_rbd_snap_create,
.bdrv_snapshot_list = qemu_rbd_snap_list,
};
static void bdrv_rbd_init(void)

File diff suppressed because it is too large


@@ -1,209 +0,0 @@
/*
* Image streaming
*
* Copyright IBM, Corp. 2011
*
* Authors:
* Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU LGPL, version 2 or later.
* See the COPYING.LIB file in the top-level directory.
*
*/
#include "trace.h"
#include "block_int.h"
#include "qemu/ratelimit.h"
enum {
/*
* Size of data buffer for populating the image file. This should be large
* enough to process multiple clusters in a single call, so that populating
* contiguous regions of the image is efficient.
*/
STREAM_BUFFER_SIZE = 512 * 1024, /* in bytes */
};
#define SLICE_TIME 100000000ULL /* ns */
typedef struct StreamBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *base;
char backing_file_id[1024];
} StreamBlockJob;
static int coroutine_fn stream_populate(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
void *buf)
{
struct iovec iov = {
.iov_base = buf,
.iov_len = nb_sectors * BDRV_SECTOR_SIZE,
};
QEMUIOVector qiov;
qemu_iovec_init_external(&qiov, &iov, 1);
/* Copy-on-read the unallocated clusters */
return bdrv_co_copy_on_readv(bs, sector_num, nb_sectors, &qiov);
}
static void close_unused_images(BlockDriverState *top, BlockDriverState *base,
const char *base_id)
{
BlockDriverState *intermediate;
intermediate = top->backing_hd;
while (intermediate) {
BlockDriverState *unused;
/* reached base */
if (intermediate == base) {
break;
}
unused = intermediate;
intermediate = intermediate->backing_hd;
unused->backing_hd = NULL;
bdrv_delete(unused);
}
top->backing_hd = base;
}
static void coroutine_fn stream_run(void *opaque)
{
StreamBlockJob *s = opaque;
BlockDriverState *bs = s->common.bs;
BlockDriverState *base = s->base;
int64_t sector_num, end;
int ret = 0;
int n = 0;
void *buf;
s->common.len = bdrv_getlength(bs);
if (s->common.len < 0) {
block_job_complete(&s->common, s->common.len);
return;
}
end = s->common.len >> BDRV_SECTOR_BITS;
buf = qemu_blockalign(bs, STREAM_BUFFER_SIZE);
/* Turn on copy-on-read for the whole block device so that guest read
* requests help us make progress. Only do this when copying the entire
* backing chain since the copy-on-read operation does not take base into
* account.
*/
if (!base) {
bdrv_enable_copy_on_read(bs);
}
for (sector_num = 0; sector_num < end; sector_num += n) {
uint64_t delay_ns = 0;
bool copy;
wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that qemu_aio_flush() returns.
*/
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
ret = bdrv_co_is_allocated(bs, sector_num,
STREAM_BUFFER_SIZE / BDRV_SECTOR_SIZE, &n);
if (ret == 1) {
/* Allocated in the top, no need to copy. */
copy = false;
} else {
/* Copy if allocated in the intermediate images. Limit to the
* known-unallocated area [sector_num, sector_num+n). */
ret = bdrv_co_is_allocated_above(bs->backing_hd, base,
sector_num, n, &n);
/* Finish early if end of backing file has been reached */
if (ret == 0 && n == 0) {
n = end - sector_num;
}
copy = (ret == 1);
}
trace_stream_one_iteration(s, sector_num, n, ret);
if (ret >= 0 && copy) {
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
if (delay_ns > 0) {
goto wait;
}
}
ret = stream_populate(bs, sector_num, n, buf);
}
if (ret < 0) {
break;
}
ret = 0;
/* Publish progress */
s->common.offset += n * BDRV_SECTOR_SIZE;
}
if (!base) {
bdrv_disable_copy_on_read(bs);
}
if (!block_job_is_cancelled(&s->common) && sector_num == end && ret == 0) {
const char *base_id = NULL, *base_fmt = NULL;
if (base) {
base_id = s->backing_file_id;
if (base->drv) {
base_fmt = base->drv->format_name;
}
}
ret = bdrv_change_backing_file(bs, base_id, base_fmt);
close_unused_images(bs, base, base_id);
}
qemu_vfree(buf);
block_job_complete(&s->common, ret);
}
static void stream_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
StreamBlockJob *s = container_of(job, StreamBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static BlockJobType stream_job_type = {
.instance_size = sizeof(StreamBlockJob),
.job_type = "stream",
.set_speed = stream_set_speed,
};
void stream_start(BlockDriverState *bs, BlockDriverState *base,
const char *base_id, int64_t speed,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
StreamBlockJob *s;
s = block_job_create(&stream_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
s->base = base;
if (base_id) {
pstrcpy(s->backing_file_id, sizeof(s->backing_file_id), base_id);
}
s->common.co = qemu_coroutine_create(stream_run);
trace_stream_start(bs, base, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}


@@ -1,7 +1,7 @@
/*
* Block driver for the Virtual Disk Image (VDI) format
*
* Copyright (c) 2009, 2012 Stefan Weil
* Copyright (c) 2009 Stefan Weil
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -52,7 +52,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
#if defined(CONFIG_UUID)
#include <uuid/uuid.h>
@@ -115,13 +114,8 @@ void uuid_unparse(const uuid_t uu, char *out);
*/
#define VDI_TEXT "<<< QEMU VM Virtual Disk Image >>>\n"
/* A never-allocated block; semantically arbitrary content. */
#define VDI_UNALLOCATED 0xffffffffU
/* A discarded (no longer allocated) block; semantically zero-filled. */
#define VDI_DISCARDED 0xfffffffeU
#define VDI_IS_ALLOCATED(X) ((X) < VDI_DISCARDED)
/* Unallocated blocks use this index (no need to convert endianness). */
#define VDI_UNALLOCATED UINT32_MAX
#if !defined(CONFIG_UUID)
void uuid_generate(uuid_t out)
@@ -143,6 +137,29 @@ void uuid_unparse(const uuid_t uu, char *out)
}
#endif
typedef struct {
BlockDriverAIOCB common;
int64_t sector_num;
QEMUIOVector *qiov;
uint8_t *buf;
/* Total number of sectors. */
int nb_sectors;
/* Number of sectors for current AIO. */
int n_sectors;
/* New allocated block map entry. */
uint32_t bmap_first;
uint32_t bmap_last;
/* Buffer for new allocated block. */
void *block_buffer;
void *orig_buf;
bool is_write;
int header_modified;
BlockDriverAIOCB *hd_aiocb;
struct iovec hd_iov;
QEMUIOVector hd_qiov;
QEMUBH *bh;
} VdiAIOCB;
typedef struct {
char text[0x40];
uint32_t signature;
@@ -181,8 +198,6 @@ typedef struct {
uint32_t bmap_sector;
/* VDI header (converted to host endianness). */
VdiHeader header;
Error *migration_blocker;
} BDRVVdiState;
/* Change UUID from little endian (IPRT = VirtualBox format) to big endian
@@ -277,8 +292,7 @@ static void vdi_header_print(VdiHeader *header)
}
#endif
static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res)
{
/* TODO: additional checks possible. */
BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
@@ -287,20 +301,16 @@ static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res,
uint32_t *bmap;
logout("\n");
if (fix) {
return -ENOTSUP;
}
bmap = g_malloc(s->header.blocks_in_image * sizeof(uint32_t));
bmap = qemu_malloc(s->header.blocks_in_image * sizeof(uint32_t));
memset(bmap, 0xff, s->header.blocks_in_image * sizeof(uint32_t));
/* Check block map and value of blocks_allocated. */
for (block = 0; block < s->header.blocks_in_image; block++) {
uint32_t bmap_entry = le32_to_cpu(s->bmap[block]);
if (VDI_IS_ALLOCATED(bmap_entry)) {
if (bmap_entry != VDI_UNALLOCATED) {
if (bmap_entry < s->header.blocks_in_image) {
blocks_allocated++;
if (!VDI_IS_ALLOCATED(bmap[bmap_entry])) {
if (bmap[bmap_entry] == VDI_UNALLOCATED) {
bmap[bmap_entry] = bmap_entry;
} else {
fprintf(stderr, "ERROR: block index %" PRIu32
@@ -321,7 +331,7 @@ static int vdi_check(BlockDriverState *bs, BdrvCheckResult *res,
res->corruptions++;
}
g_free(bmap);
qemu_free(bmap);
return 0;
}
@@ -433,29 +443,23 @@ static int vdi_open(BlockDriverState *bs, int flags)
bmap_size = header.blocks_in_image * sizeof(uint32_t);
bmap_size = (bmap_size + SECTOR_SIZE - 1) / SECTOR_SIZE;
if (bmap_size > 0) {
s->bmap = g_malloc(bmap_size * SECTOR_SIZE);
s->bmap = qemu_malloc(bmap_size * SECTOR_SIZE);
}
if (bdrv_read(bs->file, s->bmap_sector, (uint8_t *)s->bmap, bmap_size) < 0) {
goto fail_free_bmap;
}
/* Disable migration when vdi images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vdi", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail_free_bmap:
g_free(s->bmap);
qemu_free(s->bmap);
fail:
return -1;
}
static int coroutine_fn vdi_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
static int vdi_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
/* TODO: Check for too large sector_num (in bdrv_is_allocated or here). */
BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
@@ -468,153 +472,361 @@ static int coroutine_fn vdi_co_is_allocated(BlockDriverState *bs,
n_sectors = nb_sectors;
}
*pnum = n_sectors;
return VDI_IS_ALLOCATED(bmap_entry);
return bmap_entry != VDI_UNALLOCATED;
}
static int vdi_co_read(BlockDriverState *bs,
int64_t sector_num, uint8_t *buf, int nb_sectors)
static void vdi_aio_cancel(BlockDriverAIOCB *blockacb)
{
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
int ret = 0;
/* TODO: This code is untested. How can I get it executed? */
VdiAIOCB *acb = container_of(blockacb, VdiAIOCB, common);
logout("\n");
while (ret >= 0 && nb_sectors > 0) {
block_index = sector_num / s->block_sectors;
sector_in_block = sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > nb_sectors) {
n_sectors = nb_sectors;
}
logout("will read %u sectors starting at sector %" PRIu64 "\n",
n_sectors, sector_num);
/* prepare next AIO request */
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (!VDI_IS_ALLOCATED(bmap_entry)) {
/* Block not allocated, return zeros, no need to wait. */
memset(buf, 0, n_sectors * SECTOR_SIZE);
ret = 0;
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
ret = bdrv_read(bs->file, offset, buf, n_sectors);
}
logout("%u sectors read\n", n_sectors);
nb_sectors -= n_sectors;
sector_num += n_sectors;
buf += n_sectors * SECTOR_SIZE;
if (acb->hd_aiocb) {
bdrv_aio_cancel(acb->hd_aiocb);
}
return ret;
qemu_aio_release(acb);
}
static int vdi_co_write(BlockDriverState *bs,
int64_t sector_num, const uint8_t *buf, int nb_sectors)
static AIOPool vdi_aio_pool = {
.aiocb_size = sizeof(VdiAIOCB),
.cancel = vdi_aio_cancel,
};
static VdiAIOCB *vdi_aio_setup(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int is_write)
{
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
uint32_t bmap_first = VDI_UNALLOCATED;
uint32_t bmap_last = VDI_UNALLOCATED;
uint8_t *block = NULL;
int ret = 0;
VdiAIOCB *acb;
logout("\n");
logout("%p, %" PRId64 ", %p, %d, %p, %p, %d\n",
bs, sector_num, qiov, nb_sectors, cb, opaque, is_write);
while (ret >= 0 && nb_sectors > 0) {
block_index = sector_num / s->block_sectors;
sector_in_block = sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > nb_sectors) {
n_sectors = nb_sectors;
}
acb = qemu_aio_get(&vdi_aio_pool, bs, cb, opaque);
if (acb) {
acb->hd_aiocb = NULL;
acb->sector_num = sector_num;
acb->qiov = qiov;
acb->is_write = is_write;
logout("will write %u sectors starting at sector %" PRIu64 "\n",
n_sectors, sector_num);
/* prepare next AIO request */
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (!VDI_IS_ALLOCATED(bmap_entry)) {
/* Allocate new block and write to it. */
uint64_t offset;
bmap_entry = s->header.blocks_allocated;
s->bmap[block_index] = cpu_to_le32(bmap_entry);
s->header.blocks_allocated++;
offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors;
if (block == NULL) {
block = g_malloc(s->block_size);
bmap_first = block_index;
if (qiov->niov > 1) {
acb->buf = qemu_blockalign(bs, qiov->size);
acb->orig_buf = acb->buf;
if (is_write) {
qemu_iovec_to_buffer(qiov, acb->buf);
}
bmap_last = block_index;
/* Copy data to be written to new block and zero unused parts. */
memset(block, 0, sector_in_block * SECTOR_SIZE);
memcpy(block + sector_in_block * SECTOR_SIZE,
buf, n_sectors * SECTOR_SIZE);
memset(block + (sector_in_block + n_sectors) * SECTOR_SIZE, 0,
(s->block_sectors - n_sectors - sector_in_block) * SECTOR_SIZE);
ret = bdrv_write(bs->file, offset, block, s->block_sectors);
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
ret = bdrv_write(bs->file, offset, buf, n_sectors);
acb->buf = (uint8_t *)qiov->iov->iov_base;
}
acb->nb_sectors = nb_sectors;
acb->n_sectors = 0;
acb->bmap_first = VDI_UNALLOCATED;
acb->bmap_last = VDI_UNALLOCATED;
acb->block_buffer = NULL;
acb->header_modified = 0;
}
return acb;
}
nb_sectors -= n_sectors;
sector_num += n_sectors;
buf += n_sectors * SECTOR_SIZE;
static int vdi_schedule_bh(QEMUBHFunc *cb, VdiAIOCB *acb)
{
logout("\n");
logout("%u sectors written\n", n_sectors);
if (acb->bh) {
return -EIO;
}
logout("finished data write\n");
acb->bh = qemu_bh_new(cb, acb);
if (!acb->bh) {
return -EIO;
}
qemu_bh_schedule(acb->bh);
return 0;
}
static void vdi_aio_read_cb(void *opaque, int ret);
static void vdi_aio_write_cb(void *opaque, int ret);
static void vdi_aio_rw_bh(void *opaque)
{
VdiAIOCB *acb = opaque;
logout("\n");
qemu_bh_delete(acb->bh);
acb->bh = NULL;
if (acb->is_write) {
vdi_aio_write_cb(opaque, 0);
} else {
vdi_aio_read_cb(opaque, 0);
}
}
static void vdi_aio_read_cb(void *opaque, int ret)
{
VdiAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
logout("%u sectors read\n", acb->n_sectors);
acb->hd_aiocb = NULL;
if (ret < 0) {
return ret;
goto done;
}
if (block) {
/* One or more new blocks were allocated. */
VdiHeader *header = (VdiHeader *) block;
uint8_t *base;
uint64_t offset;
acb->nb_sectors -= acb->n_sectors;
logout("now writing modified header\n");
assert(VDI_IS_ALLOCATED(bmap_first));
*header = s->header;
vdi_header_to_le(header);
ret = bdrv_write(bs->file, 0, block, 1);
g_free(block);
block = NULL;
if (acb->nb_sectors == 0) {
/* request completed */
ret = 0;
goto done;
}
acb->sector_num += acb->n_sectors;
acb->buf += acb->n_sectors * SECTOR_SIZE;
block_index = acb->sector_num / s->block_sectors;
sector_in_block = acb->sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > acb->nb_sectors) {
n_sectors = acb->nb_sectors;
}
logout("will read %u sectors starting at sector %" PRIu64 "\n",
n_sectors, acb->sector_num);
/* prepare next AIO request */
acb->n_sectors = n_sectors;
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (bmap_entry == VDI_UNALLOCATED) {
/* Block not allocated, return zeros, no need to wait. */
memset(acb->buf, 0, n_sectors * SECTOR_SIZE);
ret = vdi_schedule_bh(vdi_aio_rw_bh, acb);
if (ret < 0) {
return ret;
goto done;
}
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = n_sectors * SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_readv(bs->file, offset, &acb->hd_qiov,
n_sectors, vdi_aio_read_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
}
return;
done:
if (acb->qiov->niov > 1) {
qemu_iovec_from_buffer(acb->qiov, acb->orig_buf, acb->qiov->size);
qemu_vfree(acb->orig_buf);
}
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
logout("now writing modified block map entry %u...%u\n",
bmap_first, bmap_last);
/* Write modified sectors from block map. */
bmap_first /= (SECTOR_SIZE / sizeof(uint32_t));
bmap_last /= (SECTOR_SIZE / sizeof(uint32_t));
n_sectors = bmap_last - bmap_first + 1;
offset = s->bmap_sector + bmap_first;
base = ((uint8_t *)&s->bmap[0]) + bmap_first * SECTOR_SIZE;
logout("will write %u block map sectors starting from entry %u\n",
n_sectors, bmap_first);
ret = bdrv_write(bs->file, offset, base, n_sectors);
static BlockDriverAIOCB *vdi_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
VdiAIOCB *acb;
int ret;
logout("\n");
acb = vdi_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
if (!acb) {
return NULL;
}
return ret;
ret = vdi_schedule_bh(vdi_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
return &acb->common;
}
static void vdi_aio_write_cb(void *opaque, int ret)
{
VdiAIOCB *acb = opaque;
BlockDriverState *bs = acb->common.bs;
BDRVVdiState *s = bs->opaque;
uint32_t bmap_entry;
uint32_t block_index;
uint32_t sector_in_block;
uint32_t n_sectors;
acb->hd_aiocb = NULL;
if (ret < 0) {
goto done;
}
acb->nb_sectors -= acb->n_sectors;
acb->sector_num += acb->n_sectors;
acb->buf += acb->n_sectors * SECTOR_SIZE;
if (acb->nb_sectors == 0) {
logout("finished data write\n");
acb->n_sectors = 0;
if (acb->header_modified) {
VdiHeader *header = acb->block_buffer;
logout("now writing modified header\n");
assert(acb->bmap_first != VDI_UNALLOCATED);
*header = s->header;
vdi_header_to_le(header);
acb->header_modified = 0;
acb->hd_iov.iov_base = acb->block_buffer;
acb->hd_iov.iov_len = SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file, 0, &acb->hd_qiov, 1,
vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
return;
} else if (acb->bmap_first != VDI_UNALLOCATED) {
/* One or more new blocks were allocated. */
uint64_t offset;
uint32_t bmap_first;
uint32_t bmap_last;
qemu_free(acb->block_buffer);
acb->block_buffer = NULL;
bmap_first = acb->bmap_first;
bmap_last = acb->bmap_last;
logout("now writing modified block map entry %u...%u\n",
bmap_first, bmap_last);
/* Write modified sectors from block map. */
bmap_first /= (SECTOR_SIZE / sizeof(uint32_t));
bmap_last /= (SECTOR_SIZE / sizeof(uint32_t));
n_sectors = bmap_last - bmap_first + 1;
offset = s->bmap_sector + bmap_first;
acb->bmap_first = VDI_UNALLOCATED;
acb->hd_iov.iov_base = (void *)((uint8_t *)&s->bmap[0] +
bmap_first * SECTOR_SIZE);
acb->hd_iov.iov_len = n_sectors * SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
logout("will write %u block map sectors starting from entry %u\n",
n_sectors, bmap_first);
acb->hd_aiocb = bdrv_aio_writev(bs->file, offset, &acb->hd_qiov,
n_sectors, vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
return;
}
ret = 0;
goto done;
}
logout("%u sectors written\n", acb->n_sectors);
block_index = acb->sector_num / s->block_sectors;
sector_in_block = acb->sector_num % s->block_sectors;
n_sectors = s->block_sectors - sector_in_block;
if (n_sectors > acb->nb_sectors) {
n_sectors = acb->nb_sectors;
}
logout("will write %u sectors starting at sector %" PRIu64 "\n",
n_sectors, acb->sector_num);
/* prepare next AIO request */
acb->n_sectors = n_sectors;
bmap_entry = le32_to_cpu(s->bmap[block_index]);
if (bmap_entry == VDI_UNALLOCATED) {
/* Allocate new block and write to it. */
uint64_t offset;
uint8_t *block;
bmap_entry = s->header.blocks_allocated;
s->bmap[block_index] = cpu_to_le32(bmap_entry);
s->header.blocks_allocated++;
offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors;
block = acb->block_buffer;
if (block == NULL) {
block = qemu_mallocz(s->block_size);
acb->block_buffer = block;
acb->bmap_first = block_index;
assert(!acb->header_modified);
acb->header_modified = 1;
}
acb->bmap_last = block_index;
memcpy(block + sector_in_block * SECTOR_SIZE,
acb->buf, n_sectors * SECTOR_SIZE);
acb->hd_iov.iov_base = (void *)block;
acb->hd_iov.iov_len = s->block_size;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file, offset,
&acb->hd_qiov, s->block_sectors,
vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
} else {
uint64_t offset = s->header.offset_data / SECTOR_SIZE +
(uint64_t)bmap_entry * s->block_sectors +
sector_in_block;
acb->hd_iov.iov_base = (void *)acb->buf;
acb->hd_iov.iov_len = n_sectors * SECTOR_SIZE;
qemu_iovec_init_external(&acb->hd_qiov, &acb->hd_iov, 1);
acb->hd_aiocb = bdrv_aio_writev(bs->file, offset, &acb->hd_qiov,
n_sectors, vdi_aio_write_cb, acb);
if (acb->hd_aiocb == NULL) {
ret = -EIO;
goto done;
}
}
return;
done:
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
acb->common.cb(acb->common.opaque, ret);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *vdi_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
VdiAIOCB *acb;
int ret;
logout("\n");
acb = vdi_aio_setup(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
if (!acb) {
return NULL;
}
ret = vdi_schedule_bh(vdi_aio_rw_bh, acb);
if (ret < 0) {
if (acb->qiov->niov > 1) {
qemu_vfree(acb->orig_buf);
}
qemu_aio_release(acb);
return NULL;
}
return &acb->common;
}
static int vdi_create(const char *filename, QEMUOptionParameter *options)
@@ -628,6 +840,7 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
VdiHeader header;
size_t i;
size_t bmap_size;
uint32_t *bmap;
logout("\n");
@@ -652,9 +865,8 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
options++;
}
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
if (fd < 0) {
return -errno;
}
@@ -692,21 +904,21 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
result = -errno;
}
bmap = NULL;
if (bmap_size > 0) {
uint32_t *bmap = g_malloc0(bmap_size);
for (i = 0; i < blocks; i++) {
if (image_type == VDI_TYPE_STATIC) {
bmap[i] = i;
} else {
bmap[i] = VDI_UNALLOCATED;
}
}
if (write(fd, bmap, bmap_size) < 0) {
result = -errno;
}
g_free(bmap);
bmap = (uint32_t *)qemu_mallocz(bmap_size);
}
for (i = 0; i < blocks; i++) {
if (image_type == VDI_TYPE_STATIC) {
bmap[i] = i;
} else {
bmap[i] = VDI_UNALLOCATED;
}
}
if (write(fd, bmap, bmap_size) < 0) {
result = -errno;
}
qemu_free(bmap);
if (image_type == VDI_TYPE_STATIC) {
if (ftruncate(fd, sizeof(header) + bmap_size + blocks * block_size)) {
result = -errno;
@@ -722,14 +934,15 @@ static int vdi_create(const char *filename, QEMUOptionParameter *options)
static void vdi_close(BlockDriverState *bs)
{
BDRVVdiState *s = bs->opaque;
g_free(s->bmap);
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
static int vdi_flush(BlockDriverState *bs)
{
logout("\n");
return bdrv_flush(bs->file);
}
static QEMUOptionParameter vdi_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
@@ -762,12 +975,13 @@ static BlockDriver bdrv_vdi = {
.bdrv_open = vdi_open,
.bdrv_close = vdi_close,
.bdrv_create = vdi_create,
.bdrv_co_is_allocated = vdi_co_is_allocated,
.bdrv_flush = vdi_flush,
.bdrv_is_allocated = vdi_is_allocated,
.bdrv_make_empty = vdi_make_empty,
.bdrv_read = vdi_co_read,
.bdrv_aio_readv = vdi_aio_readv,
#if defined(CONFIG_VDI_WRITE)
.bdrv_write = vdi_co_write,
.bdrv_aio_writev = vdi_aio_writev,
#endif
.bdrv_get_info = vdi_get_info,


@@ -26,16 +26,9 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
#include <zlib.h>
#define VMDK3_MAGIC (('C' << 24) | ('O' << 16) | ('W' << 8) | 'D')
#define VMDK4_MAGIC (('K' << 24) | ('D' << 16) | ('M' << 8) | 'V')
#define VMDK4_COMPRESSION_DEFLATE 1
#define VMDK4_FLAG_RGD (1 << 1)
#define VMDK4_FLAG_COMPRESS (1 << 16)
#define VMDK4_FLAG_MARKER (1 << 17)
#define VMDK4_GD_AT_END 0xffffffffffffffffULL
typedef struct {
uint32_t version;
@@ -63,16 +56,13 @@ typedef struct {
int64_t grain_offset;
char filler[1];
char check_bytes[4];
uint16_t compressAlgorithm;
} QEMU_PACKED VMDK4Header;
} __attribute__((packed)) VMDK4Header;
#define L2_CACHE_SIZE 16
typedef struct VmdkExtent {
BlockDriverState *file;
bool flat;
bool compressed;
bool has_marker;
int64_t sectors;
int64_t end_sector;
int64_t flat_start_offset;
@@ -92,14 +82,12 @@ typedef struct VmdkExtent {
} VmdkExtent;
typedef struct BDRVVmdkState {
CoMutex lock;
int desc_offset;
bool cid_updated;
uint32_t parent_cid;
int num_extents;
/* Extent array with num_extents entries, ascend ordered by address */
VmdkExtent *extents;
Error *migration_blocker;
} BDRVVmdkState;
typedef struct VmdkMetaData {
@@ -110,19 +98,6 @@ typedef struct VmdkMetaData {
int valid;
} VmdkMetaData;
typedef struct VmdkGrainMarker {
uint64_t lba;
uint32_t size;
uint8_t data[0];
} VmdkGrainMarker;
enum {
MARKER_END_OF_STREAM = 0,
MARKER_GRAIN_TABLE = 1,
MARKER_GRAIN_DIRECTORY = 2,
MARKER_FOOTER = 3,
};
static int vmdk_probe(const uint8_t *buf, int buf_size, const char *filename)
{
uint32_t magic;
@@ -190,42 +165,24 @@ static void vmdk_free_extents(BlockDriverState *bs)
{
int i;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *e;
for (i = 0; i < s->num_extents; i++) {
e = &s->extents[i];
g_free(e->l1_table);
g_free(e->l2_cache);
g_free(e->l1_backup_table);
if (e->file != bs->file) {
bdrv_delete(e->file);
}
qemu_free(s->extents[i].l1_table);
qemu_free(s->extents[i].l2_cache);
qemu_free(s->extents[i].l1_backup_table);
}
g_free(s->extents);
}
static void vmdk_free_last_extent(BlockDriverState *bs)
{
BDRVVmdkState *s = bs->opaque;
if (s->num_extents == 0) {
return;
}
s->num_extents--;
s->extents = g_realloc(s->extents, s->num_extents * sizeof(VmdkExtent));
qemu_free(s->extents);
}
static uint32_t vmdk_read_cid(BlockDriverState *bs, int parent)
{
char desc[DESC_SIZE];
uint32_t cid = 0xffffffff;
uint32_t cid;
const char *p_name, *cid_str;
size_t cid_str_size;
BDRVVmdkState *s = bs->opaque;
int ret;
ret = bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
if (bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE) != DESC_SIZE) {
return 0;
}
@@ -237,7 +194,6 @@ static uint32_t vmdk_read_cid(BlockDriverState *bs, int parent)
cid_str_size = sizeof("CID");
}
desc[DESC_SIZE - 1] = '\0';
p_name = strstr(desc, cid_str);
if (p_name != NULL) {
p_name += cid_str_size;
@@ -252,19 +208,13 @@ static int vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
char desc[DESC_SIZE], tmp_desc[DESC_SIZE];
char *p_name, *tmp_str;
BDRVVmdkState *s = bs->opaque;
int ret;
ret = bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
return ret;
memset(desc, 0, sizeof(desc));
if (bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE) != DESC_SIZE) {
return -EIO;
}
desc[DESC_SIZE - 1] = '\0';
tmp_str = strstr(desc, "parentCID");
if (tmp_str == NULL) {
return -EINVAL;
}
pstrcpy(tmp_desc, sizeof(tmp_desc), tmp_str);
p_name = strstr(desc, "CID");
if (p_name != NULL) {
@@ -273,11 +223,9 @@ static int vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
pstrcat(desc, sizeof(desc), tmp_desc);
}
ret = bdrv_pwrite_sync(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
return ret;
if (bdrv_pwrite_sync(bs->file, s->desc_offset, desc, DESC_SIZE) < 0) {
return -EIO;
}
return 0;
}
@@ -305,12 +253,10 @@ static int vmdk_parent_open(BlockDriverState *bs)
char *p_name;
char desc[DESC_SIZE + 1];
BDRVVmdkState *s = bs->opaque;
int ret;
desc[DESC_SIZE] = '\0';
ret = bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, s->desc_offset, desc, DESC_SIZE) != DESC_SIZE) {
return -1;
}
p_name = strstr(desc, "parentFileNameHint");
@@ -320,10 +266,10 @@ static int vmdk_parent_open(BlockDriverState *bs)
p_name += sizeof("parentFileNameHint") + 1;
end_name = strchr(p_name, '\"');
if (end_name == NULL) {
return -EINVAL;
return -1;
}
if ((end_name - p_name) > sizeof(bs->backing_file) - 1) {
return -EINVAL;
return -1;
}
pstrcpy(bs->backing_file, end_name - p_name + 1, p_name);
@@ -343,7 +289,7 @@ static VmdkExtent *vmdk_add_extent(BlockDriverState *bs,
VmdkExtent *extent;
BDRVVmdkState *s = bs->opaque;
s->extents = g_realloc(s->extents,
s->extents = qemu_realloc(s->extents,
(s->num_extents + 1) * sizeof(VmdkExtent));
extent = &s->extents[s->num_extents];
s->num_extents++;
@@ -375,7 +321,7 @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent)
/* read the L1 table */
l1_size = extent->l1_size * sizeof(uint32_t);
extent->l1_table = g_malloc(l1_size);
extent->l1_table = qemu_malloc(l1_size);
ret = bdrv_pread(extent->file,
extent->l1_table_offset,
extent->l1_table,
@@ -388,7 +334,7 @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent)
}
if (extent->l1_backup_table_offset) {
extent->l1_backup_table = g_malloc(l1_size);
extent->l1_backup_table = qemu_malloc(l1_size);
ret = bdrv_pread(extent->file,
extent->l1_backup_table_offset,
extent->l1_backup_table,
@@ -402,27 +348,27 @@ static int vmdk_init_tables(BlockDriverState *bs, VmdkExtent *extent)
}
extent->l2_cache =
g_malloc(extent->l2_size * L2_CACHE_SIZE * sizeof(uint32_t));
qemu_malloc(extent->l2_size * L2_CACHE_SIZE * sizeof(uint32_t));
return 0;
fail_l1b:
g_free(extent->l1_backup_table);
qemu_free(extent->l1_backup_table);
fail_l1:
g_free(extent->l1_table);
qemu_free(extent->l1_table);
return ret;
}
static int vmdk_open_vmdk3(BlockDriverState *bs,
BlockDriverState *file,
int flags)
static int vmdk_open_vmdk3(BlockDriverState *bs, int flags)
{
int ret;
uint32_t magic;
VMDK3Header header;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent;
ret = bdrv_pread(file, sizeof(magic), &header, sizeof(header));
s->desc_offset = 0x200;
ret = bdrv_pread(bs->file, sizeof(magic), &header, sizeof(header));
if (ret < 0) {
return ret;
goto fail;
}
extent = vmdk_add_extent(bs,
bs->file, false,
@@ -432,106 +378,58 @@ static int vmdk_open_vmdk3(BlockDriverState *bs,
le32_to_cpu(header.granularity));
ret = vmdk_init_tables(bs, extent);
if (ret) {
/* free extent allocated by vmdk_add_extent */
vmdk_free_last_extent(bs);
/* vmdk_init_tables cleans up on fail, so only free allocation of
* vmdk_add_extent here. */
goto fail;
}
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
int64_t desc_offset);
static int vmdk_open_vmdk4(BlockDriverState *bs,
BlockDriverState *file,
int flags)
static int vmdk_open_vmdk4(BlockDriverState *bs, int flags)
{
int ret;
uint32_t magic;
uint32_t l1_size, l1_entry_sectors;
VMDK4Header header;
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent;
int64_t l1_backup_offset = 0;
ret = bdrv_pread(file, sizeof(magic), &header, sizeof(header));
s->desc_offset = 0x200;
ret = bdrv_pread(bs->file, sizeof(magic), &header, sizeof(header));
if (ret < 0) {
return ret;
goto fail;
}
if (header.capacity == 0 && header.desc_offset) {
return vmdk_open_desc_file(bs, flags, header.desc_offset << 9);
}
if (le64_to_cpu(header.gd_offset) == VMDK4_GD_AT_END) {
/*
* The footer takes precedence over the header, so read it in. The
* footer starts at offset -1024 from the end: One sector for the
* footer, and another one for the end-of-stream marker.
*/
struct {
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED footer_marker;
uint32_t magic;
VMDK4Header header;
uint8_t pad[512 - 4 - sizeof(VMDK4Header)];
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED eos_marker;
} QEMU_PACKED footer;
ret = bdrv_pread(file,
bs->file->total_sectors * 512 - 1536,
&footer, sizeof(footer));
if (ret < 0) {
return ret;
}
/* Some sanity checks for the footer */
if (be32_to_cpu(footer.magic) != VMDK4_MAGIC ||
le32_to_cpu(footer.footer_marker.size) != 0 ||
le32_to_cpu(footer.footer_marker.type) != MARKER_FOOTER ||
le64_to_cpu(footer.eos_marker.val) != 0 ||
le32_to_cpu(footer.eos_marker.size) != 0 ||
le32_to_cpu(footer.eos_marker.type) != MARKER_END_OF_STREAM)
{
return -EINVAL;
}
header = footer.header;
}
l1_entry_sectors = le32_to_cpu(header.num_gtes_per_gte)
* le64_to_cpu(header.granularity);
if (l1_entry_sectors == 0) {
return -EINVAL;
}
l1_size = (le64_to_cpu(header.capacity) + l1_entry_sectors - 1)
/ l1_entry_sectors;
if (le32_to_cpu(header.flags) & VMDK4_FLAG_RGD) {
l1_backup_offset = le64_to_cpu(header.rgd_offset) << 9;
}
extent = vmdk_add_extent(bs, file, false,
extent = vmdk_add_extent(bs, bs->file, false,
le64_to_cpu(header.capacity),
le64_to_cpu(header.gd_offset) << 9,
l1_backup_offset,
le64_to_cpu(header.rgd_offset) << 9,
l1_size,
le32_to_cpu(header.num_gtes_per_gte),
le64_to_cpu(header.granularity));
extent->compressed =
le16_to_cpu(header.compressAlgorithm) == VMDK4_COMPRESSION_DEFLATE;
extent->has_marker = le32_to_cpu(header.flags) & VMDK4_FLAG_MARKER;
if (extent->l1_entry_sectors <= 0) {
ret = -EINVAL;
goto fail;
}
/* try to open parent images, if exist */
ret = vmdk_parent_open(bs);
if (ret) {
goto fail;
}
s->parent_cid = vmdk_read_cid(bs, 1);
ret = vmdk_init_tables(bs, extent);
if (ret) {
/* free extent allocated by vmdk_add_extent */
vmdk_free_last_extent(bs);
goto fail;
}
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
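A quick sketch of where the footer read above lands; the helper is hypothetical, but the arithmetic matches the bdrv_pread call: one 512-byte sector each for the footer marker, the footer copy of the header, and the end-of-stream marker, i.e. the last 1536 bytes of the image.

/* Illustrative only: start offset of the streamOptimized footer block
 * (footer marker + footer + end-of-stream marker). */
static int64_t vmdk_footer_read_offset(BlockDriverState *file)
{
    return file->total_sectors * 512 - 1536;
}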
@@ -562,31 +460,6 @@ static int vmdk_parse_description(const char *desc, const char *opt_name,
return 0;
}
/* Open an extent file and append to bs array */
static int vmdk_open_sparse(BlockDriverState *bs,
BlockDriverState *file,
int flags)
{
uint32_t magic;
if (bdrv_pread(file, 0, &magic, sizeof(magic)) != sizeof(magic)) {
return -EIO;
}
magic = be32_to_cpu(magic);
switch (magic) {
case VMDK3_MAGIC:
return vmdk_open_vmdk3(bs, file, flags);
break;
case VMDK4_MAGIC:
return vmdk_open_vmdk4(bs, file, flags);
break;
default:
return -EINVAL;
break;
}
}
static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
const char *desc_file_path)
{
@@ -597,8 +470,6 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
const char *p = desc;
int64_t sectors = 0;
int64_t flat_offset;
char extent_path[PATH_MAX];
BlockDriverState *extent_file;
while (*p) {
/* parse extent line:
@@ -633,29 +504,24 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
goto next_line;
}
path_combine(extent_path, sizeof(extent_path),
desc_file_path, fname);
ret = bdrv_file_open(&extent_file, extent_path, bs->open_flags);
if (ret) {
return ret;
}
/* save to extents array */
if (!strcmp(type, "FLAT")) {
/* FLAT extent */
char extent_path[PATH_MAX];
BlockDriverState *extent_file;
VmdkExtent *extent;
extent = vmdk_add_extent(bs, extent_file, true, sectors,
0, 0, 0, 0, sectors);
extent->flat_start_offset = flat_offset << 9;
} else if (!strcmp(type, "SPARSE")) {
/* SPARSE extent */
ret = vmdk_open_sparse(bs, extent_file, bs->open_flags);
path_combine(extent_path, sizeof(extent_path),
desc_file_path, fname);
ret = bdrv_file_open(&extent_file, extent_path, bs->open_flags);
if (ret) {
bdrv_delete(extent_file);
return ret;
}
extent = vmdk_add_extent(bs, extent_file, true, sectors,
0, 0, 0, 0, sectors);
extent->flat_start_offset = flat_offset;
} else {
/* SPARSE extent, not supported for now */
fprintf(stderr,
"VMDK: Not supported extent type \"%s\""".\n", type);
return -ENOTSUP;
@@ -670,15 +536,14 @@ next_line:
return 0;
}
static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
int64_t desc_offset)
static int vmdk_open_desc_file(BlockDriverState *bs, int flags)
{
int ret;
char buf[2048];
char ct[128];
BDRVVmdkState *s = bs->opaque;
ret = bdrv_pread(bs->file, desc_offset, buf, sizeof(buf));
ret = bdrv_pread(bs->file, 0, buf, sizeof(buf));
if (ret < 0) {
return ret;
}
@@ -686,49 +551,42 @@ static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
if (vmdk_parse_description(buf, "createType", ct, sizeof(ct))) {
return -EINVAL;
}
if (strcmp(ct, "monolithicFlat") &&
strcmp(ct, "twoGbMaxExtentSparse") &&
strcmp(ct, "twoGbMaxExtentFlat")) {
if (strcmp(ct, "monolithicFlat")) {
fprintf(stderr,
"VMDK: Not supported image type \"%s\""".\n", ct);
return -ENOTSUP;
}
s->desc_offset = 0;
return vmdk_parse_extents(buf, bs, bs->file->filename);
ret = vmdk_parse_extents(buf, bs, bs->file->filename);
if (ret) {
return ret;
}
/* try to open parent images, if exist */
if (vmdk_parent_open(bs)) {
qemu_free(s->extents);
return -EINVAL;
}
s->parent_cid = vmdk_read_cid(bs, 1);
return 0;
}
static int vmdk_open(BlockDriverState *bs, int flags)
{
int ret;
BDRVVmdkState *s = bs->opaque;
uint32_t magic;
if (vmdk_open_sparse(bs, bs->file, flags) == 0) {
s->desc_offset = 0x200;
if (bdrv_pread(bs->file, 0, &magic, sizeof(magic)) != sizeof(magic)) {
return -EIO;
}
magic = be32_to_cpu(magic);
if (magic == VMDK3_MAGIC) {
return vmdk_open_vmdk3(bs, flags);
} else if (magic == VMDK4_MAGIC) {
return vmdk_open_vmdk4(bs, flags);
} else {
ret = vmdk_open_desc_file(bs, flags, 0);
if (ret) {
goto fail;
}
return vmdk_open_desc_file(bs, flags);
}
/* try to open parent images, if exist */
ret = vmdk_parent_open(bs);
if (ret) {
goto fail;
}
s->parent_cid = vmdk_read_cid(bs, 1);
qemu_co_mutex_init(&s->lock);
/* Disable migration when VMDK images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vmdk", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail:
vmdk_free_extents(bs);
return ret;
}
static int get_whole_cluster(BlockDriverState *bs,
@@ -814,7 +672,6 @@ static int get_cluster_offset(BlockDriverState *bs,
return 0;
}
offset -= (extent->end_sector - extent->sectors) * SECTOR_SIZE;
l1_index = (offset >> 9) / extent->l1_entry_sectors;
if (l1_index >= extent->l1_size) {
return -1;
@@ -867,12 +724,10 @@ static int get_cluster_offset(BlockDriverState *bs,
/* Avoid the L2 tables update for the images that have snapshots. */
*cluster_offset = bdrv_getlength(extent->file);
if (!extent->compressed) {
bdrv_truncate(
extent->file,
*cluster_offset + (extent->cluster_sectors << 9)
);
}
bdrv_truncate(
extent->file,
*cluster_offset + (extent->cluster_sectors << 9)
);
*cluster_offset >>= 9;
tmp = cpu_to_le32(*cluster_offset);
@@ -917,8 +772,8 @@ static VmdkExtent *find_extent(BDRVVmdkState *s,
return NULL;
}
static int coroutine_fn vmdk_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
static int vmdk_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
{
BDRVVmdkState *s = bs->opaque;
int64_t index_in_cluster, n, ret;
@@ -929,10 +784,8 @@ static int coroutine_fn vmdk_co_is_allocated(BlockDriverState *bs,
if (!extent) {
return 0;
}
qemu_co_mutex_lock(&s->lock);
ret = get_cluster_offset(bs, extent, NULL,
sector_num * 512, 0, &offset);
qemu_co_mutex_unlock(&s->lock);
/* get_cluster_offset returning 0 means success */
ret = !ret;
@@ -945,113 +798,6 @@ static int coroutine_fn vmdk_co_is_allocated(BlockDriverState *bs,
return ret;
}
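For reference, a minimal sketch of the two-level index arithmetic that get_cluster_offset (called above and by the read/write paths) performs before the truncate-and-allocate step shown a few hunks earlier; the helper name is hypothetical and it reuses the VmdkExtent fields referenced elsewhere in this file.

/* Illustrative only: L1/L2 indices for a byte offset within an extent. */
static void vmdk_cluster_indices(const VmdkExtent *extent, uint64_t offset,
                                 uint32_t *l1_index, uint32_t *l2_index)
{
    uint64_t sectors = offset >> 9;   /* byte offset to 512-byte sectors */

    *l1_index = sectors / extent->l1_entry_sectors;
    *l2_index = (sectors / extent->cluster_sectors) % extent->l2_size;
}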
static int vmdk_write_extent(VmdkExtent *extent, int64_t cluster_offset,
int64_t offset_in_cluster, const uint8_t *buf,
int nb_sectors, int64_t sector_num)
{
int ret;
VmdkGrainMarker *data = NULL;
uLongf buf_len;
const uint8_t *write_buf = buf;
int write_len = nb_sectors * 512;
if (extent->compressed) {
if (!extent->has_marker) {
ret = -EINVAL;
goto out;
}
buf_len = (extent->cluster_sectors << 9) * 2;
data = g_malloc(buf_len + sizeof(VmdkGrainMarker));
if (compress(data->data, &buf_len, buf, nb_sectors << 9) != Z_OK ||
buf_len == 0) {
ret = -EINVAL;
goto out;
}
data->lba = sector_num;
data->size = buf_len;
write_buf = (uint8_t *)data;
write_len = buf_len + sizeof(VmdkGrainMarker);
}
ret = bdrv_pwrite(extent->file,
cluster_offset + offset_in_cluster,
write_buf,
write_len);
if (ret != write_len) {
ret = ret < 0 ? ret : -EIO;
goto out;
}
ret = 0;
out:
g_free(data);
return ret;
}
static int vmdk_read_extent(VmdkExtent *extent, int64_t cluster_offset,
int64_t offset_in_cluster, uint8_t *buf,
int nb_sectors)
{
int ret;
int cluster_bytes, buf_bytes;
uint8_t *cluster_buf, *compressed_data;
uint8_t *uncomp_buf;
uint32_t data_len;
VmdkGrainMarker *marker;
uLongf buf_len;
if (!extent->compressed) {
ret = bdrv_pread(extent->file,
cluster_offset + offset_in_cluster,
buf, nb_sectors * 512);
if (ret == nb_sectors * 512) {
return 0;
} else {
return -EIO;
}
}
cluster_bytes = extent->cluster_sectors * 512;
/* Read two clusters in case GrainMarker + compressed data > one cluster */
buf_bytes = cluster_bytes * 2;
cluster_buf = g_malloc(buf_bytes);
uncomp_buf = g_malloc(cluster_bytes);
ret = bdrv_pread(extent->file,
cluster_offset,
cluster_buf, buf_bytes);
if (ret < 0) {
goto out;
}
compressed_data = cluster_buf;
buf_len = cluster_bytes;
data_len = cluster_bytes;
if (extent->has_marker) {
marker = (VmdkGrainMarker *)cluster_buf;
compressed_data = marker->data;
data_len = le32_to_cpu(marker->size);
}
if (!data_len || data_len > buf_bytes) {
ret = -EINVAL;
goto out;
}
ret = uncompress(uncomp_buf, &buf_len, compressed_data, data_len);
if (ret != Z_OK) {
ret = -EINVAL;
goto out;
}
if (offset_in_cluster < 0 ||
offset_in_cluster + nb_sectors * 512 > buf_len) {
ret = -EINVAL;
goto out;
}
memcpy(buf, uncomp_buf + offset_in_cluster, nb_sectors * 512);
ret = 0;
out:
g_free(uncomp_buf);
g_free(cluster_buf);
return ret;
}
static int vmdk_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
@@ -1088,10 +834,10 @@ static int vmdk_read(BlockDriverState *bs, int64_t sector_num,
memset(buf, 0, 512 * n);
}
} else {
ret = vmdk_read_extent(extent,
cluster_offset, index_in_cluster * 512,
buf, n);
if (ret) {
ret = bdrv_pread(extent->file,
cluster_offset + index_in_cluster * 512,
buf, n * 512);
if (ret < 0) {
return ret;
}
}
@@ -1102,17 +848,6 @@ static int vmdk_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int vmdk_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVmdkState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vmdk_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
@@ -1140,25 +875,8 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
bs,
extent,
&m_data,
sector_num << 9, !extent->compressed,
sector_num << 9, 1,
&cluster_offset);
if (extent->compressed) {
if (ret == 0) {
/* Refuse write to allocated cluster for streamOptimized */
fprintf(stderr,
"VMDK: can't write to allocated cluster"
" for streamOptimized\n");
return -EIO;
} else {
/* allocate */
ret = get_cluster_offset(
bs,
extent,
&m_data,
sector_num << 9, 1,
&cluster_offset);
}
}
if (ret) {
return -EINVAL;
}
@@ -1168,10 +886,11 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
n = nb_sectors;
}
ret = vmdk_write_extent(extent,
cluster_offset, index_in_cluster * 512,
buf, n, sector_num);
if (ret) {
ret = bdrv_pwrite(extent->file,
cluster_offset + index_in_cluster * 512,
buf,
n * 512);
if (ret < 0) {
return ret;
}
if (m_data.valid) {
@@ -1187,39 +906,25 @@ static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
/* update CID on the first write every time the virtual disk is
* opened */
if (!s->cid_updated) {
ret = vmdk_write_cid(bs, time(NULL));
if (ret < 0) {
return ret;
}
vmdk_write_cid(bs, time(NULL));
s->cid_updated = true;
}
}
return 0;
}
static coroutine_fn int vmdk_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVmdkState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vmdk_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int vmdk_create_extent(const char *filename, int64_t filesize,
bool flat, bool compress)
static int vmdk_create_extent(const char *filename, int64_t filesize, bool flat)
{
int ret, i;
int fd = 0;
VMDK4Header header;
uint32_t tmp, magic, grains, gd_size, gt_size, gt_count;
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
fd = open(
filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
if (fd < 0) {
return -errno;
}
@@ -1233,9 +938,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
magic = cpu_to_be32(VMDK4_MAGIC);
memset(&header, 0, sizeof(header));
header.version = 1;
header.flags =
3 | (compress ? VMDK4_FLAG_COMPRESS | VMDK4_FLAG_MARKER : 0);
header.compressAlgorithm = compress ? VMDK4_COMPRESSION_DEFLATE : 0;
header.flags = 3; /* ?? */
header.capacity = filesize / 512;
header.granularity = 128;
header.num_gtes_per_gte = 512;
@@ -1265,7 +968,6 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
header.rgd_offset = cpu_to_le64(header.rgd_offset);
header.gd_offset = cpu_to_le64(header.gd_offset);
header.grain_offset = cpu_to_le64(header.grain_offset);
header.compressAlgorithm = cpu_to_le16(header.compressAlgorithm);
header.check_bytes[0] = 0xa;
header.check_bytes[1] = 0x20;
@@ -1314,7 +1016,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
ret = 0;
exit:
qemu_close(fd);
close(fd);
return ret;
}
@@ -1407,7 +1109,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
const char *fmt = NULL;
int flags = 0;
int ret = 0;
bool flat, split, compress;
bool flat, split;
char ext_desc_lines[BUF_SIZE] = "";
char path[PATH_MAX], prefix[PATH_MAX], postfix[PATH_MAX];
const int64_t split_size = 0x80000000; /* VMDK has constant split size */
@@ -1456,8 +1158,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
} else if (strcmp(fmt, "monolithicFlat") &&
strcmp(fmt, "monolithicSparse") &&
strcmp(fmt, "twoGbMaxExtentSparse") &&
strcmp(fmt, "twoGbMaxExtentFlat") &&
strcmp(fmt, "streamOptimized")) {
strcmp(fmt, "twoGbMaxExtentFlat")) {
fprintf(stderr, "VMDK: Unknown subformat: %s\n", fmt);
return -EINVAL;
}
@@ -1465,7 +1166,6 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
strcmp(fmt, "twoGbMaxExtentSparse"));
flat = !(strcmp(fmt, "monolithicFlat") &&
strcmp(fmt, "twoGbMaxExtentFlat"));
compress = !strcmp(fmt, "streamOptimized");
if (flat) {
desc_extent_line = "RW %lld FLAT \"%s\" 0\n";
} else {
@@ -1487,6 +1187,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
bdrv_delete(bs);
return -EINVAL;
}
filesize = bdrv_getlength(bs);
parent_cid = vmdk_read_cid(bs, 0);
bdrv_delete(bs);
relative_path(parent_filename, sizeof(parent_filename),
@@ -1519,7 +1220,7 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
snprintf(ext_filename, sizeof(ext_filename), "%s%s",
path, desc_filename);
if (vmdk_create_extent(ext_filename, size, flat, compress)) {
if (vmdk_create_extent(ext_filename, size, flat)) {
return -EINVAL;
}
filesize -= size;
@@ -1539,13 +1240,15 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
(flags & BLOCK_FLAG_COMPAT6 ? 6 : 4),
total_size / (int64_t)(63 * 16 * 512));
if (split || flat) {
fd = qemu_open(filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
fd = open(
filename,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY | O_LARGEFILE,
0644);
} else {
fd = qemu_open(filename,
O_WRONLY | O_BINARY | O_LARGEFILE,
0644);
fd = open(
filename,
O_WRONLY | O_BINARY | O_LARGEFILE,
0644);
}
if (fd < 0) {
return -errno;
@@ -1562,28 +1265,23 @@ static int vmdk_create(const char *filename, QEMUOptionParameter *options)
}
ret = 0;
exit:
qemu_close(fd);
close(fd);
return ret;
}
static void vmdk_close(BlockDriverState *bs)
{
BDRVVmdkState *s = bs->opaque;
vmdk_free_extents(bs);
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
static coroutine_fn int vmdk_co_flush(BlockDriverState *bs)
static int vmdk_flush(BlockDriverState *bs)
{
int i, ret, err;
BDRVVmdkState *s = bs->opaque;
int i, err;
int ret = 0;
ret = bdrv_flush(bs->file);
for (i = 0; i < s->num_extents; i++) {
err = bdrv_co_flush(s->extents[i].file);
err = bdrv_flush(s->extents[i].file);
if (err < 0) {
ret = err;
}
@@ -1636,7 +1334,7 @@ static QEMUOptionParameter vmdk_create_options[] = {
.type = OPT_STRING,
.help =
"VMDK flat extent format, can be one of "
"{monolithicSparse (default) | monolithicFlat | twoGbMaxExtentSparse | twoGbMaxExtentFlat | streamOptimized} "
"{monolithicSparse (default) | monolithicFlat | twoGbMaxExtentSparse | twoGbMaxExtentFlat} "
},
{ NULL }
};
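As a usage note (assuming the -o option plumbing behaves as in qemu-img), the subformat listed above is chosen at creation time, for example: qemu-img create -f vmdk -o subformat=twoGbMaxExtentFlat test.vmdk 10G; leaving subformat unset falls back to monolithicSparse.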
@@ -1646,12 +1344,12 @@ static BlockDriver bdrv_vmdk = {
.instance_size = sizeof(BDRVVmdkState),
.bdrv_probe = vmdk_probe,
.bdrv_open = vmdk_open,
.bdrv_read = vmdk_co_read,
.bdrv_write = vmdk_co_write,
.bdrv_read = vmdk_read,
.bdrv_write = vmdk_write,
.bdrv_close = vmdk_close,
.bdrv_create = vmdk_create,
.bdrv_co_flush_to_disk = vmdk_co_flush,
.bdrv_co_is_allocated = vmdk_co_is_allocated,
.bdrv_flush = vmdk_flush,
.bdrv_is_allocated = vmdk_is_allocated,
.bdrv_get_allocated_file_size = vmdk_get_allocated_file_size,
.create_options = vmdk_create_options,


@@ -25,7 +25,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
/**************************************************************/
@@ -111,7 +110,6 @@ struct vhd_dyndisk_header {
};
typedef struct BDRVVPCState {
CoMutex lock;
uint8_t footer_buf[HEADER_SIZE];
uint64_t free_data_block_offset;
int max_table_entries;
@@ -129,8 +127,6 @@ typedef struct BDRVVPCState {
uint64_t last_bitmap;
#endif
Error *migration_blocker;
} BDRVVPCState;
static uint32_t vpc_checksum(uint8_t* buf, size_t size)
@@ -160,28 +156,13 @@ static int vpc_open(BlockDriverState *bs, int flags)
struct vhd_dyndisk_header* dyndisk_header;
uint8_t buf[HEADER_SIZE];
uint32_t checksum;
int err = -1;
int disk_type = VHD_DYNAMIC;
if (bdrv_pread(bs->file, 0, s->footer_buf, HEADER_SIZE) != HEADER_SIZE)
goto fail;
footer = (struct vhd_footer*) s->footer_buf;
if (strncmp(footer->creator, "conectix", 8)) {
int64_t offset = bdrv_getlength(bs->file);
if (offset < HEADER_SIZE) {
goto fail;
}
/* If a fixed disk, the footer is found only at the end of the file */
if (bdrv_pread(bs->file, offset-HEADER_SIZE, s->footer_buf, HEADER_SIZE)
!= HEADER_SIZE) {
goto fail;
}
if (strncmp(footer->creator, "conectix", 8)) {
goto fail;
}
disk_type = VHD_FIXED;
}
if (strncmp(footer->creator, "conectix", 8))
goto fail;
checksum = be32_to_cpu(footer->checksum);
footer->checksum = 0;
@@ -189,80 +170,59 @@ static int vpc_open(BlockDriverState *bs, int flags)
fprintf(stderr, "block-vpc: The header checksum of '%s' is "
"incorrect.\n", bs->filename);
/* Write 'checksum' back to the footer, or else it will be left at zero. */
footer->checksum = be32_to_cpu(checksum);
// The visible size of an image in Virtual PC depends on the geometry
// rather than on the size stored in the footer (the size in the footer
// is usually too large)
bs->total_sectors = (int64_t)
be16_to_cpu(footer->cyls) * footer->heads * footer->secs_per_cyl;
if (bs->total_sectors >= 65535 * 16 * 255) {
err = -EFBIG;
if (bdrv_pread(bs->file, be64_to_cpu(footer->data_offset), buf, HEADER_SIZE)
!= HEADER_SIZE)
goto fail;
dyndisk_header = (struct vhd_dyndisk_header*) buf;
if (strncmp(dyndisk_header->magic, "cxsparse", 8))
goto fail;
s->block_size = be32_to_cpu(dyndisk_header->block_size);
s->bitmap_size = ((s->block_size / (8 * 512)) + 511) & ~511;
s->max_table_entries = be32_to_cpu(dyndisk_header->max_table_entries);
s->pagetable = qemu_malloc(s->max_table_entries * 4);
s->bat_offset = be64_to_cpu(dyndisk_header->table_offset);
if (bdrv_pread(bs->file, s->bat_offset, s->pagetable,
s->max_table_entries * 4) != s->max_table_entries * 4)
goto fail;
s->free_data_block_offset =
(s->bat_offset + (s->max_table_entries * 4) + 511) & ~511;
for (i = 0; i < s->max_table_entries; i++) {
be32_to_cpus(&s->pagetable[i]);
if (s->pagetable[i] != 0xFFFFFFFF) {
int64_t next = (512 * (int64_t) s->pagetable[i]) +
s->bitmap_size + s->block_size;
if (next> s->free_data_block_offset)
s->free_data_block_offset = next;
}
}
if (disk_type == VHD_DYNAMIC) {
if (bdrv_pread(bs->file, be64_to_cpu(footer->data_offset), buf,
HEADER_SIZE) != HEADER_SIZE) {
goto fail;
}
dyndisk_header = (struct vhd_dyndisk_header *) buf;
if (strncmp(dyndisk_header->magic, "cxsparse", 8)) {
goto fail;
}
s->block_size = be32_to_cpu(dyndisk_header->block_size);
s->bitmap_size = ((s->block_size / (8 * 512)) + 511) & ~511;
s->max_table_entries = be32_to_cpu(dyndisk_header->max_table_entries);
s->pagetable = g_malloc(s->max_table_entries * 4);
s->bat_offset = be64_to_cpu(dyndisk_header->table_offset);
if (bdrv_pread(bs->file, s->bat_offset, s->pagetable,
s->max_table_entries * 4) != s->max_table_entries * 4) {
goto fail;
}
s->free_data_block_offset =
(s->bat_offset + (s->max_table_entries * 4) + 511) & ~511;
for (i = 0; i < s->max_table_entries; i++) {
be32_to_cpus(&s->pagetable[i]);
if (s->pagetable[i] != 0xFFFFFFFF) {
int64_t next = (512 * (int64_t) s->pagetable[i]) +
s->bitmap_size + s->block_size;
if (next > s->free_data_block_offset) {
s->free_data_block_offset = next;
}
}
}
s->last_bitmap_offset = (int64_t) -1;
s->last_bitmap_offset = (int64_t) -1;
#ifdef CACHE
s->pageentry_u8 = g_malloc(512);
s->pageentry_u32 = s->pageentry_u8;
s->pageentry_u16 = s->pageentry_u8;
s->last_pagetable = -1;
s->pageentry_u8 = qemu_malloc(512);
s->pageentry_u32 = s->pageentry_u8;
s->pageentry_u16 = s->pageentry_u8;
s->last_pagetable = -1;
#endif
}
qemu_co_mutex_init(&s->lock);
/* Disable migration when VHD images are used */
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vpc", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
return 0;
fail:
return err;
return -1;
}
/*
@@ -384,11 +344,8 @@ static int64_t alloc_block(BlockDriverState* bs, int64_t sector_num)
// Initialize the block's bitmap
memset(bitmap, 0xff, s->bitmap_size);
ret = bdrv_pwrite_sync(bs->file, s->free_data_block_offset, bitmap,
bdrv_pwrite_sync(bs->file, s->free_data_block_offset, bitmap,
s->bitmap_size);
if (ret < 0) {
return ret;
}
// Write new footer (the old one will be overwritten)
s->free_data_block_offset += s->block_size + s->bitmap_size;
@@ -417,11 +374,7 @@ static int vpc_read(BlockDriverState *bs, int64_t sector_num,
int ret;
int64_t offset;
int64_t sectors, sectors_per_block;
struct vhd_footer *footer = (struct vhd_footer *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_read(bs->file, sector_num, buf, nb_sectors);
}
while (nb_sectors > 0) {
offset = get_sector_offset(bs, sector_num, 0);
@@ -448,17 +401,6 @@ static int vpc_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int vpc_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVPCState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vpc_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int vpc_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
@@ -466,11 +408,7 @@ static int vpc_write(BlockDriverState *bs, int64_t sector_num,
int64_t offset;
int64_t sectors, sectors_per_block;
int ret;
struct vhd_footer *footer = (struct vhd_footer *) s->footer_buf;
if (cpu_to_be32(footer->type) == VHD_FIXED) {
return bdrv_write(bs->file, sector_num, buf, nb_sectors);
}
while (nb_sectors > 0) {
offset = get_sector_offset(bs, sector_num, 1);
@@ -499,15 +437,9 @@ static int vpc_write(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int vpc_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
static int vpc_flush(BlockDriverState *bs)
{
int ret;
BDRVVPCState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vpc_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
return bdrv_flush(bs->file);
}
/*
@@ -558,14 +490,70 @@ static int calculate_geometry(int64_t total_sectors, uint16_t* cyls,
return 0;
}
static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
static int vpc_create(const char *filename, QEMUOptionParameter *options)
{
uint8_t buf[1024];
struct vhd_footer* footer = (struct vhd_footer*) buf;
struct vhd_dyndisk_header* dyndisk_header =
(struct vhd_dyndisk_header*) buf;
int fd, i;
uint16_t cyls = 0;
uint8_t heads = 0;
uint8_t secs_per_cyl = 0;
size_t block_size, num_bat_entries;
int i;
int64_t total_sectors = 0;
int ret = -EIO;
// Read out options
total_sectors = get_option_parameter(options, BLOCK_OPT_SIZE)->value.n /
BDRV_SECTOR_SIZE;
// Create the file
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0)
return -EIO;
/* Calculate matching total_size and geometry. Increase the number of
sectors requested until we get enough (or fail). */
for (i = 0; total_sectors > (int64_t)cyls * heads * secs_per_cyl; i++) {
if (calculate_geometry(total_sectors + i,
&cyls, &heads, &secs_per_cyl)) {
ret = -EFBIG;
goto fail;
}
}
total_sectors = (int64_t) cyls * heads * secs_per_cyl;
// Prepare the Hard Disk Footer
memset(buf, 0, 1024);
memcpy(footer->creator, "conectix", 8);
// TODO Check if "qemu" creator_app is ok for VPC
memcpy(footer->creator_app, "qemu", 4);
memcpy(footer->creator_os, "Wi2k", 4);
footer->features = be32_to_cpu(0x02);
footer->version = be32_to_cpu(0x00010000);
footer->data_offset = be64_to_cpu(HEADER_SIZE);
footer->timestamp = be32_to_cpu(time(NULL) - VHD_TIMESTAMP_BASE);
// Version of Virtual PC 2007
footer->major = be16_to_cpu(0x0005);
footer->minor =be16_to_cpu(0x0003);
footer->orig_size = be64_to_cpu(total_sectors * 512);
footer->size = be64_to_cpu(total_sectors * 512);
footer->cyls = be16_to_cpu(cyls);
footer->heads = heads;
footer->secs_per_cyl = secs_per_cyl;
footer->type = be32_to_cpu(VHD_DYNAMIC);
// TODO uuid is missing
footer->checksum = be32_to_cpu(vpc_checksum(buf, HEADER_SIZE));
// Write the footer (twice: at the beginning and at the end)
block_size = 0x200000;
num_bat_entries = (total_sectors + block_size / 512) / (block_size / 512);
@@ -593,16 +581,13 @@ static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
}
}
// Prepare the Dynamic Disk Header
memset(buf, 0, 1024);
memcpy(dyndisk_header->magic, "cxsparse", 8);
/*
* Note: The spec is actually wrong here for data_offset, it says
* 0xFFFFFFFF, but MS tools expect all 64 bits to be set.
*/
dyndisk_header->data_offset = be64_to_cpu(0xFFFFFFFFFFFFFFFFULL);
dyndisk_header->data_offset = be64_to_cpu(0xFFFFFFFF);
dyndisk_header->table_offset = be64_to_cpu(3 * 512);
dyndisk_header->version = be32_to_cpu(0x00010000);
dyndisk_header->block_size = be32_to_cpu(block_size);
@@ -621,143 +606,17 @@ static int create_dynamic_disk(int fd, uint8_t *buf, int64_t total_sectors)
ret = 0;
fail:
return ret;
}
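A worked example of the sizing above: with the fixed 2 MiB block_size (0x200000), block_size / 512 is 4096, so a 10 GiB image (20971520 sectors) gets (20971520 + 4096) / 4096 = 5121 BAT entries; the added block_size / 512 term rounds the division up and leaves one spare entry when the size divides evenly. Each entry starts out as 0xFFFFFFFF, which vpc_open treats as an unallocated block.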
static int create_fixed_disk(int fd, uint8_t *buf, int64_t total_size)
{
int ret = -EIO;
/* Add footer to total size */
total_size += 512;
if (ftruncate(fd, total_size) != 0) {
ret = -errno;
goto fail;
}
if (lseek(fd, -512, SEEK_END) < 0) {
goto fail;
}
if (write(fd, buf, HEADER_SIZE) != HEADER_SIZE) {
goto fail;
}
ret = 0;
fail:
return ret;
}
static int vpc_create(const char *filename, QEMUOptionParameter *options)
{
uint8_t buf[1024];
struct vhd_footer *footer = (struct vhd_footer *) buf;
QEMUOptionParameter *disk_type_param;
int fd, i;
uint16_t cyls = 0;
uint8_t heads = 0;
uint8_t secs_per_cyl = 0;
int64_t total_sectors;
int64_t total_size;
int disk_type;
int ret = -EIO;
/* Read out options */
total_size = get_option_parameter(options, BLOCK_OPT_SIZE)->value.n;
disk_type_param = get_option_parameter(options, BLOCK_OPT_SUBFMT);
if (disk_type_param && disk_type_param->value.s) {
if (!strcmp(disk_type_param->value.s, "dynamic")) {
disk_type = VHD_DYNAMIC;
} else if (!strcmp(disk_type_param->value.s, "fixed")) {
disk_type = VHD_FIXED;
} else {
return -EINVAL;
}
} else {
disk_type = VHD_DYNAMIC;
}
/* Create the file */
fd = qemu_open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0644);
if (fd < 0) {
return -EIO;
}
/*
* Calculate matching total_size and geometry. Increase the number of
* sectors requested until we get enough (or fail). This ensures that
* qemu-img convert doesn't truncate images, but rather rounds up.
*/
total_sectors = total_size / BDRV_SECTOR_SIZE;
for (i = 0; total_sectors > (int64_t)cyls * heads * secs_per_cyl; i++) {
if (calculate_geometry(total_sectors + i, &cyls, &heads,
&secs_per_cyl))
{
ret = -EFBIG;
goto fail;
}
}
total_sectors = (int64_t) cyls * heads * secs_per_cyl;
/* Prepare the Hard Disk Footer */
memset(buf, 0, 1024);
memcpy(footer->creator, "conectix", 8);
/* TODO Check if "qemu" creator_app is ok for VPC */
memcpy(footer->creator_app, "qemu", 4);
memcpy(footer->creator_os, "Wi2k", 4);
footer->features = be32_to_cpu(0x02);
footer->version = be32_to_cpu(0x00010000);
if (disk_type == VHD_DYNAMIC) {
footer->data_offset = be64_to_cpu(HEADER_SIZE);
} else {
footer->data_offset = be64_to_cpu(0xFFFFFFFFFFFFFFFFULL);
}
footer->timestamp = be32_to_cpu(time(NULL) - VHD_TIMESTAMP_BASE);
/* Version of Virtual PC 2007 */
footer->major = be16_to_cpu(0x0005);
footer->minor = be16_to_cpu(0x0003);
if (disk_type == VHD_DYNAMIC) {
footer->orig_size = be64_to_cpu(total_sectors * 512);
footer->size = be64_to_cpu(total_sectors * 512);
} else {
footer->orig_size = be64_to_cpu(total_size);
footer->size = be64_to_cpu(total_size);
}
footer->cyls = be16_to_cpu(cyls);
footer->heads = heads;
footer->secs_per_cyl = secs_per_cyl;
footer->type = be32_to_cpu(disk_type);
/* TODO uuid is missing */
footer->checksum = be32_to_cpu(vpc_checksum(buf, HEADER_SIZE));
if (disk_type == VHD_DYNAMIC) {
ret = create_dynamic_disk(fd, buf, total_sectors);
} else {
ret = create_fixed_disk(fd, buf, total_size);
}
fail:
qemu_close(fd);
close(fd);
return ret;
}
static void vpc_close(BlockDriverState *bs)
{
BDRVVPCState *s = bs->opaque;
g_free(s->pagetable);
qemu_free(s->pagetable);
#ifdef CACHE
g_free(s->pageentry_u8);
qemu_free(s->pageentry_u8);
#endif
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
static QEMUOptionParameter vpc_create_options[] = {
@@ -766,28 +625,20 @@ static QEMUOptionParameter vpc_create_options[] = {
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_SUBFMT,
.type = OPT_STRING,
.help =
"Type of virtual hard disk format. Supported formats are "
"{dynamic (default) | fixed} "
},
{ NULL }
};
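As a usage note (assuming -o handling matches qemu-img), something like qemu-img create -f vpc -o subformat=fixed test.vhd 10G creates a fixed-size VHD whose footer lives only at the end of the file, which is exactly what the fixed-disk fallback in vpc_open earlier in this diff looks for, while the default subformat=dynamic keeps the sparse, BAT-based layout.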
static BlockDriver bdrv_vpc = {
.format_name = "vpc",
.instance_size = sizeof(BDRVVPCState),
.bdrv_probe = vpc_probe,
.bdrv_open = vpc_open,
.bdrv_read = vpc_read,
.bdrv_write = vpc_write,
.bdrv_flush = vpc_flush,
.bdrv_close = vpc_close,
.bdrv_create = vpc_create,
.bdrv_read = vpc_co_read,
.bdrv_write = vpc_co_write,
.create_options = vpc_create_options,
};


@@ -27,7 +27,6 @@
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
#include "migration.h"
#ifndef S_IWGRP
#define S_IWGRP 0
@@ -87,7 +86,8 @@ static inline void array_init(array_t* array,unsigned int item_size)
static inline void array_free(array_t* array)
{
g_free(array->pointer);
if(array->pointer)
free(array->pointer);
array->size=array->next=0;
}
@@ -101,7 +101,7 @@ static inline int array_ensure_allocated(array_t* array, int index)
{
if((index + 1) * array->item_size > array->size) {
int new_size = (index + 32) * array->item_size;
array->pointer = g_realloc(array->pointer, new_size);
array->pointer = qemu_realloc(array->pointer, new_size);
if (!array->pointer)
return -1;
array->size = new_size;
@@ -127,7 +127,7 @@ static inline void* array_get_next(array_t* array) {
static inline void* array_insert(array_t* array,unsigned int index,unsigned int count) {
if((array->next+count)*array->item_size>array->size) {
int increment=count*array->item_size;
array->pointer=g_realloc(array->pointer,array->size+increment);
array->pointer=qemu_realloc(array->pointer,array->size+increment);
if(!array->pointer)
return NULL;
array->size+=increment;
@@ -159,7 +159,7 @@ static inline int array_roll(array_t* array,int index_to,int index_from,int coun
is=array->item_size;
from=array->pointer+index_from*is;
to=array->pointer+index_to*is;
buf=g_malloc(is*count);
buf=qemu_malloc(is*count);
memcpy(buf,from,is*count);
if(index_to<index_from)
@@ -169,7 +169,7 @@ static inline int array_roll(array_t* array,int index_to,int index_from,int coun
memcpy(to,buf,is*count);
g_free(buf);
free(buf);
return 0;
}
@@ -200,7 +200,7 @@ static int array_index(array_t* array, void* pointer)
}
/* These structures are used to fake a disk and the VFAT filesystem.
* For this reason we need to use QEMU_PACKED. */
* For this reason we need to use __attribute__((packed)). */
typedef struct bootsector_t {
uint8_t jump[3];
@@ -224,7 +224,7 @@ typedef struct bootsector_t {
uint8_t signature;
uint32_t id;
uint8_t volume_label[11];
} QEMU_PACKED fat16;
} __attribute__((packed)) fat16;
struct {
uint32_t sectors_per_fat;
uint16_t flags;
@@ -233,12 +233,12 @@ typedef struct bootsector_t {
uint16_t info_sector;
uint16_t backup_boot_sector;
uint16_t ignored;
} QEMU_PACKED fat32;
} __attribute__((packed)) fat32;
} u;
uint8_t fat_type[8];
uint8_t ignored[0x1c0];
uint8_t magic[2];
} QEMU_PACKED bootsector_t;
} __attribute__((packed)) bootsector_t;
typedef struct {
uint8_t head;
@@ -253,7 +253,7 @@ typedef struct partition_t {
mbr_chs_t end_CHS;
uint32_t start_sector_long;
uint32_t length_sector_long;
} QEMU_PACKED partition_t;
} __attribute__((packed)) partition_t;
typedef struct mbr_t {
uint8_t ignored[0x1b8];
@@ -261,7 +261,7 @@ typedef struct mbr_t {
uint8_t ignored2[2];
partition_t partition[4];
uint8_t magic[2];
} QEMU_PACKED mbr_t;
} __attribute__((packed)) mbr_t;
typedef struct direntry_t {
uint8_t name[8];
@@ -276,7 +276,7 @@ typedef struct direntry_t {
uint16_t mdate;
uint16_t begin;
uint32_t size;
} QEMU_PACKED direntry_t;
} __attribute__((packed)) direntry_t;
/* this structure is used to transparently access the files */
@@ -318,7 +318,6 @@ static void print_mapping(const struct mapping_t* mapping);
/* here begins the real VVFAT driver */
typedef struct BDRVVVFATState {
CoMutex lock;
BlockDriverState* bs; /* pointer to parent */
unsigned int first_sectors_number; /* 1 for a single partition, 0x40 for a disk with partition table */
unsigned char first_sectors[0x40*0x200];
@@ -351,20 +350,17 @@ typedef struct BDRVVVFATState {
array_t commits;
const char* path;
int downcase_short_names;
Error *migration_blocker;
} BDRVVVFATState;
/* take the sector position spos and convert it to Cylinder/Head/Sector position
* if the position is outside the specified geometry, fill maximum value for CHS
* and return 1 to signal overflow.
*/
static int sector2CHS(mbr_chs_t *chs, int spos, int cyls, int heads, int secs)
{
static int sector2CHS(BlockDriverState* bs, mbr_chs_t * chs, int spos){
int head,sector;
sector = spos % secs; spos /= secs;
head = spos % heads; spos /= heads;
if (spos >= cyls) {
sector = spos % (bs->secs); spos/= bs->secs;
head = spos % (bs->heads); spos/= bs->heads;
if(spos >= bs->cyls){
/* Overflow,
it happens if 32bit sector positions are used, while CHS is only 24bit.
Windows/Dos is said to take 1023/255/63 as nonrepresentable CHS */
@@ -379,7 +375,7 @@ static int sector2CHS(mbr_chs_t *chs, int spos, int cyls, int heads, int secs)
return 0;
}
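A quick worked example of the conversion above, using the 1024/16/63 hard-disk geometry set up later in vvfat_open: sector position 2048 yields sector = 2048 % 63 = 32, head = (2048 / 63) % 16 = 0 and cylinder = (2048 / 63) / 16 = 2; that cylinder is below 1024, so no overflow is signalled.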
static void init_mbr(BDRVVVFATState *s, int cyls, int heads, int secs)
static void init_mbr(BDRVVVFATState* s)
{
/* TODO: if the files mbr.img and bootsect.img exist, use them */
mbr_t* real_mbr=(mbr_t*)s->first_sectors;
@@ -394,15 +390,12 @@ static void init_mbr(BDRVVVFATState *s, int cyls, int heads, int secs)
partition->attributes=0x80; /* bootable */
/* LBA is used when partition is outside the CHS geometry */
lba = sector2CHS(&partition->start_CHS, s->first_sectors_number - 1,
cyls, heads, secs);
lba |= sector2CHS(&partition->end_CHS, s->bs->total_sectors - 1,
cyls, heads, secs);
lba = sector2CHS(s->bs, &partition->start_CHS, s->first_sectors_number-1);
lba|= sector2CHS(s->bs, &partition->end_CHS, s->sector_count);
/*LBA partitions are identified only by start/length_sector_long not by CHS*/
partition->start_sector_long = cpu_to_le32(s->first_sectors_number - 1);
partition->length_sector_long = cpu_to_le32(s->bs->total_sectors
- s->first_sectors_number + 1);
partition->start_sector_long =cpu_to_le32(s->first_sectors_number-1);
partition->length_sector_long=cpu_to_le32(s->sector_count - s->first_sectors_number+1);
/* FAT12/FAT16/FAT32 */
/* DOS uses different types when partition is LBA,
@@ -735,11 +728,11 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
if(first_cluster == 0 && (is_dotdot || is_dot))
continue;
buffer=(char*)g_malloc(length);
buffer=(char*)qemu_malloc(length);
snprintf(buffer,length,"%s/%s",dirname,entry->d_name);
if(stat(buffer,&st)<0) {
g_free(buffer);
free(buffer);
continue;
}
@@ -762,7 +755,7 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
direntry->begin=0; /* do that later */
if (st.st_size > 0x7fffffff) {
fprintf(stderr, "File %s is larger than 2GB\n", buffer);
g_free(buffer);
free(buffer);
closedir(dir);
return -2;
}
@@ -806,7 +799,6 @@ static int read_directory(BDRVVVFATState* s, int mapping_index)
/* root directory */
int cur = s->directory.next;
array_ensure_allocated(&(s->directory), ROOT_ENTRIES - 1);
s->directory.next = ROOT_ENTRIES;
memset(array_get(&(s->directory), cur), 0,
(ROOT_ENTRIES - cur) * sizeof(direntry_t));
}
@@ -833,8 +825,22 @@ static inline off_t cluster2sector(BDRVVVFATState* s, uint32_t cluster_num)
return s->faked_sectors + s->sectors_per_cluster * cluster_num;
}
static inline uint32_t sector_offset_in_cluster(BDRVVVFATState* s,off_t sector_num)
{
return (sector_num-s->first_sectors_number-2*s->sectors_per_fat)%s->sectors_per_cluster;
}
#ifdef DBG
static direntry_t* get_direntry_for_mapping(BDRVVVFATState* s,mapping_t* mapping)
{
if(mapping->mode==MODE_UNDEFINED)
return 0;
return (direntry_t*)(s->directory.pointer+sizeof(direntry_t)*mapping->dir_index);
}
#endif
static int init_directories(BDRVVVFATState* s,
const char *dirname, int heads, int secs)
const char* dirname)
{
bootsector_t* bootsector;
mapping_t* mapping;
@@ -844,7 +850,7 @@ static int init_directories(BDRVVVFATState* s,
memset(&(s->first_sectors[0]),0,0x40*0x200);
s->cluster_size=s->sectors_per_cluster*0x200;
s->cluster_buffer=g_malloc(s->cluster_size);
s->cluster_buffer=qemu_malloc(s->cluster_size);
/*
* The formula: sc = spf+1+spf*spc*(512*8/fat_type),
@@ -878,7 +884,7 @@ static int init_directories(BDRVVVFATState* s,
mapping->dir_index = 0;
mapping->info.dir.parent_mapping_index = -1;
mapping->first_mapping_index = -1;
mapping->path = g_strdup(dirname);
mapping->path = qemu_strdup(dirname);
i = strlen(mapping->path);
if (i > 0 && mapping->path[i - 1] == '/')
mapping->path[i - 1] = '\0';
@@ -923,8 +929,11 @@ static int init_directories(BDRVVVFATState* s,
cluster = mapping->end;
if(cluster > s->cluster_count) {
fprintf(stderr,"Directory does not fit in FAT%d (capacity %.2f MB)\n",
s->fat_type, s->sector_count / 2000.0);
fprintf(stderr,"Directory does not fit in FAT%d (capacity %s)\n",
s->fat_type,
s->fat_type == 12 ? s->sector_count == 2880 ? "1.44 MB"
: "2.88 MB"
: "504MB");
return -EINVAL;
}
@@ -958,16 +967,16 @@ static int init_directories(BDRVVVFATState* s,
bootsector->number_of_fats=0x2; /* number of FATs */
bootsector->root_entries=cpu_to_le16(s->sectors_of_root_directory*0x10);
bootsector->total_sectors16=s->sector_count>0xffff?0:cpu_to_le16(s->sector_count);
bootsector->media_type=(s->first_sectors_number>1?0xf8:0xf0); /* media descriptor (f8=hd, f0=3.5 fd)*/
bootsector->media_type=(s->fat_type!=12?0xf8:s->sector_count==5760?0xf9:0xf8); /* media descriptor */
s->fat.pointer[0] = bootsector->media_type;
bootsector->sectors_per_fat=cpu_to_le16(s->sectors_per_fat);
bootsector->sectors_per_track = cpu_to_le16(secs);
bootsector->number_of_heads = cpu_to_le16(heads);
bootsector->sectors_per_track=cpu_to_le16(s->bs->secs);
bootsector->number_of_heads=cpu_to_le16(s->bs->heads);
bootsector->hidden_sectors=cpu_to_le32(s->first_sectors_number==1?0:0x3f);
bootsector->total_sectors=cpu_to_le32(s->sector_count>0xffff?s->sector_count:0);
/* LATER TODO: if FAT32, this is wrong */
bootsector->u.fat16.drive_number=s->first_sectors_number==1?0:0x80; /* fda=0, hda=0x80 */
bootsector->u.fat16.drive_number=s->fat_type==12?0:0x80; /* assume this is hda (TODO) */
bootsector->u.fat16.current_head=0;
bootsector->u.fat16.signature=0x29;
bootsector->u.fat16.id=cpu_to_le32(0xfabe1afd);
@@ -986,16 +995,11 @@ static BDRVVVFATState *vvv = NULL;
static int enable_write_target(BDRVVVFATState *s);
static int is_consistent(BDRVVVFATState *s);
static void vvfat_rebind(BlockDriverState *bs)
{
BDRVVVFATState *s = bs->opaque;
s->bs = bs;
}
static int vvfat_open(BlockDriverState *bs, const char* dirname, int flags)
{
BDRVVVFATState *s = bs->opaque;
int i, cyls, heads, secs;
int floppy = 0;
int i;
#ifdef DEBUG
vvv = s;
@@ -1008,8 +1012,11 @@ DLOG(if (stderr == NULL) {
s->bs = bs;
s->fat_type=16;
/* LATER TODO: if FAT32, adjust */
s->sectors_per_cluster=0x10;
/* 504MB disk*/
bs->cyls=1024; bs->heads=16; bs->secs=63;
s->current_cluster=0xffffffff;
@@ -1024,6 +1031,16 @@ DLOG(if (stderr == NULL) {
if (!strstart(dirname, "fat:", NULL))
return -1;
if (strstr(dirname, ":floppy:")) {
floppy = 1;
s->fat_type = 12;
s->first_sectors_number = 1;
s->sectors_per_cluster=2;
bs->cyls = 80; bs->heads = 2; bs->secs = 36;
}
s->sector_count=bs->cyls*bs->heads*bs->secs;
if (strstr(dirname, ":32:")) {
fprintf(stderr, "Big fat greek warning: FAT32 has not been tested. You are welcome to do so!\n");
s->fat_type = 32;
@@ -1031,35 +1048,9 @@ DLOG(if (stderr == NULL) {
s->fat_type = 16;
} else if (strstr(dirname, ":12:")) {
s->fat_type = 12;
s->sector_count=2880;
}
if (strstr(dirname, ":floppy:")) {
/* 1.44MB or 2.88MB floppy. 2.88MB can be FAT12 (default) or FAT16. */
if (!s->fat_type) {
s->fat_type = 12;
secs = 36;
s->sectors_per_cluster=2;
} else {
secs = s->fat_type == 12 ? 18 : 36;
s->sectors_per_cluster=1;
}
s->first_sectors_number = 1;
cyls = 80;
heads = 2;
} else {
/* 32MB or 504MB disk*/
if (!s->fat_type) {
s->fat_type = 16;
}
cyls = s->fat_type == 12 ? 64 : 1024;
heads = 16;
secs = 63;
}
fprintf(stderr, "vvfat %s chs %d,%d,%d\n",
dirname, cyls, heads, secs);
s->sector_count = cyls * heads * secs - (s->first_sectors_number - 1);
if (strstr(dirname, ":rw:")) {
if (enable_write_target(s))
return -1;
@@ -1074,29 +1065,21 @@ DLOG(if (stderr == NULL) {
else
dirname += i+1;
bs->total_sectors = cyls * heads * secs;
bs->total_sectors=bs->cyls*bs->heads*bs->secs;
if (init_directories(s, dirname, heads, secs)) {
if(init_directories(s, dirname))
return -1;
}
s->sector_count = s->faked_sectors + s->sectors_per_cluster*s->cluster_count;
if (s->first_sectors_number == 0x40) {
init_mbr(s, cyls, heads, secs);
}
if(s->first_sectors_number==0x40)
init_mbr(s);
/* for some reason or other, MS-DOS does not like to know about CHS... */
if (floppy)
bs->heads = bs->cyls = bs->secs = 0;
// assert(is_consistent(s));
qemu_co_mutex_init(&s->lock);
/* Disable migration when vvfat is used rw */
if (s->qcow) {
error_set(&s->migration_blocker,
QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
"vvfat (rw)", bs->device_name, "live migration");
migrate_add_blocker(s->migration_blocker);
}
return 0;
}
@@ -1105,7 +1088,7 @@ static inline void vvfat_close_current_file(BDRVVVFATState *s)
if(s->current_mapping) {
s->current_mapping = NULL;
if (s->current_fd) {
qemu_close(s->current_fd);
close(s->current_fd);
s->current_fd = 0;
}
}
@@ -1155,6 +1138,25 @@ static inline mapping_t* find_mapping_for_cluster(BDRVVVFATState* s,int cluster_
return mapping;
}
/*
* This function simply compares path == mapping->path. Since the mappings
* are sorted by cluster, this is expensive: O(n).
*/
static inline mapping_t* find_mapping_for_path(BDRVVVFATState* s,
const char* path)
{
int i;
for (i = 0; i < s->mapping.next; i++) {
mapping_t* mapping = array_get(&(s->mapping), i);
if (mapping->first_mapping_index < 0 &&
!strcmp(path, mapping->path))
return mapping;
}
return NULL;
}
static int open_file(BDRVVVFATState* s,mapping_t* mapping)
{
if(!mapping)
@@ -1162,7 +1164,7 @@ static int open_file(BDRVVVFATState* s,mapping_t* mapping)
if(!s->current_mapping ||
strcmp(s->current_mapping->path,mapping->path)) {
/* open file */
int fd = qemu_open(mapping->path, O_RDONLY | O_BINARY | O_LARGEFILE);
int fd = open(mapping->path, O_RDONLY | O_BINARY | O_LARGEFILE);
if(fd<0)
return -1;
vvfat_close_current_file(s);
@@ -1221,6 +1223,23 @@ read_cluster_directory:
}
#ifdef DEBUG
static void hexdump(const void* address, uint32_t len)
{
const unsigned char* p = address;
int i, j;
for (i = 0; i < len; i += 16) {
for (j = 0; j < 16 && i + j < len; j++)
fprintf(stderr, "%02x ", p[i + j]);
for (; j < 16; j++)
fprintf(stderr, " ");
fprintf(stderr, " ");
for (j = 0; j < 16 && i + j < len; j++)
fprintf(stderr, "%c", (p[i + j] < ' ' || p[i + j] > 0x7f) ? '.' : p[i + j]);
fprintf(stderr, "\n");
}
}
static void print_direntry(const direntry_t* direntry)
{
int j = 0;
@@ -1274,19 +1293,19 @@ static int vvfat_read(BlockDriverState *bs, int64_t sector_num,
int i;
for(i=0;i<nb_sectors;i++,sector_num++) {
if (sector_num >= bs->total_sectors)
if (sector_num >= s->sector_count)
return -1;
if (s->qcow) {
int n;
if (bdrv_is_allocated(s->qcow, sector_num, nb_sectors-i, &n)) {
if (s->qcow->drv->bdrv_is_allocated(s->qcow,
sector_num, nb_sectors-i, &n)) {
DLOG(fprintf(stderr, "sectors %d+%d allocated\n", (int)sector_num, n));
if (bdrv_read(s->qcow, sector_num, buf + i*0x200, n)) {
return -1;
}
i += n - 1;
sector_num += n - 1;
continue;
}
if (s->qcow->drv->bdrv_read(s->qcow, sector_num, buf+i*0x200, n))
return -1;
i += n - 1;
sector_num += n - 1;
continue;
}
DLOG(fprintf(stderr, "sector %d not allocated\n", (int)sector_num));
}
if(sector_num<s->faked_sectors) {
@@ -1300,7 +1319,7 @@ DLOG(fprintf(stderr, "sector %d not allocated\n", (int)sector_num));
uint32_t sector=sector_num-s->faked_sectors,
sector_offset_in_cluster=(sector%s->sectors_per_cluster),
cluster_num=sector/s->sectors_per_cluster;
if(cluster_num > s->cluster_count || read_cluster(s, cluster_num) != 0) {
if(read_cluster(s, cluster_num) != 0) {
/* LATER TODO: strict: return -1; */
memset(buf+i*0x200,0,0x200);
continue;
@@ -1311,17 +1330,6 @@ DLOG(fprintf(stderr, "sector %d not allocated\n", (int)sector_num));
return 0;
}
static coroutine_fn int vvfat_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVVFATState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vvfat_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
/* LATER TODO: statify all functions */
/*
@@ -1367,7 +1375,7 @@ DLOG(fprintf(stderr, "clear_commits (%d commits)\n", s->commits.next));
assert(commit->path || commit->action == ACTION_WRITEOUT);
if (commit->action != ACTION_WRITEOUT) {
assert(commit->path);
g_free(commit->path);
free(commit->path);
} else
assert(commit->path == NULL);
}
@@ -1540,7 +1548,7 @@ static inline int cluster_was_modified(BDRVVVFATState* s, uint32_t cluster_num)
return 0;
for (i = 0; !was_modified && i < s->sectors_per_cluster; i++)
was_modified = bdrv_is_allocated(s->qcow,
was_modified = s->qcow->drv->bdrv_is_allocated(s->qcow,
cluster2sector(s, cluster_num) + i, 1, &dummy);
return was_modified;
@@ -1630,10 +1638,10 @@ static uint32_t get_cluster_count_for_direntry(BDRVVVFATState* s,
/* rename */
if (strcmp(basename, basename2))
schedule_rename(s, cluster_num, g_strdup(path));
schedule_rename(s, cluster_num, qemu_strdup(path));
} else if (is_file(direntry))
/* new file */
schedule_new_file(s, g_strdup(path), cluster_num);
schedule_new_file(s, qemu_strdup(path), cluster_num);
else {
abort();
return 0;
@@ -1689,16 +1697,16 @@ static uint32_t get_cluster_count_for_direntry(BDRVVVFATState* s,
int64_t offset = cluster2sector(s, cluster_num);
vvfat_close_current_file(s);
for (i = 0; i < s->sectors_per_cluster; i++) {
if (!bdrv_is_allocated(s->qcow, offset + i, 1, &dummy)) {
if (vvfat_read(s->bs, offset, s->cluster_buffer, 1)) {
return -1;
}
if (bdrv_write(s->qcow, offset, s->cluster_buffer, 1)) {
return -2;
}
}
}
for (i = 0; i < s->sectors_per_cluster; i++)
if (!s->qcow->drv->bdrv_is_allocated(s->qcow,
offset + i, 1, &dummy)) {
if (vvfat_read(s->bs,
offset, s->cluster_buffer, 1))
return -1;
if (s->qcow->drv->bdrv_write(s->qcow,
offset, s->cluster_buffer, 1))
return -2;
}
}
}
@@ -1727,13 +1735,13 @@ static int check_directory_consistency(BDRVVVFATState *s,
int cluster_num, const char* path)
{
int ret = 0;
unsigned char* cluster = g_malloc(s->cluster_size);
unsigned char* cluster = qemu_malloc(s->cluster_size);
direntry_t* direntries = (direntry_t*)cluster;
mapping_t* mapping = find_mapping_for_cluster(s, cluster_num);
long_file_name lfn;
int path_len = strlen(path);
char path2[PATH_MAX + 1];
char path2[PATH_MAX];
assert(path_len < PATH_MAX); /* len was tested before! */
pstrcpy(path2, sizeof(path2), path);
@@ -1750,10 +1758,10 @@ static int check_directory_consistency(BDRVVVFATState *s,
mapping->mode &= ~MODE_DELETED;
if (strcmp(basename, basename2))
schedule_rename(s, cluster_num, g_strdup(path));
schedule_rename(s, cluster_num, qemu_strdup(path));
} else
/* new directory */
schedule_mkdir(s, cluster_num, g_strdup(path));
schedule_mkdir(s, cluster_num, qemu_strdup(path));
lfn_init(&lfn);
do {
@@ -1774,14 +1782,14 @@ DLOG(fprintf(stderr, "read cluster %d (sector %d)\n", (int)cluster_num, (int)clu
if (subret) {
fprintf(stderr, "Error fetching direntries\n");
fail:
g_free(cluster);
free(cluster);
return 0;
}
for (i = 0; i < 0x10 * s->sectors_per_cluster; i++) {
int cluster_count = 0;
DLOG(fprintf(stderr, "check direntry %d:\n", i); print_direntry(direntries + i));
DLOG(fprintf(stderr, "check direntry %d: \n", i); print_direntry(direntries + i));
if (is_volume_label(direntries + i) || is_dot(direntries + i) ||
is_free(direntries + i))
continue;
@@ -1842,7 +1850,7 @@ DLOG(fprintf(stderr, "check direntry %d:\n", i); print_direntry(direntries + i))
cluster_num = modified_fat_get(s, cluster_num);
} while(!fat_eof(s, cluster_num));
g_free(cluster);
free(cluster);
return ret;
}
@@ -1868,7 +1876,7 @@ DLOG(checkpoint());
*/
if (s->fat2 == NULL) {
int size = 0x200 * s->sectors_per_fat;
s->fat2 = g_malloc(size);
s->fat2 = qemu_malloc(size);
memcpy(s->fat2, s->fat.pointer, size);
}
check = vvfat_read(s->bs,
@@ -1987,9 +1995,8 @@ static int remove_mapping(BDRVVVFATState* s, int mapping_index)
mapping_t* first_mapping = array_get(&(s->mapping), 0);
/* free mapping */
if (mapping->first_mapping_index < 0) {
g_free(mapping->path);
}
if (mapping->first_mapping_index < 0)
free(mapping->path);
/* remove from s->mapping */
array_remove(&(s->mapping), mapping_index);
@@ -2211,7 +2218,7 @@ static int commit_one_file(BDRVVVFATState* s,
uint32_t first_cluster = c;
mapping_t* mapping = find_mapping_for_cluster(s, c);
uint32_t size = filesize_of_direntry(direntry);
char* cluster = g_malloc(s->cluster_size);
char* cluster = qemu_malloc(s->cluster_size);
uint32_t i;
int fd = 0;
@@ -2221,20 +2228,15 @@ static int commit_one_file(BDRVVVFATState* s,
for (i = s->cluster_size; i < offset; i += s->cluster_size)
c = modified_fat_get(s, c);
fd = qemu_open(mapping->path, O_RDWR | O_CREAT | O_BINARY, 0666);
fd = open(mapping->path, O_RDWR | O_CREAT | O_BINARY, 0666);
if (fd < 0) {
fprintf(stderr, "Could not open %s... (%s, %d)\n", mapping->path,
strerror(errno), errno);
g_free(cluster);
return fd;
}
if (offset > 0) {
if (lseek(fd, offset, SEEK_SET) != offset) {
qemu_close(fd);
g_free(cluster);
return -3;
}
}
if (offset > 0)
if (lseek(fd, offset, SEEK_SET) != offset)
return -3;
while (offset < size) {
uint32_t c1;
@@ -2250,17 +2252,11 @@ static int commit_one_file(BDRVVVFATState* s,
ret = vvfat_read(s->bs, cluster2sector(s, c),
(uint8_t*)cluster, (rest_size + 0x1ff) / 0x200);
if (ret < 0) {
qemu_close(fd);
g_free(cluster);
return ret;
}
if (ret < 0)
return ret;
if (write(fd, cluster, rest_size) < 0) {
qemu_close(fd);
g_free(cluster);
return -2;
}
if (write(fd, cluster, rest_size) < 0)
return -2;
offset += rest_size;
c = c1;
@@ -2268,12 +2264,10 @@ static int commit_one_file(BDRVVVFATState* s,
if (ftruncate(fd, size)) {
perror("ftruncate()");
qemu_close(fd);
g_free(cluster);
close(fd);
return -4;
}
qemu_close(fd);
g_free(cluster);
close(fd);
return commit_mappings(s, first_cluster, dir_index);
}
@@ -2389,7 +2383,7 @@ static int handle_renames_and_mkdirs(BDRVVVFATState* s)
mapping_t* m = find_mapping_for_cluster(s,
begin_of_direntry(d));
int l = strlen(m->path);
char* new_path = g_malloc(l + diff + 1);
char* new_path = qemu_malloc(l + diff + 1);
assert(!strncmp(m->path, mapping->path, l2));
@@ -2405,7 +2399,7 @@ static int handle_renames_and_mkdirs(BDRVVVFATState* s)
}
}
g_free(old_path);
free(old_path);
array_remove(&(s->commits), i);
continue;
} else if (commit->action == ACTION_MKDIR) {
@@ -2646,9 +2640,7 @@ static int do_commit(BDRVVVFATState* s)
return ret;
}
if (s->qcow->drv->bdrv_make_empty) {
s->qcow->drv->bdrv_make_empty(s->qcow);
}
s->qcow->drv->bdrv_make_empty(s->qcow);
memset(s->used_clusters, 0, sector2cluster(s, s->sector_count));
@@ -2743,7 +2735,7 @@ DLOG(checkpoint());
* Use qcow backend. Commit later.
*/
DLOG(fprintf(stderr, "Write to qcow backend: %d + %d\n", (int)sector_num, nb_sectors));
ret = bdrv_write(s->qcow, sector_num, buf, nb_sectors);
ret = s->qcow->drv->bdrv_write(s->qcow, sector_num, buf, nb_sectors);
if (ret < 0) {
fprintf(stderr, "Error writing to qcow backend\n");
return ret;
@@ -2762,18 +2754,7 @@ DLOG(checkpoint());
return 0;
}
static coroutine_fn int vvfat_co_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
int ret;
BDRVVVFATState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = vvfat_write(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static int coroutine_fn vvfat_co_is_allocated(BlockDriverState *bs,
static int vvfat_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int* n)
{
BDRVVVFATState* s = bs->opaque;
@@ -2794,7 +2775,7 @@ static int write_target_commit(BlockDriverState *bs, int64_t sector_num,
static void write_target_close(BlockDriverState *bs) {
BDRVVVFATState* s = *((BDRVVVFATState**) bs->opaque);
bdrv_delete(s->qcow);
g_free(s->qcow_filename);
free(s->qcow_filename);
}
static BlockDriver vvfat_write_target = {
@@ -2813,13 +2794,8 @@ static int enable_write_target(BDRVVVFATState *s)
array_init(&(s->commits), sizeof(commit_t));
s->qcow_filename = g_malloc(1024);
ret = get_tmp_filename(s->qcow_filename, 1024);
if (ret < 0) {
g_free(s->qcow_filename);
s->qcow_filename = NULL;
return ret;
}
s->qcow_filename = qemu_malloc(1024);
get_tmp_filename(s->qcow_filename, 1024);
bdrv_qcow = bdrv_find_format("qcow");
options = parse_option_parameters("", bdrv_qcow->create_options, NULL);
@@ -2846,7 +2822,7 @@ static int enable_write_target(BDRVVVFATState *s)
s->bs->backing_hd = calloc(sizeof(BlockDriverState), 1);
s->bs->backing_hd->drv = &vvfat_write_target;
s->bs->backing_hd->opaque = g_malloc(sizeof(void*));
s->bs->backing_hd->opaque = qemu_malloc(sizeof(void*));
*(void**)s->bs->backing_hd->opaque = s;
return 0;
@@ -2860,23 +2836,18 @@ static void vvfat_close(BlockDriverState *bs)
array_free(&(s->fat));
array_free(&(s->directory));
array_free(&(s->mapping));
g_free(s->cluster_buffer);
if (s->qcow) {
migrate_del_blocker(s->migration_blocker);
error_free(s->migration_blocker);
}
if(s->cluster_buffer)
free(s->cluster_buffer);
}
static BlockDriver bdrv_vvfat = {
.format_name = "vvfat",
.instance_size = sizeof(BDRVVVFATState),
.bdrv_file_open = vvfat_open,
.bdrv_rebind = vvfat_rebind,
.bdrv_read = vvfat_co_read,
.bdrv_write = vvfat_co_write,
.bdrv_read = vvfat_read,
.bdrv_write = vvfat_write,
.bdrv_close = vvfat_close,
.bdrv_co_is_allocated = vvfat_co_is_allocated,
.bdrv_is_allocated = vvfat_is_allocated,
.protocol_name = "fat",
};
@@ -2907,5 +2878,11 @@ static void checkpoint(void) {
direntry = array_get(&(vvv->directory), mapping->dir_index);
assert(!memcmp(direntry->name, "USB H ", 11) || direntry->name[0]==0);
#endif
return;
/* avoid compiler warnings: */
hexdump(NULL, 100);
remove_mapping(vvv, 0);
print_mapping(NULL);
print_direntry(NULL);
}
#endif
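
For reference, the hexdump() debug helper shown in this file's diff can be exercised on its own. The following is a minimal standalone adaptation; the sample buffer, main() and the three-space padding string are added here for illustration and are not part of the QEMU source.

#include <stdint.h>
#include <stdio.h>

/* Same layout as the vvfat DEBUG helper above: 16 bytes per row,
 * hex on the left, printable ASCII (or '.') on the right. */
static void hexdump(const void *address, uint32_t len)
{
    const unsigned char *p = address;
    uint32_t i, j;

    for (i = 0; i < len; i += 16) {
        for (j = 0; j < 16 && i + j < len; j++)
            fprintf(stderr, "%02x ", p[i + j]);
        for (; j < 16; j++)
            fprintf(stderr, "   ");
        fprintf(stderr, " ");
        for (j = 0; j < 16 && i + j < len; j++)
            fprintf(stderr, "%c",
                    (p[i + j] < ' ' || p[i + j] > 0x7f) ? '.' : p[i + j]);
        fprintf(stderr, "\n");
    }
}

int main(void)
{
    const char sample[] = "QEMU vvfat debug\x01\x02\x03";
    hexdump(sample, sizeof(sample));   /* 20 bytes: printed as two rows */
    return 0;
}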

View File

@@ -27,112 +27,25 @@
#include "block.h"
#include "qemu-option.h"
#include "qemu-queue.h"
#include "qemu-coroutine.h"
#include "qemu-timer.h"
#include "qapi-types.h"
#include "qerror.h"
#define BLOCK_FLAG_ENCRYPT 1
#define BLOCK_FLAG_COMPAT6 4
#define BLOCK_FLAG_LAZY_REFCOUNTS 8
#define BLOCK_FLAG_ENCRYPT 1
#define BLOCK_FLAG_COMPAT6 4
#define BLOCK_IO_LIMIT_READ 0
#define BLOCK_IO_LIMIT_WRITE 1
#define BLOCK_IO_LIMIT_TOTAL 2
#define BLOCK_OPT_SIZE "size"
#define BLOCK_OPT_ENCRYPT "encryption"
#define BLOCK_OPT_COMPAT6 "compat6"
#define BLOCK_OPT_BACKING_FILE "backing_file"
#define BLOCK_OPT_BACKING_FMT "backing_fmt"
#define BLOCK_OPT_CLUSTER_SIZE "cluster_size"
#define BLOCK_OPT_TABLE_SIZE "table_size"
#define BLOCK_OPT_PREALLOC "preallocation"
#define BLOCK_OPT_SUBFMT "subformat"
#define BLOCK_IO_SLICE_TIME 100000000
#define NANOSECONDS_PER_SECOND 1000000000.0
#define BLOCK_OPT_SIZE "size"
#define BLOCK_OPT_ENCRYPT "encryption"
#define BLOCK_OPT_COMPAT6 "compat6"
#define BLOCK_OPT_BACKING_FILE "backing_file"
#define BLOCK_OPT_BACKING_FMT "backing_fmt"
#define BLOCK_OPT_CLUSTER_SIZE "cluster_size"
#define BLOCK_OPT_TABLE_SIZE "table_size"
#define BLOCK_OPT_PREALLOC "preallocation"
#define BLOCK_OPT_SUBFMT "subformat"
#define BLOCK_OPT_COMPAT_LEVEL "compat"
#define BLOCK_OPT_LAZY_REFCOUNTS "lazy_refcounts"
typedef struct BdrvTrackedRequest BdrvTrackedRequest;
typedef struct BlockIOLimit {
int64_t bps[3];
int64_t iops[3];
} BlockIOLimit;
typedef struct BlockIOBaseValue {
uint64_t bytes[2];
uint64_t ios[2];
} BlockIOBaseValue;
typedef struct BlockJob BlockJob;
/**
* BlockJobType:
*
* A class type for block job objects.
*/
typedef struct BlockJobType {
/** Derived BlockJob struct size */
size_t instance_size;
/** String describing the operation, part of query-block-jobs QMP API */
const char *job_type;
/** Optional callback for job types that support setting a speed limit */
void (*set_speed)(BlockJob *job, int64_t speed, Error **errp);
} BlockJobType;
/**
* BlockJob:
*
* Long-running operation on a BlockDriverState.
*/
struct BlockJob {
/** The job type, including the job vtable. */
const BlockJobType *job_type;
/** The block device on which the job is operating. */
BlockDriverState *bs;
/**
* The coroutine that executes the job. If not NULL, it is
* reentered when busy is false and the job is cancelled.
*/
Coroutine *co;
/**
* Set to true if the job should cancel itself. The flag must
* always be tested just before toggling the busy flag from false
* to true. After a job has been cancelled, it should only yield
* if #qemu_aio_wait will ("sooner or later") reenter the coroutine.
*/
bool cancelled;
/**
* Set to false by the job while it is in a quiescent state, where
* no I/O is pending and the job has yielded on any condition
* that is not detected by #qemu_aio_wait, such as a timer.
*/
bool busy;
/** Offset that is published by the query-block-jobs QMP API */
int64_t offset;
/** Length that is published by the query-block-jobs QMP API */
int64_t len;
/** Speed that was set with @block_job_set_speed. */
int64_t speed;
/** The completion function that will be called when the job completes. */
BlockDriverCompletionFunc *cb;
/** The opaque value that is passed to the completion function. */
void *opaque;
};
typedef struct AIOPool {
void (*cancel)(BlockDriverAIOCB *acb);
int aiocb_size;
BlockDriverAIOCB *free_aiocb;
} AIOPool;
struct BlockDriver {
const char *format_name;
@@ -146,8 +59,10 @@ struct BlockDriver {
int (*bdrv_write)(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
void (*bdrv_close)(BlockDriverState *bs);
void (*bdrv_rebind)(BlockDriverState *bs);
int (*bdrv_create)(const char *filename, QEMUOptionParameter *options);
int (*bdrv_flush)(BlockDriverState *bs);
int (*bdrv_is_allocated)(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
int (*bdrv_set_key)(BlockDriverState *bs, const char *key);
int (*bdrv_make_empty)(BlockDriverState *bs);
/* aio */
@@ -159,44 +74,14 @@ struct BlockDriver {
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *(*bdrv_aio_flush)(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *(*bdrv_aio_discard)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque);
int (*bdrv_discard)(BlockDriverState *bs, int64_t sector_num,
int nb_sectors);
int coroutine_fn (*bdrv_co_readv)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
int coroutine_fn (*bdrv_co_writev)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov);
/*
* Efficiently zero a region of the disk image. Typically an image format
* would use a compact metadata representation to implement this. This
* function pointer may be NULL and .bdrv_co_writev() will be called
* instead.
*/
int coroutine_fn (*bdrv_co_write_zeroes)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors);
int coroutine_fn (*bdrv_co_discard)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors);
int coroutine_fn (*bdrv_co_is_allocated)(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum);
int (*bdrv_aio_multiwrite)(BlockDriverState *bs, BlockRequest *reqs,
int num_reqs);
int (*bdrv_merge_requests)(BlockDriverState *bs, BlockRequest* a,
BlockRequest *b);
/*
* Invalidate any cached meta-data.
*/
void (*bdrv_invalidate_cache)(BlockDriverState *bs);
/*
* Flushes all data that was already written to the OS all the way down to
* the disk (for example raw-posix calls fsync()).
*/
int coroutine_fn (*bdrv_co_flush_to_disk)(BlockDriverState *bs);
/*
* Flushes all internal caches to the OS. The data may still sit in a
* writeback cache of the host OS, but it will survive a crash of the qemu
* process.
*/
int coroutine_fn (*bdrv_co_flush_to_os)(BlockDriverState *bs);
const char *protocol_name;
int (*bdrv_truncate)(BlockDriverState *bs, int64_t offset);
@@ -227,8 +112,8 @@ struct BlockDriver {
/* removable device specific */
int (*bdrv_is_inserted)(BlockDriverState *bs);
int (*bdrv_media_changed)(BlockDriverState *bs);
void (*bdrv_eject)(BlockDriverState *bs, bool eject_flag);
void (*bdrv_lock_medium)(BlockDriverState *bs, bool locked);
int (*bdrv_eject)(BlockDriverState *bs, int eject_flag);
int (*bdrv_set_locked)(BlockDriverState *bs, int locked);
/* to control generic scsi devices */
int (*bdrv_ioctl)(BlockDriverState *bs, unsigned long int req, void *buf);
@@ -244,8 +129,7 @@ struct BlockDriver {
* Returns 0 for completed check, -errno for internal errors.
* The check results are stored in result.
*/
int (*bdrv_check)(BlockDriverState* bs, BdrvCheckResult *result,
BdrvCheckMode fix);
int (*bdrv_check)(BlockDriverState* bs, BdrvCheckResult *result);
void (*bdrv_debug_event)(BlockDriverState *bs, BlkDebugEvent event);
@@ -258,58 +142,46 @@ struct BlockDriver {
QLIST_ENTRY(BlockDriver) list;
};
/*
* Note: the function bdrv_append() copies and swaps contents of
* BlockDriverStates, so if you add new fields to this struct, please
* inspect bdrv_append() to determine if the new fields need to be
* copied as well.
*/
struct BlockDriverState {
int64_t total_sectors; /* if we are reading a disk image, give its
size in sectors */
int read_only; /* if true, the media is read only */
int keep_read_only; /* if true, the media was requested to stay read only */
int open_flags; /* flags used to open the file, re-used for re-open */
int removable; /* if true, the media can be removed */
int locked; /* if true, the media cannot temporarily be ejected */
int tray_open; /* if true, the virtual tray is open */
int encrypted; /* if true, the media is encrypted */
int valid_key; /* if true, a valid encryption key has been set */
int sg; /* if true, the device is a /dev/sg* */
int copy_on_read; /* if true, copy read backing sectors into image
note this is a reference count */
/* event callback when inserting/removing */
void (*change_cb)(void *opaque, int reason);
void *change_opaque;
BlockDriver *drv; /* NULL means no media */
void *opaque;
void *dev; /* attached device model, if any */
/* TODO change to DeviceState when all users are qdevified */
const BlockDevOps *dev_ops;
void *dev_opaque;
DeviceState *peer;
char filename[1024];
char backing_file[1024]; /* if non zero, the image is a diff of
this file image */
char backing_format[16]; /* if non-zero and backing_file exists */
int is_temporary;
int media_changed;
BlockDriverState *backing_hd;
BlockDriverState *file;
/* number of in-flight copy-on-read requests */
unsigned int copy_on_read_in_flight;
/* async read/write emulation */
/* the time for latest disk I/O */
int64_t slice_time;
int64_t slice_start;
int64_t slice_end;
BlockIOLimit io_limits;
BlockIOBaseValue io_base;
CoQueue throttled_reqs;
QEMUTimer *block_timer;
bool io_limits_enabled;
void *sync_aiocb;
/* I/O stats (display with "info blockstats"). */
uint64_t nr_bytes[BDRV_MAX_IOTYPE];
uint64_t nr_ops[BDRV_MAX_IOTYPE];
uint64_t total_time_ns[BDRV_MAX_IOTYPE];
uint64_t rd_bytes;
uint64_t wr_bytes;
uint64_t rd_ops;
uint64_t wr_ops;
uint64_t wr_highest_sector;
/* Whether the disk can expand beyond total_sectors */
@@ -323,137 +195,72 @@ struct BlockDriverState {
/* NOTE: the following infos are only hints for real hardware
drivers. They are not used by the block driver */
int cyls, heads, secs, translation;
BlockErrorAction on_read_error, on_write_error;
bool iostatus_enabled;
BlockDeviceIoStatus iostatus;
char device_name[32];
unsigned long *dirty_bitmap;
int64_t dirty_count;
int in_use; /* users other than guest access, eg. block migration */
QTAILQ_ENTRY(BlockDriverState) list;
QLIST_HEAD(, BdrvTrackedRequest) tracked_requests;
/* long-running background operation */
BlockJob *job;
void *private;
};
int get_tmp_filename(char *filename, int size);
#define CHANGE_MEDIA 0x01
#define CHANGE_SIZE 0x02
void bdrv_set_io_limits(BlockDriverState *bs,
BlockIOLimit *io_limits);
struct BlockDriverAIOCB {
AIOPool *pool;
BlockDriverState *bs;
BlockDriverCompletionFunc *cb;
void *opaque;
BlockDriverAIOCB *next;
};
void get_tmp_filename(char *filename, int size);
void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
void qemu_aio_release(void *p);
void *qemu_blockalign(BlockDriverState *bs, size_t size);
#ifdef _WIN32
int is_windows_drive(const char *filename);
#endif
/**
* block_job_create:
* @job_type: The class object for the newly-created job.
* @bs: The block
* @speed: The maximum speed, in bytes per second, or 0 for unlimited.
* @cb: Completion function for the job.
* @opaque: Opaque pointer value passed to @cb.
* @errp: Error object.
*
* Create a new long-running block device job and return it. The job
* will call @cb asynchronously when the job completes. Note that
* @bs may have been closed at the time the @cb it is called. If
* this is the case, the job may be reported as either cancelled or
* completed.
*
* This function is not part of the public job interface; it should be
* called from a wrapper that is specific to the job type.
*/
void *block_job_create(const BlockJobType *job_type, BlockDriverState *bs,
int64_t speed, BlockDriverCompletionFunc *cb,
void *opaque, Error **errp);
typedef struct BlockConf {
BlockDriverState *bs;
uint16_t physical_block_size;
uint16_t logical_block_size;
uint16_t min_io_size;
uint32_t opt_io_size;
int32_t bootindex;
uint32_t discard_granularity;
} BlockConf;
/**
* block_job_sleep_ns:
* @job: The job that calls the function.
* @clock: The clock to sleep on.
* @ns: How many nanoseconds to stop for.
*
* Put the job to sleep (assuming that it wasn't canceled) for @ns
* nanoseconds. Canceling the job will interrupt the wait immediately.
*/
void block_job_sleep_ns(BlockJob *job, QEMUClock *clock, int64_t ns);
static inline unsigned int get_physical_block_exp(BlockConf *conf)
{
unsigned int exp = 0, size;
/**
* block_job_complete:
* @job: The job being completed.
* @ret: The status code.
*
* Call the completion function that was registered at creation time, and
* free @job.
*/
void block_job_complete(BlockJob *job, int ret);
for (size = conf->physical_block_size;
size > conf->logical_block_size;
size >>= 1) {
exp++;
}
/**
* block_job_set_speed:
* @job: The job to set the speed for.
* @speed: The new value
* @errp: Error object.
*
* Set a rate-limiting parameter for the job; the actual meaning may
* vary depending on the job type.
*/
void block_job_set_speed(BlockJob *job, int64_t speed, Error **errp);
return exp;
}
/**
* block_job_cancel:
* @job: The job to be canceled.
*
* Asynchronously cancel the specified job.
*/
void block_job_cancel(BlockJob *job);
/**
* block_job_is_cancelled:
* @job: The job being queried.
*
* Returns whether the job is scheduled for cancellation.
*/
bool block_job_is_cancelled(BlockJob *job);
/**
* block_job_cancel:
* @job: The job to be canceled.
*
* Asynchronously cancel the job and wait for it to reach a quiescent
* state. Note that the completion callback will still be called
* asynchronously, hence it is *not* valid to call #bdrv_delete
* immediately after #block_job_cancel_sync. Users of block jobs
* will usually protect the BlockDriverState objects with a reference
* count, should this be a concern.
*
* Returns the return value from the job if the job actually completed
* during the call, or -ECANCELED if it was canceled.
*/
int block_job_cancel_sync(BlockJob *job);
/**
* stream_start:
* @bs: Block device to operate on.
* @base: Block device that will become the new base, or %NULL to
* flatten the whole backing file chain onto @bs.
* @base_id: The file name that will be written to @bs as the new
* backing file if the job completes. Ignored if @base is %NULL.
* @speed: The maximum speed, in bytes per second, or 0 for unlimited.
* @cb: Completion function for the job.
* @opaque: Opaque pointer value passed to @cb.
* @errp: Error object.
*
* Start a streaming operation on @bs. Clusters that are unallocated
* in @bs, but allocated in any image between @base and @bs (both
* exclusive) will be written to @bs. At the end of a successful
* streaming job, the backing file of @bs will be changed to
* @base_id in the written image and to @base in the live BlockDriverState.
*/
void stream_start(BlockDriverState *bs, BlockDriverState *base,
const char *base_id, int64_t speed,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp);
#define DEFINE_BLOCK_PROPERTIES(_state, _conf) \
DEFINE_PROP_DRIVE("drive", _state, _conf.bs), \
DEFINE_PROP_UINT16("logical_block_size", _state, \
_conf.logical_block_size, 512), \
DEFINE_PROP_UINT16("physical_block_size", _state, \
_conf.physical_block_size, 512), \
DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0), \
DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0), \
DEFINE_PROP_INT32("bootindex", _state, _conf.bootindex, -1), \
DEFINE_PROP_UINT32("discard_granularity", _state, \
_conf.discard_granularity, 0)
#endif /* BLOCK_INT_H */
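
The get_physical_block_exp() helper that appears in this header's diff reports the physical block size as a power-of-two exponent relative to the logical block size. Below is a minimal standalone check of that computation; BlockConf is reduced to the two fields the helper reads, and the example sizes are illustrative.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Reduced stand-in for BlockConf: only the fields the helper reads. */
typedef struct {
    uint16_t physical_block_size;
    uint16_t logical_block_size;
} BlockConf;

static unsigned int get_physical_block_exp(BlockConf *conf)
{
    unsigned int exp = 0, size;

    for (size = conf->physical_block_size;
         size > conf->logical_block_size;
         size >>= 1) {
        exp++;
    }
    return exp;
}

int main(void)
{
    BlockConf conf = { .physical_block_size = 4096, .logical_block_size = 512 };

    /* 4096 = 512 << 3, so the exponent reported is 3. */
    assert(get_physical_block_exp(&conf) == 3);
    printf("physical_block_exp = %u\n", get_physical_block_exp(&conf));
    return 0;
}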

File diff suppressed because it is too large

View File

@@ -11,12 +11,13 @@
#define BLOCKDEV_H
#include "block.h"
#include "error.h"
#include "qemu-queue.h"
void blockdev_mark_auto_del(BlockDriverState *bs);
void blockdev_auto_del(BlockDriverState *bs);
#define BLOCK_SERIAL_STRLEN 20
typedef enum {
IF_DEFAULT = -1, /* for use with drive_add() only */
IF_NONE,
@@ -33,9 +34,8 @@ struct DriveInfo {
int unit;
int auto_del; /* see blockdev_mark_auto_del() */
int media_cd;
int cyls, heads, secs, trans;
QemuOpts *opts;
const char *serial;
char serial[BLOCK_SERIAL_STRLEN + 1];
QTAILQ_ENTRY(DriveInfo) next;
int refcount;
};
@@ -57,8 +57,13 @@ DriveInfo *drive_init(QemuOpts *arg, int default_to_scsi);
DriveInfo *add_init_drive(const char *opts);
void qmp_change_blockdev(const char *device, const char *filename,
bool has_format, const char *format, Error **errp);
void do_commit(Monitor *mon, const QDict *qdict);
int do_eject(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_block_set_passwd(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_change_block(Monitor *mon, const char *device,
const char *filename, const char *fmt);
int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_snapshot_blkdev(Monitor *mon, const QDict *qdict, QObject **ret_data);
int do_block_resize(Monitor *mon, const QDict *qdict, QObject **ret_data);
#endif
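
Note the DriveInfo change in this hunk: serial switches between a borrowed const char * and an embedded char serial[BLOCK_SERIAL_STRLEN + 1] buffer. With the embedded buffer, the string has to be copied in with explicit truncation and NUL-termination, roughly as in the sketch below. The drive_set_serial() helper is hypothetical, not taken from blockdev.c.

#include <stdio.h>
#include <string.h>

#define BLOCK_SERIAL_STRLEN 20

typedef struct DriveInfo {
    char serial[BLOCK_SERIAL_STRLEN + 1];   /* embedded, NUL-terminated */
} DriveInfo;

/* Hypothetical helper: copy a user-supplied serial into the fixed-size
 * field, truncating if necessary and always NUL-terminating. */
static void drive_set_serial(DriveInfo *dinfo, const char *serial)
{
    strncpy(dinfo->serial, serial, BLOCK_SERIAL_STRLEN);
    dinfo->serial[BLOCK_SERIAL_STRLEN] = '\0';
}

int main(void)
{
    DriveInfo dinfo;

    drive_set_serial(&dinfo, "0123456789ABCDEFGHIJKLMNOP");  /* too long */
    printf("stored serial: %s (%zu chars)\n",
           dinfo.serial, strlen(dinfo.serial));              /* truncated to 20 */
    return 0;
}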

View File

@@ -1,2 +0,0 @@
obj-y = main.o bsdload.o elfload.o mmap.o signal.o strace.o syscall.o \
uaccess.o

View File

@@ -196,7 +196,7 @@ int loader_exec(const char * filename, char ** argv, char ** envp,
/* Something went wrong, return the inode and free the argument pages*/
for (i=0 ; i<MAX_ARG_PAGES ; i++) {
g_free(bprm.page[i]);
free(bprm.page[i]);
}
return(retval);
}

View File

@@ -641,7 +641,8 @@ static abi_ulong copy_elf_strings(int argc,char ** argv, void **page,
offset = p % TARGET_PAGE_SIZE;
pag = (char *)page[p/TARGET_PAGE_SIZE];
if (!pag) {
pag = g_try_malloc0(TARGET_PAGE_SIZE);
pag = (char *)malloc(TARGET_PAGE_SIZE);
memset(pag, 0, TARGET_PAGE_SIZE);
page[p/TARGET_PAGE_SIZE] = pag;
if (!pag)
return 0;
@@ -695,7 +696,7 @@ static abi_ulong setup_arg_pages(abi_ulong p, struct linux_binprm *bprm,
info->rss++;
/* FIXME - check return value of memcpy_to_target() for failure */
memcpy_to_target(stack_base, bprm->page[i], TARGET_PAGE_SIZE);
g_free(bprm->page[i]);
free(bprm->page[i]);
}
stack_base += TARGET_PAGE_SIZE;
}
@@ -993,12 +994,12 @@ static abi_ulong load_elf_interp(struct elfhdr * interp_elf_ex,
static int symfind(const void *s0, const void *s1)
{
target_ulong addr = *(target_ulong *)s0;
struct elf_sym *key = (struct elf_sym *)s0;
struct elf_sym *sym = (struct elf_sym *)s1;
int result = 0;
if (addr < sym->st_value) {
if (key->st_value < sym->st_value) {
result = -1;
} else if (addr >= sym->st_value + sym->st_size) {
} else if (key->st_value > sym->st_value + sym->st_size) {
result = 1;
}
return result;
@@ -1013,9 +1014,12 @@ static const char *lookup_symbolxx(struct syminfo *s, target_ulong orig_addr)
#endif
// binary search
struct elf_sym key;
struct elf_sym *sym;
sym = bsearch(&orig_addr, syms, s->disas_num_syms, sizeof(*syms), symfind);
key.st_value = orig_addr;
sym = bsearch(&key, syms, s->disas_num_syms, sizeof(*syms), symfind);
if (sym != NULL) {
return s->disas_strtab + sym->st_name;
}
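
The symfind() hunk above changes the bsearch() key from a bare address to a struct elf_sym whose st_value carries the address, so both sides of the comparison have the same type. The following is a self-contained sketch of that pattern; the symbol table, names and lookup address are made up for illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint64_t target_ulong;

struct elf_sym {
    target_ulong st_value;  /* symbol start address */
    target_ulong st_size;   /* symbol size in bytes */
    const char *name;       /* stand-in for st_name + strtab lookup */
};

/* Comparator for bsearch(): the key is itself a struct elf_sym, so
 * key->st_value is compared against each table entry's range. */
static int symfind(const void *s0, const void *s1)
{
    const struct elf_sym *key = s0;
    const struct elf_sym *sym = s1;

    if (key->st_value < sym->st_value) {
        return -1;
    } else if (key->st_value >= sym->st_value + sym->st_size) {
        return 1;
    }
    return 0;
}

int main(void)
{
    /* Illustrative table, sorted by st_value, non-overlapping. */
    struct elf_sym syms[] = {
        { 0x1000, 0x40, "start" },
        { 0x1040, 0x80, "main"  },
        { 0x10c0, 0x20, "exit"  },
    };
    struct elf_sym key = { .st_value = 0x1064 };  /* address inside main() */

    struct elf_sym *sym = bsearch(&key, syms, 3, sizeof(*syms), symfind);
    printf("0x%llx -> %s\n", (unsigned long long)key.st_value,
           sym ? sym->name : "<unknown>");
    return 0;
}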

View File

@@ -41,7 +41,6 @@ int singlestep;
unsigned long mmap_min_addr;
unsigned long guest_base;
int have_guest_base;
unsigned long reserved_va;
#endif
static const char *interp_prefix = CONFIG_QEMU_INTERP_PREFIX;
@@ -64,18 +63,18 @@ void gemu_log(const char *fmt, ...)
}
#if defined(TARGET_I386)
int cpu_get_pic_interrupt(CPUX86State *env)
int cpu_get_pic_interrupt(CPUState *env)
{
return -1;
}
#endif
/* These are no-ops because we are not threadsafe. */
static inline void cpu_exec_start(CPUArchState *env)
static inline void cpu_exec_start(CPUState *env)
{
}
static inline void cpu_exec_end(CPUArchState *env)
static inline void cpu_exec_end(CPUState *env)
{
}
@@ -110,7 +109,7 @@ void cpu_list_unlock(void)
/***********************************************************/
/* CPUX86 core interface */
void cpu_smm_update(CPUX86State *env)
void cpu_smm_update(CPUState *env)
{
}
@@ -681,7 +680,7 @@ static void usage(void)
"-g port wait gdb connection to port\n"
"-L path set the elf interpreter prefix (default=%s)\n"
"-s size set the stack size in bytes (default=%ld)\n"
"-cpu model select CPU (-cpu help for list)\n"
"-cpu model select CPU (-cpu ? for list)\n"
"-drop-ld-preload drop LD_PRELOAD for target process\n"
"-E var=value sets/modifies targets environment variable(s)\n"
"-U var unsets targets environment variable(s)\n"
@@ -714,7 +713,7 @@ static void usage(void)
exit(1);
}
THREAD CPUArchState *thread_env;
THREAD CPUState *thread_env;
/* Assumes contents are already zeroed. */
void init_task_state(TaskState *ts)
@@ -738,7 +737,7 @@ int main(int argc, char **argv)
struct target_pt_regs regs1, *regs = &regs1;
struct image_info info1, *info = &info1;
TaskState ts1, *ts = &ts1;
CPUArchState *env;
CPUState *env;
int optind;
const char *r;
int gdbstub_port = 0;
@@ -749,8 +748,6 @@ int main(int argc, char **argv)
if (argc <= 1)
usage();
module_call_init(MODULE_INIT_QOM);
if ((envlist = envlist_create()) == NULL) {
(void) fprintf(stderr, "Unable to allocate envlist\n");
exit(1);
@@ -825,7 +822,7 @@ int main(int argc, char **argv)
qemu_uname_release = argv[optind++];
} else if (!strcmp(r, "cpu")) {
cpu_model = argv[optind++];
if (is_help_option(cpu_model)) {
if (strcmp(cpu_model, "?") == 0) {
/* XXX: implement xxx_cpu_list for targets that still miss it */
#if defined(cpu_list)
cpu_list(stdout, &fprintf);
@@ -908,8 +905,7 @@ int main(int argc, char **argv)
cpu_model = "any";
#endif
}
tcg_exec_init(0);
cpu_exec_init_all();
cpu_exec_init_all(0);
/* NOTE: we need to init the CPU at this stage to get
qemu_host_page_size */
env = cpu_init(cpu_model);
@@ -918,7 +914,7 @@ int main(int argc, char **argv)
exit(1);
}
#if defined(TARGET_I386) || defined(TARGET_SPARC) || defined(TARGET_PPC)
cpu_reset(ENV_GET_CPU(env));
cpu_reset(env);
#endif
thread_env = env;
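
One hunk above swaps a literal strcmp(cpu_model, "?") test for is_help_option(). The sketch below is a stand-in for that style of check, assuming the helper treats both "?" and "help" as a request to list supported values; it is not copied from the QEMU implementation.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in helper: treat both "?" and "help" as a list request. */
static bool is_help_option(const char *value)
{
    return value && (!strcmp(value, "?") || !strcmp(value, "help"));
}

int main(int argc, char **argv)
{
    const char *cpu_model = argc > 1 ? argv[1] : "any";

    if (is_help_option(cpu_model)) {
        /* In QEMU this is where cpu_list() would print the supported CPUs. */
        printf("supported CPU models would be listed here\n");
        return 0;
    }
    printf("selected CPU model: %s\n", cpu_model);
    return 0;
}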

View File

@@ -94,7 +94,7 @@ void *qemu_vmalloc(size_t size)
return p;
}
void *g_malloc(size_t size)
void *qemu_malloc(size_t size)
{
char * p;
size += 16;
@@ -104,12 +104,12 @@ void *g_malloc(size_t size)
}
/* We use map, which is always zero initialized. */
void * g_malloc0(size_t size)
void * qemu_mallocz(size_t size)
{
return g_malloc(size);
return qemu_malloc(size);
}
void g_free(void *ptr)
void qemu_free(void *ptr)
{
/* FIXME: We should unmark the reserved pages here. However this gets
complicated when one target page spans multiple host pages, so we
@@ -119,18 +119,18 @@ void g_free(void *ptr)
munmap(p, *p);
}
void *g_realloc(void *ptr, size_t size)
void *qemu_realloc(void *ptr, size_t size)
{
size_t old_size, copy;
void *new_ptr;
if (!ptr)
return g_malloc(size);
return qemu_malloc(size);
old_size = *(size_t *)((char *)ptr - 16);
copy = old_size < size ? old_size : size;
new_ptr = g_malloc(size);
new_ptr = qemu_malloc(size);
memcpy(new_ptr, ptr, copy);
g_free(ptr);
qemu_free(ptr);
return new_ptr;
}
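
The allocator in this hunk hides a 16-byte header in front of every allocation so that realloc can recover the old size (old_size = *(size_t *)((char *)ptr - 16)). The sketch below reproduces that header bookkeeping over plain malloc()/free() instead of the mmap-backed path used here; it illustrates the size-prefix trick, not the bsd-user implementation.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HEADER_SIZE 16   /* keeps the returned pointer 16-byte aligned */

/* Allocate size bytes and remember the request in a hidden header. */
static void *sized_malloc(size_t size)
{
    char *p = malloc(size + HEADER_SIZE);
    if (!p) {
        return NULL;
    }
    *(size_t *)p = size;          /* stash the usable size */
    return p + HEADER_SIZE;       /* caller never sees the header */
}

static void sized_free(void *ptr)
{
    if (ptr) {
        free((char *)ptr - HEADER_SIZE);
    }
}

/* Grow or shrink by copying min(old, new) bytes, as in the diff above. */
static void *sized_realloc(void *ptr, size_t size)
{
    size_t old_size, copy;
    void *new_ptr;

    if (!ptr) {
        return sized_malloc(size);
    }
    old_size = *(size_t *)((char *)ptr - HEADER_SIZE);
    copy = old_size < size ? old_size : size;
    new_ptr = sized_malloc(size);
    if (new_ptr) {
        memcpy(new_ptr, ptr, copy);
        sized_free(ptr);
    }
    return new_ptr;
}

int main(void)
{
    char *buf = sized_malloc(8);
    strcpy(buf, "qemu");
    buf = sized_realloc(buf, 64);     /* old contents survive the resize */
    printf("%s\n", buf);
    sized_free(buf);
    return 0;
}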

View File

@@ -139,8 +139,8 @@ abi_long do_openbsd_syscall(void *cpu_env, int num, abi_long arg1,
abi_long arg2, abi_long arg3, abi_long arg4,
abi_long arg5, abi_long arg6);
void gemu_log(const char *fmt, ...) GCC_FMT_ATTR(1, 2);
extern THREAD CPUArchState *thread_env;
void cpu_loop(CPUArchState *env);
extern THREAD CPUState *thread_env;
void cpu_loop(CPUState *env);
char *target_strerror(int err);
int get_osversion(void);
void fork_start(void);
@@ -167,13 +167,13 @@ void print_openbsd_syscall_ret(int num, abi_long ret);
extern int do_strace;
/* signal.c */
void process_pending_signals(CPUArchState *cpu_env);
void process_pending_signals(CPUState *cpu_env);
void signal_init(void);
//int queue_signal(CPUArchState *env, int sig, target_siginfo_t *info);
//int queue_signal(CPUState *env, int sig, target_siginfo_t *info);
//void host_to_target_siginfo(target_siginfo_t *tinfo, const siginfo_t *info);
//void target_to_host_siginfo(siginfo_t *info, const target_siginfo_t *tinfo);
long do_sigreturn(CPUArchState *env);
long do_rt_sigreturn(CPUArchState *env);
long do_sigreturn(CPUState *env);
long do_rt_sigreturn(CPUState *env);
abi_long do_sigaltstack(abi_ulong uss_addr, abi_ulong uoss_addr, abi_ulong sp);
/* mmap.c */

Some files were not shown because too many files have changed in this diff