Compare commits

..

67 Commits

Author SHA1 Message Date
Anthony Liguori
6ed912999d Update for 0.13.0 release
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-10-14 10:00:59 -05:00
Michael S. Tsirkin
da5aeb8aeb vhost: error code
fix up errors returned to include errno, not just -1

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit c885212109)
2010-10-12 16:10:00 -05:00
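
A minimal C sketch of the convention this fix applies (hypothetical wrapper,
not the actual vhost code): return -errno on failure so callers can
distinguish error causes, instead of a flat -1.

    #include <errno.h>
    #include <sys/ioctl.h>

    /* Hypothetical wrapper: propagate -errno instead of a bare -1. */
    int vhost_ioctl_sketch(int fd, unsigned long request, void *arg)
    {
        int r = ioctl(fd, request, arg);
        if (r < 0) {
            return -errno;   /* e.g. -EBADF, -ENOTTY, -EFAULT */
        }
        return r;
    }
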
Michael S. Tsirkin
7c763827a5 virtio: change set guest notifier to per-device
When using irqfd with vhost-net to inject interrupts,
a single eventfd might inject multiple interrupts.
Implementing this is much easier with a single
per-device callback to set guest notifiers.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 54dd932128)
2010-10-12 16:09:52 -05:00
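
A rough sketch of the interface change described above, with hypothetical
names rather than QEMU's actual binding structure: the per-queue hook is
replaced by a single per-device hook that assigns all guest notifiers at once.

    #include <stdbool.h>

    /* Hypothetical binding ops; illustrative only. */
    typedef struct VirtioBindingSketch {
        /* before: called once per virtqueue n */
        int (*set_guest_notifier)(void *opaque, int n, bool assign);
        /* after: one call wires up notifiers for the whole device,
         * which is easier to back with eventfd/irqfd setups */
        int (*set_guest_notifiers)(void *opaque, bool assign);
    } VirtioBindingSketch;
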
Stefan Weil
e74f86377a eepro100: Add support for multiple individual addresses (multiple IA)
I reviewed the latest sources of Linux, FreeBSD and NetBSD.
They all reset the multiple IA bit (multi_ia in BSD) to zero,
but I did not find code which sets this bit to one
(like it is done by some routers).

Running Windows guests also did not set this bit.

Intel's Open Source Software Developer Manual does not
give much information on the semantics related to this bit,
so I had to guess how it works. The guess was good enough
to make the router emulation work.

Related changes in this patch:
* Update naming and documentation of the internal hash register.
  It is not limited to multicast, but also used for multiple IA.
* Dump complete configuration register when debug traces are enabled.
* Debug output when multiple IA bit is set during CmdConfigure.
* Debug output when frames are received because multiple IA bit is set,
  or when they are ignored although it is set.

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 010ec62934)
2010-10-12 16:09:45 -05:00
Michael S. Tsirkin
286409ad63 virtio-net: unify vhost-net start/stop
Move all of vhost-net start/stop logic to a single routine,
and call it from everywhere.

Additionally, start/stop vhost-net on link up/down:
we should not transmit anything if the user asked us to
put the link down.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
2010-10-12 16:09:35 -05:00
Michael S. Tsirkin
8006040d47 virtio: invoke set_status callback on reset
As status is set to 0 on reset, invoke the relevant callback. This makes
for cleaner code in devices, as they don't need to duplicate the code in
their reset routine, and it also exercises this path a little more.

In particular this makes it possible to unify
vhost-net handling code with the following patch.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit e0c472d8c2)
2010-10-12 16:09:26 -05:00
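
A minimal sketch of the reset path described above (hypothetical types and
names): the device's set_status callback is invoked with 0 before the field
is cleared, so device code can centralize its stop logic there.

    #include <stdint.h>

    typedef struct VirtioDeviceSketch VirtioDeviceSketch;
    struct VirtioDeviceSketch {
        uint8_t status;
        void (*set_status)(VirtioDeviceSketch *vdev, uint8_t val);
    };

    void virtio_reset_sketch(VirtioDeviceSketch *vdev)
    {
        if (vdev->set_status) {
            vdev->set_status(vdev, 0);   /* let the device react to status 0 */
        }
        vdev->status = 0;
    }
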
Michael S. Tsirkin
456496e225 net: delay freeing peer host device
With -netdev, virtio devices present offload
features to the guest, depending on the backend used.
Thus, removing the host netdev peer while the guest is
active leads to guest-visible inconsistency and/or crashes.

As a solution, while the guest (NIC) peer device exists,
we prevent the host peer from being deleted.
This patch does this by adding a peer_deleted flag in the NIC state:
if the host device is going away while the guest device
is around, set this flag and keep a shell of
the host device around for as long as the guest device exists.

The link is put down so all packets will get discarded.

At the moment, management can detect that device deletion
is delayed by doing info net. As a next step, we shall add
commands that control hotplug/unplug without
removing the device, and an event to report that
guest has responded to the hotplug event.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit a083a89d72)
2010-10-12 16:09:19 -05:00
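
A sketch of the peer_deleted idea in plain C (hypothetical structures, not
the actual net layer): deleting the host side while the guest NIC exists only
marks it and downs the link; the real teardown happens once the NIC is gone.

    #include <stdbool.h>
    #include <stdlib.h>

    struct host_peer_sketch {
        bool peer_deleted;
        bool link_down;
    };

    void host_peer_delete(struct host_peer_sketch *host, bool nic_present)
    {
        if (nic_present) {
            host->peer_deleted = true;  /* keep a shell of the device around */
            host->link_down = true;     /* all packets get discarded */
            return;
        }
        free(host);                     /* no guest NIC left: free for real */
    }
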
Anthony Liguori
a62e5f4120 Merge remote branch 'qmp/for-stable-0.13' into stable-0.13 2010-10-11 19:01:41 -05:00
Yoshiaki Tamura
c2ccc98ceb vnc: check fd before calling qemu_set_fd_handler2() in vnc_client_write()
Passing fd = -1 to qemu_set_fd_handler2() causes a bus error at FD_SET
in main_loop_wait().

Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit ac71103dc6)
2010-10-11 18:22:45 -05:00
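
A minimal guard illustrating the fix (sketch only): refuse to register a
handler for an invalid descriptor, since FD_SET(-1, ...) writes outside the
fd_set.

    /* Sketch: validate fd before handing it to the fd-handler machinery. */
    int set_read_handler_sketch(int fd, void (*read_cb)(void *), void *opaque)
    {
        if (fd < 0) {
            return -1;   /* nothing to watch; avoids FD_SET(-1, ...) */
        }
        /* qemu_set_fd_handler2(fd, NULL, read_cb, NULL, opaque); */
        (void)read_cb;
        (void)opaque;
        return 0;
    }
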
Eduardo Habkost
8b84b68e7d disable guest-provided stats on "info balloon" command
The addition of memory stats reporting to the virtio balloon causes
the 'info balloon' command to become asynchronous.  This is a regression
because in some cases it can hang the user monitor.

This is an alternative to Adam Litke's patch. Adam's patch disabled the
corresponding (guest-visible) virtio feature bit, causing issues for migration.
Original discussion is available at:
http://marc.info/?l=qemu-devel&m=128448124328314&w=2

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit 07b0403dfc)
2010-10-11 18:22:18 -05:00
Anthony Liguori
14a0b95684 Revert "Make default invocation of block drivers safer (v3)"
This reverts commit 79368c81bf.

Conflicts:

	block.c

I haven't been able to come up with a solution yet for the corruption caused by
unaligned requests from the IDE disk so revert until a solution can be written.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 8b33d9eeba)
2010-10-11 18:20:59 -05:00
Avi Kivity
cb03355a26 QEMUFileBuffered: indicate that we're ready when the underlying file is ready
QEMUFileBuffered stops writing when the underlying QEMUFile is not ready,
and tells its producer so.  However, when the underlying QEMUFile becomes
ready, it neglects to pass that information along, resulting in stoppage
of all data until the next tick (a tenth of a second).

Usually this doesn't matter, because most QEMUFiles used with QEMUFileBuffered
are almost always ready, but in the case of exec: migration this is not true,
due to the small pipe buffers used to connect to the target process.  The
result is very slow migration.

Fix by detecting the readiness notification and propagating it.  The detection
is a little ugly since QEMUFile overloads put_buffer() to send it, but that's
the subject of a different patch.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 5e77aaa0d7)
2010-10-11 18:20:52 -05:00
Artyom Tarasenko
ddfe317152 sparc escc IUS improvements (SunOS 4.1.4 fix)
According to scc_escc_um.pdf:
 - Reset Highest IUS must update irq status to allow processing
   of the next priority interrupt.
 - rx interrupt has always higher priority than tx on same channel

The documentation only explicitly says that Reset Highest IUS
command (0x38) clears IUS bits, not that it clears the corresponding
interrupt too, so don't clear interrupts on this command.

The patch allows SunOS 4.1.4 to use the serial ports.

Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 9fc391f8b5)
2010-10-11 18:20:45 -05:00
Artyom Tarasenko
9f20b55b9a fix last cpu timer initialization
Timer #0 is the system timer, so timer #num_cpu is the timer of the
last CPU, and it must be initialized in slavio_timer_reset.

Don't mark non-existing timers as running.

Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 5933e8a96a)
2010-10-11 18:20:39 -05:00
Luiz Capitulino
a0a90b92c9 QMP/README: Update QMP homepage address
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit a18b2ce2ed)
2010-10-11 19:53:58 -03:00
Eduardo Habkost
7bd11bc311 disable guest-provided stats on "info balloon" command
The addition of memory stats reporting to the virtio balloon causes
the 'info balloon' command to become asynchronous.  This is a regression
because in some cases it can hang the user monitor.

This is an alternative to Adam Litke's patch. Adam's patch disabled the
corresponding (guest-visible) virtio feature bit, causing issues for migration.
Original discussion is available at:
http://marc.info/?l=qemu-devel&m=128448124328314&w=2

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit 07b0403dfc)
2010-10-11 19:53:31 -03:00
Luiz Capitulino
30811f0d94 QMP: Update README file
A number of changes I prefer to do in one shot:

- Fix example
- Small clarifications
- Add multiple monitors example
- Add 'Development Process' section

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit d29f3196af)
2010-10-11 19:53:18 -03:00
Luiz Capitulino
0d88bb1bab QMP doc: Add 'Stability Considerations' section
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 05705ce2f8)
2010-10-11 19:53:13 -03:00
Miguel Di Ciurcio Filho
b73e06943b QMP/monitor: update do_info_version() to output broken down version string
This code was originally developed by Daniel P. Berrange <berrange@redhat.com>

Signed-off-by: Miguel Di Ciurcio Filho <miguel.filho@gmail.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 0ec0291d67)
2010-10-11 19:53:07 -03:00
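
For illustration only (not the QEMU implementation), this is the kind of
transformation a broken-down 'query-version' implies: the "major.minor.micro"
string becomes separate numeric fields.

    #include <stdio.h>

    int main(void)
    {
        const char *version = "0.13.0";
        int major, minor, micro;

        if (sscanf(version, "%d.%d.%d", &major, &minor, &micro) == 3) {
            printf("{ \"major\": %d, \"minor\": %d, \"micro\": %d }\n",
                   major, minor, micro);
        }
        return 0;
    }
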
Miguel Di Ciurcio Filho
f8e3ee1a7d QMP: update 'query-version' documentation
Update the documentation of 'query-version' to reflect that the version
string is now output broken down.

Signed-off-by: Miguel Di Ciurcio Filho <miguel.filho@gmail.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 6597e1a6dc)
2010-10-11 19:53:01 -03:00
Alex Williamson
c082082c76 savevm: Reset last block info at beginning of each save
If we save more than once we need to reset the last block info or else
only the first save has the actual block info and each subsequent save
will only use continue flags, making them unloadable independently.

Found-by: Miguel Di Ciurcio Filho <miguel.filho@gmail.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 760e77eab5)
2010-10-11 19:52:37 -03:00
Marcelo Tosatti
c4127fbf17 set proper migration status on ->write error (v5)
If ->write fails, declare migration status as MIG_STATE_ERROR.

Also, in buffered_file.c, ->close the object in case of an
error.

Fixes "migrate -d "exec:dd of=file", where dd fails to open file.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit e447b1a603)
2010-10-11 19:52:25 -03:00
Amit Shah
fc4e0c7018 migration: Accept 'cont' only after successful incoming migration
When a 'cont' is issued on a VM that's just waiting for an incoming
migration, the VM reboots and boots into the guest, possibly corrupting
its storage since it could be shared with another VM running elsewhere.

Ensure that a VM started with '-incoming' is only run when an incoming
migration successfully completes.

A new qerror, QERR_MIGRATION_EXPECTED, is added to signal that 'cont'
failed because no incoming migration has been attempted yet.

Reported-by: Laine Stump <laine@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 8e84865e54)
2010-10-11 19:52:01 -03:00
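
A sketch of the guard this commit adds, with hypothetical names: a VM started
with -incoming refuses 'cont' until the incoming migration has completed.

    #include <stdbool.h>

    static bool incoming_expected;   /* set when started with -incoming */
    static bool vm_running;

    int do_cont_sketch(void)
    {
        if (incoming_expected) {
            return -1;   /* report QERR_MIGRATION_EXPECTED to the caller */
        }
        vm_running = true;
        return 0;
    }

    void incoming_migration_done_sketch(void)
    {
        incoming_expected = false;   /* 'cont' is allowed from now on */
    }
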
Anthony Liguori
d25de8db83 Update for v0.13.0-rc3
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-10-11 17:09:10 -05:00
Anthony Liguori
5c0961618d Merge remote branch 'kwolf/for-stable-0.13' into stable-0.13 2010-10-11 17:08:39 -05:00
Anthony Liguori
472de0c851 Update version for 0.13.0-rc2
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-10-11 16:37:35 -05:00
Avi Kivity
0131c8c2dd Fix ivshmem build on 32-bit hosts
stat() fields can be more or less anything depending on configuration, so
cast explicitly to uint64_t to avoid printf() format mismatches.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit ad0a4ac1c0)
2010-10-11 16:34:38 -05:00
Jes Sorensen
d3c5b2e670 hw/ivshmem.c don't check for negative values on unsigned data types
There is no need to check for dest < 0 or vector >= 0 as both are
uint16_t.

This should fix build breakage with aggressive compiler flags.
Reported by Xudong Hao <xudong.hao@intel.com>

Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Acked-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 1b27d7a1e8)
2010-10-11 16:34:30 -05:00
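
A tiny example of why the removed checks were dead code: comparisons like
dest < 0 on a uint16_t are always false, which warning-as-error builds (for
instance with -Wtype-limits) reject.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t dest = 123;
        /* Always false for an unsigned type; compilers warn about it. */
        printf("dest < 0  -> %d\n", dest < 0);
        printf("dest >= 0 -> %d\n", dest >= 0);   /* always true */
        return 0;
    }
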
Cam Macdonell
a385adb70e Disable build of ivshmem on non-KVM systems
Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 3dcbf8f9ca)
2010-10-11 16:34:02 -05:00
Cam Macdonell
41de2f0c86 Add kvm_set_ioeventfd_mmio_long definition for non-KVM systems
Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 1fd7401275)
2010-10-11 16:33:56 -05:00
Cam Macdonell
ec810f662a RESEND: Inter-VM shared memory PCI device
resend for bug fix related to removal of irqfd

Support an inter-vm shared memory device that maps a shared-memory object as a
PCI device in the guest.  This patch also supports interrupts between guests by
communicating over a unix domain socket.  This patch applies to the qemu-kvm
repository.

    -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]

Interrupts are supported between multiple VMs by using a shared memory
server connected through a chardev socket.

    -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
           [,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
    -chardev socket,path=<path>,id=<id>

The shared memory server, sample programs and init scripts are in a git repo here:

    www.gitorious.org/nahanni

Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 6cbf4c8c64)
2010-10-11 16:33:42 -05:00
Cam Macdonell
9367dbbe6d Support marking a device as non-migratable
A non-migratable device should be removed before migration and re-added after.

Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 2431296806)
2010-10-11 16:33:32 -05:00
Cam Macdonell
089c672520 Add function to assign ioeventfd to MMIO.
Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 44f1a3d876)
2010-10-11 16:33:25 -05:00
Cam Macdonell
6f8d14beb2 Device specification for shared memory PCI device
Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit b6828931eb)
2010-10-11 16:33:15 -05:00
Cam Macdonell
5b500f974b Add qemu_ram_alloc_from_ptr function
Provide a function to add an allocated region of memory to the qemu RAM.

This patch is copied from Marcelo's qemu_ram_map() in qemu-kvm and given the
clearer name qemu_ram_alloc_from_ptr().

Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 84b89d782f)
2010-10-11 16:33:09 -05:00
Kevin Wolf
a3c4a01fb2 vvfat: Use cache=unsafe
The qcow file used for write support in vvfat is a temporary file,
so we can use cache=unsafe there. Without this, write support is just
too slow to be of any use.

Signed-off-by: Kevin Wolf <mail@kevin-wolf.de>
(cherry picked from commit 35ccd8aed64727dbefa1b274a8000b46318bfea1)
2010-09-13 14:35:06 +02:00
Kevin Wolf
345a6d2b54 vvfat: Fix double free for opening the image rw
Allocation and deallocation of bs->opaque is not in the control of a
block driver. Therefore it should not set bs->opaque to a data structure
used by another bs, or closing the image will lead to a double free.

Signed-off-by: Kevin Wolf <mail@kevin-wolf.de>
(cherry picked from commit 0af1e52e93bf5da63b15f1f9596dd4c076da07dc)
2010-09-13 14:35:01 +02:00
Kevin Wolf
1b191088ae vvfat: Fix segfault on write to read-only disk
vvfat tries to set the readonly flag in its open function, but nowadays
this is overwritten by the readonly=... command line option. Check in
bdrv_write whether vvfat was opened read-only and return an error in this
case.

Without this check, vvfat tries to access the qcow bs, which is NULL
without enabled write support.

Signed-off-by: Kevin Wolf <mail@kevin-wolf.de>
(cherry picked from commit bfd0049440f53745d31eb93c208f0f3ab6308027)
2010-09-13 14:34:56 +02:00
Kevin Wolf
2c25b81316 qcow2: Remove unnecessary flush after L2 write
When a new cluster was allocated, we only need a flush after the write to the
L2 table if it was a COW and we need to decrease the refcounts of the old
clusters.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 7ec5e6a4ca)
2010-09-13 14:34:03 +02:00
Kevin Wolf
5a0d460c35 block: Fix BDRV_O_CACHE_MASK
BDRV_O_CACHE_MASK should have been extended when cache=unsafe introduced a new
flag BDRV_O_NO_FLUSH. There are currently no users that would change their
behaviour because of this, but let's clean it up before things break.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ceb25e5c75)
2010-09-13 14:33:58 +02:00
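
A sketch of the fix's shape (bit values here are illustrative, not
necessarily QEMU's): the cache mask must OR in every cache-related flag,
including the BDRV_O_NO_FLUSH bit that cache=unsafe introduced.

    /* Illustrative bit values only. */
    #define BDRV_O_NOCACHE    0x0020   /* cache=none                   */
    #define BDRV_O_CACHE_WB   0x0040   /* cache=writeback              */
    #define BDRV_O_NO_FLUSH   0x0080   /* cache=unsafe: ignore flushes */

    /* The mask now covers all three cache-related flags. */
    #define BDRV_O_CACHE_MASK (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)
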
Kevin Wolf
78b6890828 qemu-img convert: Use cache=unsafe for output image
If qemu-img crashes during the conversion, the user will throw away the broken
output file anyway and start over. So no need to be too cautious.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1bd8e17558)
2010-09-13 14:33:53 +02:00
Kevin Wolf
375d40709e raw-posix: Don't use file name for host_cdrom detection on Linux
On Linux, we have code to detect CD-ROMs using an ioctl. We shouldn't lose
anything but false positives by removing the check for a /dev/cd* path.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 897804d629)
2010-09-13 14:31:58 +02:00
Bernhard Kohl
e632519ab8 scsi-disk: fix the check of the DBD bit in the MODE SENSE command
The DBD bit does not work as expected.

SCSI-Spec:
http://ldkelley.com/SCSI2/SCSI2/SCSI2-08.html#8.2.10
"A disable block descriptors (DBD) bit of zero indicates that the target
may return zero or more block descriptors in the returned MODE SENSE
data (see 8.3.3), at the target's discretion. A DBD bit of one
specifies that the target shall not return any block descriptors in the
returned MODE SENSE data."

Signed-off-by: Bernhard Kohl <bernhard.kohl@nsn.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 333d50fe3d)
2010-09-13 14:31:44 +02:00
Bernhard Kohl
d65741acf4 scsi-disk: return CHECK CONDITION for unknown page codes in the MODE SENSE command
SCSI-Spec:
http://ldkelley.com/SCSI2/SCSI2/SCSI2-08.html#8.2.10
"An initiator may request any one or all of the supported mode pages
from a target. If an initiator issues a MODE SENSE command with a
page code value not implemented by the target, the target shall return
CHECK CONDITION status and shall set the sense key to ILLEGAL REQUEST
and the additional sense code to INVALID FIELD IN CDB."

Signed-off-by: Bernhard Kohl <bernhard.kohl@nsn.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a9c17b2bf3)
2010-09-13 14:31:37 +02:00
Bernhard Kohl
5aa0e6cb56 scsi-disk: fix the block descriptor returned by the MODE SENSE command
The block descriptor contains the number of blocks, not the highest LBA.
Real hard disks return 0 if the number of blocks exceeds the maximum of 0xFFFFFF.

SCSI-Spec:
http://ldkelley.com/SCSI2/SCSI2/SCSI2-08.html#8.3.3
"The number of blocks field specifies the number of logical blocks on the
medium to which the density code and block length fields apply. A value
of zero indicates that all of the remaining logical blocks of the logical
unit shall have the medium characteristics specified."

Signed-off-by: Bernhard Kohl <bernhard.kohl@nsn.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2488b74081)
2010-09-13 14:31:23 +02:00
Bernhard Kohl
3bc5aa187f scsi-disk: respect the page control (PC) field in the MODE SENSE command
The page control (PC) field defines the type of mode parameter values
to be returned in the mode pages:

PC=0 : Current values
PC=1 : Changeable values
PC=2 : Default values
PC=3 : Saved values

The current implementation always returns the same type of parameters.
This is OK for Current and Default values as we don't support changes
to be done by the MODE SELECT command.

For Saved values the following applies (implemented by this patch):
"A PC field value of 3h requests that the target return the saved
values of the mode parameters. Implementation of saved page parameters
is optional. Mode parameters not supported by the target shall be set
to zero. If saved values are not implemented, the command shall be
terminated with CHECK CONDITION status, the sense key set to
ILLEGAL REQUEST and the additional sense code set to
SAVING PARAMETERS NOT SUPPORTED."

For Changeable values the following applies (implemented by this patch):
"A PC field value of 1h requests that the target return a mask denoting
those mode parameters that are changeable. In the mask, the fields of
the mode parameters that are changeable shall be set to all one bits and
the fields of the mode parameters that are non-changeable (i.e. defined
by the target) shall be set to all zero bits."

In newer versions of the SCSI-2 spec the following clause was added.
"If the logical unit does not implement changeable parameters mode pages
and the device server receives a MODE SENSE command with 01b in the PC
field, then the command shall be terminated with CHECK CONDITION status,
with the sense key set to ILLEGAL REQUEST, and the additional sense code
set to INVALID FIELD IN CDB."

This was not yet included in the SCSI-2 Working Drafts from 1986-1993.
I assume that the variant to return CHECK CONDITION for PC=1 is not
widely implemented by real devices. I have a legacy OS which fails
if MODE_SENSE returns non-GOOD for PC=1. So for highest compatibility I
implemented the former variant with this patch.

The last Working Draft X3T9.2 Rev. 10L 7-SEP-93 can be found here:
http://ldkelley.com/SCSI2/SCSI2/SCSI2-08.html#8.2.10

In mode_sense_page() this patch also avoids multiple hard-coded
definitions of the same mode page length. Instead I use the variable
p[1]. In fact the returned lengths of mode pages 4 and 5 were wrong
(2 bytes too short).

Signed-off-by: Bernhard Kohl <bernhard.kohl@nsn.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 282ab04eb1)
2010-09-13 14:31:15 +02:00
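
A condensed sketch of the PC-field dispatch described above (hypothetical
helper, not the actual scsi-disk code; 0x39 is the additional sense code for
SAVING PARAMETERS NOT SUPPORTED):

    #include <stdint.h>

    enum { PC_CURRENT = 0, PC_CHANGEABLE = 1, PC_DEFAULT = 2, PC_SAVED = 3 };

    /* Returns 0 if mode data can be returned, -1 for CHECK CONDITION. */
    int mode_sense_check_pc_sketch(uint8_t pc, uint8_t *asc)
    {
        switch (pc) {
        case PC_CURRENT:
        case PC_DEFAULT:
            return 0;        /* same data: MODE SELECT changes unsupported */
        case PC_CHANGEABLE:
            return 0;        /* return an all-zero changeable mask instead
                                of CHECK CONDITION, for legacy-OS compatibility */
        case PC_SAVED:
            *asc = 0x39;     /* SAVING PARAMETERS NOT SUPPORTED */
            return -1;       /* ILLEGAL REQUEST */
        }
        return -1;
    }
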
Bernhard Kohl
5105d99b7f scsi-disk: fix the mode data header returned by the MODE SENSE(10) command
The header for the MODE SENSE(10) command is 8 bytes long.

Signed-off-by: Bernhard Kohl <bernhard.kohl@nsn.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ce512ee115)
2010-09-13 14:31:03 +02:00
Bernhard Kohl
b422f4194d scsi-disk: fix the mode data length field returned by the MODE SENSE command
The MODE DATA LENGTH field indicates the length in bytes of the following
data that is available to be transferred. The mode data length does not include
the number of bytes in the MODE DATA LENGTH field.

Signed-off-by: Bernhard Kohl <bernhard.kohl@nsn.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 78e70c3061)
2010-09-13 14:30:44 +02:00
Anthony Liguori
72230c523b Update version for 0.13.0-rc1
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-08-31 08:19:23 -05:00
Andrew de Quincey
a9b56f8289 posix-aio-compat: Fix async_context for ioctl
Set the async_context_id field when queuing an async ioctl call

Signed-off-by: Andrew de Quincey <adq@lidskialf.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 34cf008129)
2010-08-30 18:44:22 +02:00
Loïc Minier
f891f9f74d vvfat: fat_chksum(): fix access above array bounds
Signed-off-by: Loïc Minier <loic.minier@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 2aa326be0d)
2010-08-30 18:44:13 +02:00
Kevin Wolf
271a24e7bf qemu-img rebase: Open new backing file read-only
We never write to a backing file, so opening rw is useless. It just means that
you can't rebase on top of a file for which you don't have write permissions.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit cdbae85169)
2010-08-30 18:44:03 +02:00
Kevin Wolf
2c1064ed2d block: Fix image re-open in bdrv_commit
Arguably we should re-open the backing file with the backing file format and
not with the format of the snapshot image.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ee1811965f)

Conflicts:

	block.c

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2010-08-30 18:42:15 +02:00
Kevin Wolf
55ee7b38e8 virtio-blk: Fix migration of queued requests
in_sg[].iovec and out_sg[].iovec are pointers to (source) host memory and
therefore invalid after migration. When loading the device state we must
create a new mapping on the destination host.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b6a4805b55)
2010-08-30 18:38:36 +02:00
Kevin Wolf
6674dc4269 virtio: Factor virtqueue_map_sg out
Separate the mapping of requests to host memory from the descriptor iteration.
The next patch will make use of it in a different context.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 42fb2e0720)
2010-08-30 18:38:35 +02:00
Andrea Arcangeli
96638e706c ide: Avoid canceling IDE DMA
The reason for not actually canceling the I/O is that with
virtualization and lots of VMs running, a guest filesystem may mistake
an overload of the host for an IDE timeout. So rather than canceling the
I/O, it's safer to wait for I/O completion and simulate that the I/O
completed just before the cancellation was requested by the
guest. This way, if ntfs or an app writes data without checking the
-EIO return value and thinks the write has succeeded, it's less likely
to run into trouble. Similar issues apply to reads.

Furthermore, because the DMA operation is split into many synchronous
aio_read/write calls when there is more than one entry in the SG table,
without this patch the DMA would be cancelled in the middle; we have no idea
whether that happens on real hardware too or not. Overall this seems a great
risk for zero gain.

This approach is surely safer than the previous code, given that we can't
expect all guest filesystem code out there to check for errors and replay the
DMA if it completed only partially, and given that a timeout would never
materialize on a real hard disk unless there are defective blocks (and
defective blocks are practically only an issue for reads, never for writes, on
any recent hardware, as writing to blocks is the way to fix them) or the
hard disk breaks as a whole.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 953844d102)
2010-08-03 16:39:54 +02:00
Markus Armbruster
08e90b3cad block: Change bdrv_eject() not to drop the image
bdrv_eject() gets called when a device model opens or closes the tray.

If the block driver implements method bdrv_eject(), that method gets
called.  Drivers host_cdrom implements it, and it opens and closes the
physical tray, and nothing else.  When a device model opens, then
closes the tray, media changes only if the user actively changes the
physical media while the tray is open.  This is matches how physical
hardware behaves.

If the block driver doesn't implement method bdrv_eject(), we do
something quite different: opening the tray severs the connection to
the image by calling bdrv_close(), and closing the tray does nothing.
When the device model opens, then closes the tray, media is gone,
unless the user actively inserts another one while the tray is open,
with a suitable change command in the monitor.  This isn't how
physical hardware behaves.  Rather inconvenient when programs
"helpfully" eject media to give you a chance to change it.  The way
bdrv_eject() behaves here turns that chance into a must, which is not
what these programs or their users expect.

Change the default action not to call bdrv_close().  Instead, note the
tray status in new BlockDriverState member tray_open.  Use it in
bdrv_is_inserted().

Arguably, the device models should keep track of tray status
themselves.  But this is less invasive.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 4be9762adb)
2010-08-03 16:39:54 +02:00
Kevin Wolf
ada70b4522 block: Fix bdrv_has_zero_init
Assuming that any image on a block device is not properly zero-initialized is
actually wrong: Only raw images have this problem. Any other image format
shouldn't care about it, they initialize everything properly themselves.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 336c1c1255)
2010-08-03 16:39:53 +02:00
Alex Williamson
8f6e28789f savevm: Fix memory leak of compat struct
Forgot to check for and free these.

Found-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 69e58af92c)
2010-07-30 23:02:03 +02:00
Aurelien Jarno
e14aad448b linux-user: fix build on hosts not using guest base
Commit 68a1c81686 broke qemu on hosts not
using guest base. It uses reserved_va unconditionally in mmap.c. To
avoid too many #ifdef/#endif blocks, define RESERVED_VA as either
reserved_va or 0ul, and use it instead of reserved_va, similarly to what
has been done with guest_base/GUEST_BASE.
(cherry picked from commit 18e9ea8a3f)
2010-07-30 21:12:59 +02:00
Blue Swirl
7829bc6c9f Fix -snapshot deleting images on disk change
Block device change command did not copy BDRV_O_SNAPSHOT flag. Thus
the new image did not have this flag and the file got deleted during
opening.

Fix by copying BDRV_O_SNAPSHOT flag.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 199630b62e)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:25 -05:00
Stefan Weil
32b8bb3b3b block: Use error codes from lower levels for error message
"No such file or directory" is a misleading error message
when a user tries to open a file with wrong permissions.

Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c98ac35d87)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:24 -05:00
Christoph Hellwig
cc12b5c748 block: default to 0 minimal / optimal I/O size
Currently we set them to 512 bytes unless manually specified.  Unfortunately
some brain-dead partitioning tools create unaligned partitions if they
get low enough optimal I/O size values, so don't report any at all
unless explicitly set.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 55459498b2)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:24 -05:00
Bruce Rogers
50aa457e1d move 'unsafe' to end of caching modes in help
Libvirt parses qemu help output to determine qemu features. In particular
it probes for the following: "cache=writethrough|writeback|none". The
new unsafe cache mode was inserted within this string, as opposed to
being added to the end, which broke libvirt's probe. Unbreak libvirt by
keeping the existing cache modes intact and adding unsafe to the end.

This problem only manifests itself if a caching mode is explicitly
specified in the libvirt XML, in which case the older caching syntax is
passed to qemu, which it no longer understands.

Signed-off-by: Bruce Rogers <brogers@novell.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6c6b6ba20a)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:24 -05:00
Alex Williamson
6546605650 virtio-blk: Create exit function to unregister savevm
Otherwise we can't migrate after we've removed a virtio block device.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 9d0d313859)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:24 -05:00
Yoshiaki Tamura
42ccca964c block migration: propagate return value when bdrv_write() returns < 0
Currently block_load() doesn't check the return value of bdrv_write(), so
even if the destination wasn't prepared to execute block migration, it
proceeds and the guest boots on the target.  This patch fixes the issue.

Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b02bea3a85)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:24 -05:00
Aurelien Jarno
966444248f ide/atapi: add support for GET EVENT STATUS NOTIFICATION
The GET EVENT STATUS NOTIFICATION is a mandatory command according
to MMC-3, even if event status notification is not supported.

This patch adds support for this command. It returns NEA ("No Event
Available") with an empty "Supported Event Classes" to show that it
doesn't support event status notification. If asynchronous operation is
requested, which requires NCQ support, it returns an error according
to the specifications.

This fixes HAL support on FreeBSD and derivatives, which fill up the
logs every second with:

  acd0: FAILURE - unknown CMD (0x03) ILLEGAL REQUEST asc=0x20 ascq=0x00

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 253cb7b990)

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-28 14:04:24 -05:00
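
An illustrative reply for the no-event case (field layout per the MMC event
header; this is a sketch, not the actual ide/atapi code):

    #include <stdint.h>
    #include <string.h>

    /* Fill a minimal GET EVENT STATUS NOTIFICATION reply: NEA set,
     * empty supported event class mask. Returns bytes written. */
    int gesn_no_event_sketch(uint8_t *buf, int buf_len)
    {
        if (buf_len < 4) {
            return -1;
        }
        memset(buf, 0, 4);
        buf[1] = 2;       /* event data length: two header bytes follow */
        buf[2] = 0x80;    /* NEA = 1, notification class = 0 */
        buf[3] = 0x00;    /* no supported event classes */
        return 4;
    }
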
4235 changed files with 350907 additions and 1100379 deletions

7
.exrc

@@ -1,7 +0,0 @@
"VIM settings to match QEMU coding style. They are activated by adding the
"following settings (without the " symbol) as last two lines in $HOME/.vimrc:
"set secure
"set exrc
set expandtab
set shiftwidth=4
set smarttab

134
.gitignore

@@ -1,112 +1,56 @@
/config-devices.*
/config-all-devices.*
/config-all-disas.*
/config-host.*
/config-target.*
/config.status
/config-temp
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-helpers-wrappers.h
/trace/generated-helpers.h
/trace/generated-helpers.c
/trace/generated-tcg-tracers.h
/trace/generated-ust-provider.h
/trace/generated-ust.c
/libcacard/trace/generated-tracers.c
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
/libdis*
/libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qapi-event.[ch]
/qmp-commands.h
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
/qemu-img-cmds.h
/qemu-io
/qemu-ga
/qemu-bridge-helper
/qemu-monitor.texi
/qmp-commands.txt
/vscclient
/fsdev/virtfs-proxy-helper
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
config-devices.*
config-all-devices.*
config-host.*
config-target.*
*-softmmu
*-darwin-user
*-linux-user
*-bsd-user
libdis*
libhw32
libhw64
libuser
qemu-doc.html
qemu-tech.html
qemu-doc.info
qemu-tech.info
qemu.1
qemu.pod
qemu-img.1
qemu-img.pod
qemu-img
qemu-nbd
qemu-nbd.8
qemu-nbd.pod
qemu-options.def
qemu-options.texi
qemu-img-cmds.texi
qemu-img-cmds.h
qemu-io
qemu-monitor.texi
QMP/qmp-commands.txt
.gdbinit
*.a
*.aux
*.cp
*.dvi
*.exe
*.dll
*.so
*.mo
*.fn
*.ky
*.log
*.pdf
*.cps
*.fns
*.kys
*.pg
*.pyc
*.toc
*.tp
*.vr
*.d
!/scripts/qemu-guest-agent/fsfreeze-hook.d
*.o
*.lo
*.la
*.pc
.libs
.sdk
*.gcda
*.gcno
/pc-bios/bios-pq/status
/pc-bios/vgabios-pq/status
/pc-bios/optionrom/linuxboot.asm
/pc-bios/optionrom/linuxboot.bin
/pc-bios/optionrom/linuxboot.raw
/pc-bios/optionrom/linuxboot.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
/pc-bios/optionrom/multiboot.img
/pc-bios/optionrom/kvmvapic.asm
/pc-bios/optionrom/kvmvapic.bin
/pc-bios/optionrom/kvmvapic.raw
/pc-bios/optionrom/kvmvapic.img
/pc-bios/s390-ccw/s390-ccw.elf
/pc-bios/s390-ccw/s390-ccw.img
.pc
patches
pc-bios/bios-pq/status
pc-bios/vgabios-pq/status
pc-bios/optionrom/linuxboot.bin
pc-bios/optionrom/multiboot.bin
pc-bios/optionrom/multiboot.raw
.stgit-*
cscope.*
tags
TAGS
*~
/tests/qemu-iotests/common.env

31
.gitmodules

@@ -1,33 +1,6 @@
[submodule "roms/vgabios"]
path = roms/vgabios
url = git://git.qemu-project.org/vgabios.git/
url = git://git.qemu.org/vgabios.git/
[submodule "roms/seabios"]
path = roms/seabios
url = git://git.qemu-project.org/seabios.git/
[submodule "roms/SLOF"]
path = roms/SLOF
url = git://git.qemu-project.org/SLOF.git
[submodule "roms/ipxe"]
path = roms/ipxe
url = git://git.qemu-project.org/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
url = git://git.qemu-project.org/openbios.git
[submodule "roms/openhackware"]
path = roms/openhackware
url = git://git.qemu-project.org/openhackware.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = git://github.com/rth7680/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = git://git.qemu-project.org/sgabios.git
[submodule "pixman"]
path = pixman
url = git://anongit.freedesktop.org/pixman
[submodule "dtc"]
path = dtc
url = git://git.qemu-project.org/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
url = git://git.qemu-project.org/u-boot.git
url = git://git.qemu.org/seabios.git/

.mailmap

@@ -1,17 +0,0 @@
# This mailmap just translates the weird addresses from the original import into git
# into proper addresses so that they are counted properly in git shortlog output.
#
Andrzej Zaborowski <balrogg@gmail.com> balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162>
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>
Fabrice Bellard <fabrice@bellard.org> bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162>
Jocelyn Mayer <l_indien@magic.fr> j_mayer <j_mayer@c046a42c-6fe2-441c-8c8c-71466251a162>
Paul Brook <paul@codesourcery.com> pbrook <pbrook@c046a42c-6fe2-441c-8c8c-71466251a162>
Thiemo Seufer <ths@networkno.de> ths <ths@c046a42c-6fe2-441c-8c8c-71466251a162>
malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
# There is also a:
# (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>
# for the cvs2svn initialization commit e63c3dc74bf.

.travis.yml

@@ -1,100 +0,0 @@
language: c
python:
- "2.4"
compiler:
- gcc
- clang
notifications:
irc:
channels:
- "irc.oftc.net#qemu"
on_success: change
on_failure: always
env:
global:
- TEST_CMD=""
- EXTRA_CONFIG=""
# Development packages, EXTRA_PKGS saved for additional builds
- CORE_PKGS="libusb-1.0-0-dev libiscsi-dev librados-dev libncurses5-dev"
- NET_PKGS="libseccomp-dev libgnutls-dev libssh2-1-dev libspice-server-dev libspice-protocol-dev libnss3-dev"
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
# Group major targets together with their linux-user counterparts
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user,armeb-linux-user,aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu,cris-linux-user
- TARGETS=i386-softmmu,i386-linux-user,x86_64-softmmu,x86_64-linux-user
- TARGETS=m68k-softmmu,m68k-linux-user
- TARGETS=microblaze-softmmu,microblazeel-softmmu,microblaze-linux-user,microblazeel-linux-user
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=mips-linux-user,mips64-linux-user,mips64el-linux-user,mipsel-linux-user,mipsn32-linux-user,mipsn32el-linux-user
- TARGETS=or32-softmmu,or32-linux-user
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu,ppc-linux-user,ppc64-linux-user,ppc64abi32-linux-user,ppc64le-linux-user
- TARGETS=s390x-softmmu,s390x-linux-user
- TARGETS=sh4-softmmu,sh4eb-softmmu,sh4-linux-user sh4eb-linux-user
- TARGETS=sparc-softmmu,sparc64-softmmu,sparc-linux-user,sparc32plus-linux-user,sparc64-linux-user
- TARGETS=unicore32-softmmu,unicore32-linux-user
# Group remaining softmmu only targets into one build
- TARGETS=lm32-softmmu,moxie-softmmu,tricore-softmmu,xtensa-softmmu,xtensaeb-softmmu
git:
# we want to do this ourselves
submodules: false
before_install:
- wget -O - http://people.linaro.org/~alex.bennee/qemu-submodule-git-seed.tar.xz | tar -xvJ
- git submodule update --init --recursive
- sudo apt-get update -qq
- sudo apt-get install -qq ${CORE_PKGS} ${NET_PKGS} ${GUI_PKGS} ${EXTRA_PKGS}
before_script:
- ./configure --target-list=${TARGETS} --enable-debug-tcg ${EXTRA_CONFIG}
script:
- make -j2 && ${TEST_CMD}
matrix:
# We manually include a number of additional build for non-standard bits
include:
# Make check target (we only do this once)
- env:
- TARGETS=alpha-softmmu,arm-softmmu,aarch64-softmmu,cris-softmmu,
i386-softmmu,x86_64-softmmu,m68k-softmmu,microblaze-softmmu,
microblazeel-softmmu,mips-softmmu,mips64-softmmu,
mips64el-softmmu,mipsel-softmmu,or32-softmmu,ppc-softmmu,
ppc64-softmmu,ppcemb-softmmu,s390x-softmmu,sh4-softmmu,
sh4eb-softmmu,sparc-softmmu,sparc64-softmmu,
unicore32-softmmu,unicore32-linux-user,
lm32-softmmu,moxie-softmmu,tricore-softmmu,xtensa-softmmu,
xtensaeb-softmmu
TEST_CMD="make check"
compiler: gcc
# Debug related options
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="sparse"
EXTRA_CONFIG="--enable-sparse"
compiler: gcc
# All the trace backends (apart from dtrace)
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=stderr"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=simple"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backends=ftrace"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backends=ust"
compiler: gcc

CODING_STYLE

@@ -1,9 +1,6 @@
QEMU Coding Style
Qemu Coding Style
=================
Please use the script checkpatch.pl in the scripts directory to check
patches before submitting.
1. Whitespace
Of course, the most important aspect in any coding style is whitespace.
@@ -44,12 +41,14 @@ Rationale:
3. Naming
Variables are lower_case_with_underscores; easy to type and read. Structured
type names are in CamelCase; harder to type but standing out. Enum type
names and function type names should also be in CamelCase. Scalar type
type names are in CamelCase; harder to type but standing out. Scalar type
names are lower_case_with_underscores_ending_with_a_t, like the POSIX
uint64_t and family. Note that this last convention contradicts POSIX
and is therefore likely to be changed.
Typedefs are used to eliminate the redundant 'struct' keyword. It is the
QEMU coding style.
When wrapping standard library functions, use the prefix qemu_ to alert
readers that they are seeing a wrapped version; otherwise avoid this prefix.
@@ -69,10 +68,6 @@ keyword. Example:
printf("a was something else entirely.\n");
}
Note that 'else if' is considered a single statement; otherwise a long if/
else if/else if/.../else sequence would need an indent for every else
statement.
An exception is the opening brace for a function; for reasons of tradition
and clarity it comes on a line by itself:
@@ -84,24 +79,3 @@ and clarity it comes on a line by itself:
Rationale: a consistent (except for functions...) bracing style reduces
ambiguity and avoids needless churn when lines are added or removed.
Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.
6. Conditional statements
When comparing a variable for (in)equality with a constant, list the
constant on the right, as in:
if (a == 1) {
/* Reads like: "If a equals 1" */
do_something();
}
Rationale: Yoda conditions (as in 'if (1 == a)') are awkward to read.
Besides, good compilers already warn users when '==' is mis-typed as '=',
even when the constant is on the right.

Changelog

@@ -1,8 +1,4 @@
This file documents changes for QEMU releases 0.12 and earlier.
For changelog information for later releases, see
http://wiki.qemu-project.org/ChangeLog or look at the git history for
more detailed information.
See git history for Changelogs of recent releases.
version 0.12.0:
@@ -78,7 +74,7 @@ version 0.10.2:
- fix savevm/loadvm (Anthony Liguori)
- live migration: fix dirty tracking windows (Glauber Costa)
- live migration: improve error propagation (Glauber Costa)
- live migration: improve error propogation (Glauber Costa)
- qcow2: fix image creation for > ~2TB images (Chris Wright)
- hotplug: fix error handling for if= parameter (Eduardo Habkost)
- qcow2: fix data corruption (Nolan Leake)
@@ -386,7 +382,7 @@ version 0.5.3:
- support of CD-ROM change
- multiple network interface support
- initial x86-64 host support (Gwenole Beauchesne)
- lret to outer privilege fix (OS/2 install fix)
- lret to outer priviledge fix (OS/2 install fix)
- task switch fixes (SkyOS boot)
- VM save/restore commands
- new timer API
@@ -447,7 +443,7 @@ version 0.5.0:
- multi-target build
- fixed: no error code in hardware interrupts
- fixed: pop ss, mov ss, x and sti disable hardware irqs for the next insn
- correct single stepping through string operations
- correct single stepping thru string operations
- preliminary SPARC target support (Thomas M. Ogrisegg)
- tun-fd option (Rusty Russell)
- automatic IDE geometry detection
@@ -531,7 +527,7 @@ version 0.1.5:
- ppc64 support + personality() patch (Rusty Russell)
- first Alpha CPU patches (Falk Hueffner)
- removed bfd.h dependency
- removed bfd.h dependancy
- fixed shrd, shld, idivl and divl on PowerPC.
- fixed buggy glibc PowerPC rint() function (test-i386 passes now on PowerPC).

159
HACKING

@@ -1,159 +0,0 @@
1. Preprocessor
For variadic macros, stick with this C99-like syntax:
#define DPRINTF(fmt, ...) \
do { printf("IRQ: " fmt, ## __VA_ARGS__); } while (0)
2. C types
It should be common sense to use the right type, but we have collected
a few useful guidelines here.
2.1. Scalars
If you're using "int" or "long", odds are good that there's a better type.
If a variable is counting something, it should be declared with an
unsigned type.
If it's host memory-size related, size_t should be a good choice (use
ssize_t only if required). Guest RAM memory offsets must use ram_addr_t,
but only for RAM, it may not cover whole guest address space.
If it's file-size related, use off_t.
If it's file-offset related (i.e., signed), use off_t.
If it's just counting small numbers use "unsigned int";
(on all but oddball embedded systems, you can assume that that
type is at least four bytes wide).
In the event that you require a specific width, use a standard type
like int32_t, uint32_t, uint64_t, etc. The specific types are
mandatory for VMState fields.
Don't use Linux kernel internal types like u32, __u32 or __le32.
Use hwaddr for guest physical addresses except pcibus_t
for PCI addresses. In addition, ram_addr_t is a QEMU internal address
space that maps guest RAM physical addresses into an intermediate
address space that can map to host virtual address spaces. Generally
speaking, the size of guest memory can always fit into ram_addr_t but
it would not be correct to store an actual guest physical address in a
ram_addr_t.
For CPU virtual addresses there are several possible types.
vaddr is the best type to use to hold a CPU virtual address in
target-independent code. It is guaranteed to be large enough to hold a
virtual address for any target, and it does not change size from target
to target. It is always unsigned.
target_ulong is a type the size of a virtual address on the CPU; this means
it may be 32 or 64 bits depending on which target is being built. It should
therefore be used only in target-specific code, and in some
performance-critical built-per-target core code such as the TLB code.
There is also a signed version, target_long.
abi_ulong is for the *-user targets, and represents a type the size of
'void *' in that target's ABI. (This may not be the same as the size of a
full CPU virtual address in the case of target ABIs which use 32 bit pointers
on 64 bit CPUs, like sparc32plus.) Definitions of structures that must match
the target's ABI must use this type for anything that on the target is defined
to be an 'unsigned long' or a pointer type.
There is also a signed version, abi_long.
Of course, take all of the above with a grain of salt. If you're about
to use some system interface that requires a type like size_t, pid_t or
off_t, use matching types for any corresponding variables.
Also, if you try to use e.g., "unsigned int" as a type, and that
conflicts with the signedness of a related variable, sometimes
it's best just to use the *wrong* type, if "pulling the thread"
and fixing all related variables would be too invasive.
Finally, while using descriptive types is important, be careful not to
go overboard. If whatever you're doing causes warnings, or requires
casts, then reconsider or ask for help.
2.2. Pointers
Ensure that all of your pointers are "const-correct".
Unless a pointer is used to modify the pointed-to storage,
give it the "const" attribute. That way, the reader knows
up-front that this is a read-only pointer. Perhaps more
importantly, if we're diligent about this, when you see a non-const
pointer, you're guaranteed that it is used to modify the storage
it points to, or it is aliased to another pointer that is.
2.3. Typedefs
Typedefs are used to eliminate the redundant 'struct' keyword.
2.4. Reserved namespaces in C and POSIX
Underscore capital, double underscore, and underscore 't' suffixes should be
avoided.
3. Low level memory management
Use of the malloc/free/realloc/calloc/valloc/memalign/posix_memalign
APIs is not allowed in the QEMU codebase. Instead of these routines,
use the GLib memory allocation routines g_malloc/g_malloc0/g_new/
g_new0/g_realloc/g_free or QEMU's qemu_memalign/qemu_blockalign/qemu_vfree
APIs.
Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Memory allocated by qemu_memalign or qemu_blockalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32.
4. String manipulation
Do not use the strncpy function. As mentioned in the man page, it does *not*
guarantee a NULL-terminated buffer, which makes it extremely dangerous to use.
It also zeros trailing destination bytes out to the specified length. Instead,
use this similar function when possible, but note its different signature:
void pstrcpy(char *dest, int dest_buf_size, const char *src)
Don't use strcat because it can't check for buffer overflows, but:
char *pstrcat(char *buf, int buf_size, const char *s)
The same limitation exists with sprintf and vsprintf, so use snprintf and
vsnprintf.
QEMU provides other useful string functions:
int strstart(const char *str, const char *val, const char **ptr)
int stristart(const char *str, const char *val, const char **ptr)
int qemu_strnlen(const char *s, int max_len)
There are also replacement character processing macros for isxyz and toxyz,
so instead of e.g. isalnum you should use qemu_isalnum.
Because of the memory management rules, you must use g_strdup/g_strndup
instead of plain strdup/strndup.
5. Printf-style functions
Whenever you add a new printf-style function, i.e., one with a format
string argument and following "..." in its prototype, be sure to use
gcc's printf attribute directive in the prototype.
This makes it so gcc's -Wformat and -Wformat-security options can do
their jobs and cross-check format strings with the number and types
of arguments.
6. C standard, implementation defined and undefined behaviors
C code in QEMU should be written to the C99 language specification. A copy
of the final version of the C99 standard with corrigenda TC1, TC2, and TC3
included, formatted as a draft, can be downloaded from:
http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf
The C language specification defines regions of undefined behavior and
implementation defined behavior (to give compiler authors enough leeway to
produce better code). In general, code in QEMU should follow the language
specification and avoid both undefined and implementation defined
constructs. ("It works fine on the gcc I tested it with" is not a valid
argument...) However there are a few areas where we allow ourselves to
assume certain behaviors because in practice all the platforms we care about
behave in the same way and writing strictly conformant code would be
painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)

17
LICENSE

@@ -1,21 +1,18 @@
The following points clarify the QEMU license:
1) QEMU as a whole is released under the GNU General Public License,
version 2.
1) QEMU as a whole is released under the GNU General Public License
2) Parts of QEMU have specific licenses which are compatible with the
GNU General Public License, version 2. Hence each source file contains
its own licensing information. Source files with no licensing information
are released under the GNU General Public License, version 2 or (at your
option) any later version.
GNU General Public License. Hence each source file contains its own
licensing information.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/vfio/, hw/xen/xen_pt*.
In particular, the QEMU virtual CPU core library (libqemu.a) is
released under the GNU Lesser General Public License. Many hardware
device emulation sources are released under the BSD license.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).
4) QEMU is a trademark of Fabrice Bellard.
Fabrice Bellard and the QEMU team
Fabrice Bellard.

File diff suppressed because it is too large

515
Makefile

@@ -1,72 +1,20 @@
# Makefile for QEMU.
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
GENERATED_HEADERS = config-host.h
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
all:
all: build-all
include config-host.mak
# Check that we're not trying to do an out-of-tree build from
# a tree that's been used for an in-tree build.
ifneq ($(realpath $(SRC_PATH)),$(realpath .))
ifneq ($(wildcard $(SRC_PATH)/config-host.mak),)
$(error This is an out of tree build but your source tree ($(SRC_PATH)) \
seems to have been used for an in-tree build. You can fix this by running \
"make distclean && rm -rf *-linux-user *-softmmu" in your source tree)
endif
endif
CONFIG_SOFTMMU := $(if $(filter %-softmmu,$(TARGET_DIRS)),y)
CONFIG_USER_ONLY := $(if $(filter %-user,$(TARGET_DIRS)),y)
CONFIG_ALL=y
-include config-all-devices.mak
-include config-all-disas.mak
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@echo $@ is out-of-date, running configure
@# TODO: The next lines include code which supports a smooth
@# transition from old configurations without config.status.
@# This code can be removed after QEMU 1.7.
@if test -x config.status; then \
./config.status; \
else \
sed -n "/.*Configured with/s/[^:]*: //p" $@ | sh; \
fi
@sed -n "/.*Configured with/s/[^:]*: //p" $@ | sh
else
config-host.mak:
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@echo "Please call configure before running make!"
@exit 1
endif
endif
GENERATED_HEADERS = config-host.h qemu-options.def
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h qapi-event.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c qapi-event.c
GENERATED_HEADERS += trace/generated-events.h
GENERATED_SOURCES += trace/generated-events.c
GENERATED_HEADERS += trace/generated-tracers.h
ifeq ($(findstring dtrace,$(TRACE_BACKENDS)),dtrace)
GENERATED_HEADERS += trace/generated-tracers-dtrace.h
endif
GENERATED_SOURCES += trace/generated-tracers.c
GENERATED_HEADERS += trace/generated-tcg-tracers.h
GENERATED_HEADERS += trace/generated-helpers-wrappers.h
GENERATED_HEADERS += trace/generated-helpers.h
GENERATED_SOURCES += trace/generated-helpers.c
ifeq ($(findstring ust,$(TRACE_BACKENDS)),ust)
GENERATED_HEADERS += trace/generated-ust-provider.h
GENERATED_SOURCES += trace/generated-ust.c
endif
# Don't try to regenerate Makefile or configure
# We don't generate any of them
@@ -74,44 +22,28 @@ Makefile: ;
configure: ;
.PHONY: all clean cscope distclean dvi html info install install-doc \
pdf recurse-all speed test dist
pdf recurse-all speed tar tarbin test build-all
$(call set-vpath, $(SRC_PATH))
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 QMP/qmp-commands.txt
else
DOCS=
endif
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory) BUILD_DIR=$(BUILD_DIR)
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory)
SUBDIR_DEVICES_MAK=$(patsubst %, %/config-devices.mak, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %-config-devices.mak.d, $(TARGET_DIRS))
ifeq ($(SUBDIR_DEVICES_MAK),)
config-all-devices.mak:
$(call quiet-command,echo '# no devices' > $@," GEN $@")
else
config-all-devices.mak: $(SUBDIR_DEVICES_MAK)
$(call quiet-command, sed -n \
's|^\([^=]*\)=\(.*\)$$|\1:=$$(findstring y,$$(\1)\2)|p' \
$(SUBDIR_DEVICES_MAK) | sort -u > $@, \
" GEN $@")
endif
-include $(SUBDIR_DEVICES_MAK_DEP)
$(call quiet-command,cat $(SUBDIR_DEVICES_MAK) | grep =y | sort -u > $@," GEN $@")
%/config-devices.mak: default-configs/%.mak
$(call quiet-command,$(SHELL) $(SRC_PATH)/scripts/make_device_config.sh $@ $<, " GEN $@")
$(call quiet-command,cat $< > $@.tmp, " GEN $@")
@if test -f $@; then \
if cmp -s $@.old $@; then \
if cmp -s $@.old $@ || cmp -s $@ $@.tmp; then \
mv $@.tmp $@; \
cp -p $@ $@.old; \
else \
@@ -131,63 +63,26 @@ endif
defconfig:
rm -f config-all-devices.mak $(SUBDIR_DEVICES_MAK)
-include config-all-devices.mak
build-all: $(DOCS) $(TOOLS) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
subdir-%: $(GENERATED_HEADERS)
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $* V="$(V)" TARGET_DIR="$*/" all,)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
$(common-obj-y): $(GENERATED_HEADERS)
$(filter %-softmmu,$(SUBDIR_RULES)): $(common-obj-y) subdir-libdis
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
ifeq ($(CONFIG_SMARTCARD_NSS),y)
include $(SRC_PATH)/libcacard/Makefile
endif
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all modules
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
qemu-options.def: $(SRC_PATH)/qemu-options.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): $(block-obj-y)
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $* V="$(V)" TARGET_DIR="$*/" all,)
subdir-pixman: pixman/Makefile
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C pixman V="$(V)" all,)
pixman/Makefile: $(SRC_PATH)/pixman/configure
(cd pixman; CFLAGS="$(CFLAGS) -fPIC $(extra_cflags) $(extra_ldflags)" $(SRC_PATH)/pixman/configure $(AUTOCONF_HOST) --disable-gtk --disable-shared --enable-static)
$(SRC_PATH)/pixman/configure:
(cd $(SRC_PATH)/pixman; autoreconf -v --install)
DTC_MAKE_ARGS=-I$(SRC_PATH)/dtc VPATH=$(SRC_PATH)/dtc -C dtc V="$(V)" LIBFDT_srcdir=$(SRC_PATH)/dtc/libfdt
DTC_CFLAGS=$(CFLAGS) $(QEMU_CFLAGS)
DTC_CPPFLAGS=-I$(BUILD_DIR)/dtc -I$(SRC_PATH)/dtc -I$(SRC_PATH)/dtc/libfdt
subdir-dtc:dtc/libfdt dtc/tests
$(call quiet-command,$(MAKE) $(DTC_MAKE_ARGS) CPPFLAGS="$(DTC_CPPFLAGS)" CFLAGS="$(DTC_CFLAGS)" LDFLAGS="$(LDFLAGS)" ARFLAGS="$(ARFLAGS)" CC="$(CC)" AR="$(AR)" LD="$(LD)" $(SUBDIR_MAKEFLAGS) libfdt/libfdt.a,)
dtc/%:
mkdir -p $@
$(SUBDIR_RULES): libqemuutil.a libqemustub.a $(common-obj-y)
$(filter %-user,$(SUBDIR_RULES)): $(GENERATED_HEADERS) subdir-libdis-user subdir-libuser
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
romsubdir-%:
@@ -197,244 +92,126 @@ ALL_SUBDIRS=$(TARGET_DIRS) $(patsubst %,pc-bios/%, $(ROMS))
recurse-all: $(SUBDIR_RULES) $(ROMSUBDIR_RULES)
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h | $(BUILD_DIR)/version.lo
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.o")
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.lo")
audio/audio.o audio/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
Makefile: $(version-obj-y) $(version-lobj-y)
QEMU_CFLAGS+=$(CURL_CFLAGS)
######################################################################
# Build libraries
ui/cocoa.o: ui/cocoa.m
libqemustub.a: $(stub-obj-y)
libqemuutil.a: $(util-obj-y)
ui/sdl.o audio/sdlaudio.o ui/sdl_zoom.o baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
block-modules = $(foreach o,$(block-obj-m),"$(basename $(subst /,-,$o))",) NULL
util/module.o-cflags = -D'CONFIG_BLOCK_MODULES=$(block-modules)'
ui/vnc.o: QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
bt-host.o: QEMU_CFLAGS += $(BLUEZ_CFLAGS)
######################################################################
qemu-img.o: qemu-img-cmds.h
qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o: $(GENERATED_HEADERS)
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-img$(EXESUF): qemu-img.o qemu-tool.o qemu-error.o $(block-obj-y) $(qobject-obj-y)
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
qemu-nbd$(EXESUF): qemu-nbd.o qemu-tool.o qemu-error.o $(block-obj-y) $(qobject-obj-y)
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o libqemuutil.a libqemustub.a
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-io$(EXESUF): qemu-io.o cmd.o qemu-tool.o qemu-error.o $(block-obj-y) $(qobject-obj-y)
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
$(call quiet-command,sh $(SRC_PATH)/hxtool -h < $< > $@," GEN $@")
qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
qemu-ga$(EXESUF): QEMU_CFLAGS += -I qga/qapi-generated
check-qint.o check-qstring.o check-qdict.o check-qlist.o check-qfloat.o check-qjson.o: $(GENERATED_HEADERS)
gen-out-type = $(subst .,-,$(suffix $@))
qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
qapi-modules = $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/qapi/common.json \
$(SRC_PATH)/qapi/block.json $(SRC_PATH)/qapi/block-core.json \
$(SRC_PATH)/qapi/event.json
qapi-types.c qapi-types.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qapi-visit.c qapi-visit.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qapi-event.c qapi-event.h :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-event.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-event.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
qmp-commands.h qmp-marshal.c :\
$(qapi-modules) $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o "." -m -i $<, \
" GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
check-qint: check-qint.o qint.o qemu-malloc.o
check-qstring: check-qstring.o qstring.o qemu-malloc.o
check-qdict: check-qdict.o qdict.o qfloat.o qint.o qstring.o qbool.o qemu-malloc.o qlist.o
check-qlist: check-qlist.o qlist.o qint.o qemu-malloc.o
check-qfloat: check-qfloat.o qfloat.o qemu-malloc.o
check-qjson: check-qjson.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o qjson.o json-streamer.o json-lexer.o json-parser.o qemu-malloc.o
clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
rm -f *.o *.d *.a $(TOOLS) TAGS cscope.* *.pod *~ */*~
rm -f slirp/*.o slirp/*.d audio/*.o audio/*.d block/*.o block/*.d net/*.o net/*.d fsdev/*.o fsdev/*.d ui/*.o ui/*.d
rm -f qemu-img-cmds.h
@# May not be present in GENERATED_HEADERS
rm -f trace/generated-tracers-dtrace.dtrace*
rm -f trace/generated-tracers-dtrace.h*
rm -f $(foreach f,$(GENERATED_HEADERS),$(f) $(f)-timestamp)
rm -f $(foreach f,$(GENERATED_SOURCES),$(f) $(f)-timestamp)
rm -rf qapi-generated
rm -rf qga/qapi-generated
for d in $(ALL_SUBDIRS); do \
$(MAKE) -C tests clean
for d in $(ALL_SUBDIRS) libhw32 libhw64 libuser libdis libdis-user; do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
done
VERSION ?= $(shell cat VERSION)
dist: qemu-$(VERSION).tar.bz2
qemu-%.tar.bz2:
$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak config-all-disas.mak config.status
rm -f po/*.mo tests/qemu-iotests/common.env
rm -f qemu-options.def
rm -f config-all-devices.mak
rm -f roms/seabios/config.mak roms/vgabios/config.mak
rm -f qemu-doc.info qemu-doc.aux qemu-doc.cp qemu-doc.cps qemu-doc.dvi
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
rm -f qemu-doc.log qemu-doc.pdf qemu-doc.pg qemu-doc.toc qemu-doc.tp
rm -f qemu-doc.vr
rm -f config.log
rm -f linux-headers/asm
rm -f qemu-doc.info qemu-doc.aux qemu-doc.cp qemu-doc.dvi qemu-doc.fn qemu-doc.info qemu-doc.ky qemu-doc.log qemu-doc.pdf qemu-doc.pg qemu-doc.toc qemu-doc.tp qemu-doc.vr
rm -f qemu-tech.info qemu-tech.aux qemu-tech.cp qemu-tech.dvi qemu-tech.fn qemu-tech.info qemu-tech.ky qemu-tech.log qemu-tech.pdf qemu-tech.pg qemu-tech.toc qemu-tech.tp qemu-tech.vr
for d in $(TARGET_DIRS); do \
for d in $(TARGET_DIRS) libhw32 libhw64 libuser libdis libdis-user; do \
rm -rf $$d || exit 1 ; \
done
rm -Rf .sdk
if test -f pixman/config.log; then make -C pixman distclean; fi
if test -f dtc/version_gen.h; then make $(DTC_MAKE_ARGS) clean; fi
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr \
bepo cz
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom \
efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom \
qemu-icon.bmp qemu_logo_no_text.svg \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
s390-zipl.rom \
s390-ccw.img \
spapr-rtas.bin slof.bin \
palcode-clipper \
u-boot.e500
BLOBS=bios.bin vgabios.bin vgabios-cirrus.bin ppc_rom.bin \
video.x openbios-sparc32 openbios-sparc64 openbios-ppc \
gpxe-eepro100-80861209.rom \
gpxe-eepro100-80861229.rom \
pxe-e1000.bin \
pxe-ne2k_pci.bin pxe-pcnet.bin \
pxe-rtl8139.bin pxe-virtio.bin \
bamboo.dtb petalogix-s3adsp1800.dtb \
multiboot.bin linuxboot.bin \
s390-zipl.rom
else
BLOBS=
endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DIR) "$(DESTDIR)$(docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 "$(DESTDIR)$(mandir)/man1"
ifneq ($(TOOLS),)
$(INSTALL_DATA) qemu-img.1 "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 qemu-img.1 "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man8"
$(INSTALL_DATA) qemu-nbd.8 "$(DESTDIR)$(mandir)/man8"
endif
endif
ifdef CONFIG_VIRTFS
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) fsdev/virtfs-proxy-helper.1 "$(DESTDIR)$(mandir)/man1"
endif
install-datadir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)"
install-sysconfig:
$(INSTALL_DIR) "$(DESTDIR)$(sysconfdir)/qemu"
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(sysconfdir)/qemu"
install-localstatedir:
ifdef CONFIG_POSIX
ifneq (,$(findstring qemu-ga,$(TOOLS)))
$(INSTALL_DIR) "$(DESTDIR)$(qemu_localstatedir)"/run
endif
endif
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig \
install-datadir install-localstatedir
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
ifneq ($(TOOLS),)
$(call install-prog,$(TOOLS),$(DESTDIR)$(bindir))
endif
ifneq ($(CONFIG_MODULES),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_moddir)"
for s in $(modules-m:.mo=$(DSOSUF)); do \
t="$(DESTDIR)$(qemu_moddir)/$$(echo $$s | tr / -)"; \
$(INSTALL_LIB) $$s "$$t"; \
test -z "$(STRIP)" || $(STRIP) "$$t"; \
done
endif
ifneq ($(HELPERS-y),)
$(call install-prog,$(HELPERS-y),$(DESTDIR)$(libexecdir))
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(BLOBS),)
$(INSTALL_DIR) "$(DESTDIR)$(datadir)"
set -e; for x in $(BLOBS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(qemu_datadir)"; \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(datadir)"; \
done
endif
ifeq ($(CONFIG_GTK),y)
$(MAKE) -C po $@
endif
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/keymaps"
$(INSTALL_DIR) "$(DESTDIR)$(datadir)/keymaps"
set -e; for x in $(KEYMAPS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(datadir)/keymaps"; \
done
$(INSTALL_DATA) $(SRC_PATH)/trace-events "$(DESTDIR)$(qemu_datadir)/trace-events"
for d in $(TARGET_DIRS); do \
$(MAKE) $(SUBDIR_MAKEFLAGS) TARGET_DIR=$$d/ -C $$d $@ || exit 1 ; \
$(MAKE) -C $$d $@ || exit 1 ; \
done
# various test targets
test speed: all
$(MAKE) -C tests/tcg $@
$(MAKE) -C tests $@
.PHONY: TAGS
TAGS:
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec etags --append {} +
find "$(SRC_PATH)" -name '*.[hc]' -print0 | xargs -0 etags
cscope:
rm -f ./cscope.*
find "$(SRC_PATH)" -name "*.[chsS]" -print | sed 's,^\./,,' > ./cscope.files
find . -name "*.[ch]" -print | sed 's,^\./,,' > ./cscope.files
cscope -b
# documentation
@@ -445,7 +222,7 @@ TEXIFLAG=$(if $(V),,--quiet)
$(call quiet-command,texi2dvi $(TEXIFLAG) -I . $<," GEN $@")
%.html: %.texi
$(call quiet-command,LC_ALL=C $(MAKEINFO) $(MAKEINFOFLAGS) --html $< -o $@, \
$(call quiet-command,$(MAKEINFO) $(MAKEINFOFLAGS) --html $< -o $@, \
" GEN $@")
%.info: %.texi
@@ -455,39 +232,33 @@ TEXIFLAG=$(if $(V),,--quiet)
$(call quiet-command,texi2pdf $(TEXIFLAG) -I . $<," GEN $@")
qemu-options.texi: $(SRC_PATH)/qemu-options.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
$(call quiet-command,sh $(SRC_PATH)/hxtool -t < $< > $@," GEN $@")
qemu-monitor.texi: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qemu-monitor.texi: $(SRC_PATH)/qemu-monitor.hx
$(call quiet-command,sh $(SRC_PATH)/hxtool -t < $< > $@," GEN $@")
qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -q < $< > $@," GEN $@")
QMP/qmp-commands.txt: $(SRC_PATH)/qemu-monitor.hx
$(call quiet-command,sh $(SRC_PATH)/hxtool -q < $< > $@," GEN $@")
qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
$(call quiet-command,sh $(SRC_PATH)/hxtool -t < $< > $@," GEN $@")
qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu.pod > $@, \
perl -Ww -- $(SRC_PATH)/texi2pod.pl $< qemu.pod && \
pod2man --section=1 --center=" " --release=" " qemu.pod > $@, \
" GEN $@")
qemu-img.1: qemu-img.texi qemu-img-cmds.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-img.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " qemu-img.pod > $@, \
" GEN $@")
fsdev/virtfs-proxy-helper.1: fsdev/virtfs-proxy-helper.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< fsdev/virtfs-proxy-helper.pod && \
$(POD2MAN) --section=1 --center=" " --release=" " fsdev/virtfs-proxy-helper.pod > $@, \
perl -Ww -- $(SRC_PATH)/texi2pod.pl $< qemu-img.pod && \
pod2man --section=1 --center=" " --release=" " qemu-img.pod > $@, \
" GEN $@")
qemu-nbd.8: qemu-nbd.texi
$(call quiet-command, \
perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-nbd.pod && \
$(POD2MAN) --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
perl -Ww -- $(SRC_PATH)/texi2pod.pl $< qemu-nbd.pod && \
pod2man --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
" GEN $@")
dvi: qemu-doc.dvi qemu-tech.dvi
@@ -499,67 +270,51 @@ qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi
ifdef CONFIG_WIN32
VERSION ?= $(shell cat VERSION)
FILE = qemu-$(VERSION)
INSTALLER = qemu-setup-$(VERSION)$(EXESUF)
# tar release (use 'make -k tar' on a checkouted tree)
tar:
rm -rf /tmp/$(FILE)
cp -r . /tmp/$(FILE)
cd /tmp && tar zcvf ~/$(FILE).tar.gz $(FILE) --exclude CVS --exclude .git --exclude .svn
rm -rf /tmp/$(FILE)
nsisflags = -V2 -NOCD
SYSTEM_TARGETS=$(filter %-softmmu,$(TARGET_DIRS))
SYSTEM_PROGS=$(patsubst qemu-system-i386,qemu, \
$(patsubst %-softmmu,qemu-system-%, \
$(SYSTEM_TARGETS)))
ifneq ($(wildcard $(SRC_PATH)/dll),)
ifeq ($(ARCH),x86_64)
# 64 bit executables
DLL_PATH = $(SRC_PATH)/dll/w64
nsisflags += -DW64
else
# 32 bit executables
DLL_PATH = $(SRC_PATH)/dll/w32
endif
endif
USER_TARGETS=$(filter %-user,$(TARGET_DIRS))
USER_PROGS=$(patsubst %-bsd-user,qemu-%, \
$(patsubst %-darwin-user,qemu-%, \
$(patsubst %-linux-user,qemu-%, \
$(USER_TARGETS))))
.PHONY: installer
installer: $(INSTALLER)
INSTDIR=/tmp/qemu-nsis
$(INSTALLER): $(SRC_PATH)/qemu.nsi
make install prefix=${INSTDIR}
ifdef SIGNCODE
(cd ${INSTDIR}; \
for i in *.exe; do \
$(SIGNCODE) $${i}; \
done \
)
endif # SIGNCODE
(cd ${INSTDIR}; \
for i in qemu-system-*.exe; do \
arch=$${i%.exe}; \
arch=$${arch#qemu-system-}; \
echo Section \"$$arch\" Section_$$arch; \
echo SetOutPath \"\$$INSTDIR\"; \
echo File \"\$${BINDIR}\\$$i\"; \
echo SectionEnd; \
done \
) >${INSTDIR}/system-emulations.nsh
makensis $(nsisflags) \
$(if $(BUILD_DOCS),-DCONFIG_DOCUMENTATION="y") \
$(if $(CONFIG_GTK),-DCONFIG_GTK="y") \
-DBINDIR="${INSTDIR}" \
$(if $(DLL_PATH),-DDLLDIR="$(DLL_PATH)") \
-DSRCDIR="$(SRC_PATH)" \
-DOUTFILE="$(INSTALLER)" \
$(SRC_PATH)/qemu.nsi
rm -r ${INSTDIR}
ifdef SIGNCODE
$(SIGNCODE) $(INSTALLER)
endif # SIGNCODE
endif # CONFIG_WIN
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
Makefile: $(GENERATED_HEADERS)
endif
# generate a binary distribution
tarbin:
cd / && tar zcvf ~/qemu-$(VERSION)-$(ARCH).tar.gz \
$(patsubst %,$(bindir)/%, $(SYSTEM_PROGS)) \
$(patsubst %,$(bindir)/%, $(USER_PROGS)) \
$(bindir)/qemu-img \
$(bindir)/qemu-nbd \
$(datadir)/bios.bin \
$(datadir)/vgabios.bin \
$(datadir)/vgabios-cirrus.bin \
$(datadir)/ppc_rom.bin \
$(datadir)/video.x \
$(datadir)/openbios-sparc32 \
$(datadir)/openbios-sparc64 \
$(datadir)/openbios-ppc \
$(datadir)/pxe-ne2k_pci.bin \
$(datadir)/pxe-rtl8139.bin \
$(datadir)/pxe-pcnet.bin \
$(datadir)/pxe-e1000.bin \
$(docdir)/qemu-doc.html \
$(docdir)/qemu-tech.html \
$(mandir)/man1/qemu.1 \
$(mandir)/man1/qemu-img.1 \
$(mandir)/man8/qemu-nbd.8
# Include automatically generated dependency files
# Dependencies in Makefile.objs files come from our recursive subdir rules
-include $(wildcard *.d tests/*.d)
-include $(wildcard *.d audio/*.d slirp/*.d block/*.d net/*.d ui/*.d)

23
Makefile.dis Normal file
View File

@@ -0,0 +1,23 @@
# Makefile for disassemblers.
include ../config-host.mak
include config.mak
include $(SRC_PATH)/rules.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
include $(SRC_PATH)/Makefile.objs
all: $(libdis-y)
# Dummy command so that make thinks it has done something
@true
clean:
rm -f *.o *.d *.a *~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

24
Makefile.hw Normal file
View File

@@ -0,0 +1,24 @@
# Makefile for qemu target independent devices.
include ../config-host.mak
include ../config-all-devices.mak
include config.mak
include $(SRC_PATH)/rules.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH):$(SRC_PATH)/hw)
QEMU_CFLAGS+=-I.. -I$(SRC_PATH)/fpu
include $(SRC_PATH)/Makefile.objs
all: $(hw-obj-y)
# Dummy command so that make thinks it has done something
@true
clean:
rm -f *.o */*.o *.d */*.d *.a */*.a *~ */*~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

Makefile.objs
View File

@@ -1,109 +1,278 @@
#######################################################################
# Common libraries for tools and emulators
stub-obj-y = stubs/
util-obj-y = util/ qobject/ qapi/ qapi-types.o qapi-visit.o qapi-event.o
# QObject
qobject-obj-y = qint.o qstring.o qdict.o qlist.o qfloat.o qbool.o
qobject-obj-y += qjson.o json-lexer.o json-streamer.o json-parser.o
qobject-obj-y += qerror.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = async.o thread-pool.o
block-obj-y += nbd.o block.o blockjob.o
block-obj-y += main-loop.o iohandler.o qemu-timer.o
block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y += block/
block-obj-y += qemu-io-cmds.o
block-obj-y = cutils.o cache-utils.o qemu-malloc.o qemu-option.o module.o
block-obj-y += nbd.o block.o aio.o aes.o osdep.o qemu-config.o
block-obj-$(CONFIG_POSIX) += posix-aio-compat.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
block-nested-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-nested-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o
block-nested-y += parallels.o nbd.o blkdebug.o sheepdog.o
block-nested-$(CONFIG_WIN32) += raw-win32.o
block-nested-$(CONFIG_POSIX) += raw-posix.o
block-nested-$(CONFIG_CURL) += curl.o
block-obj-m = block/
block-obj-y += $(addprefix block/, $(block-nested-y))
net-obj-y = net.o
net-nested-y = queue.o checksum.o util.o
net-nested-y += socket.o
net-nested-y += dump.o
net-nested-$(CONFIG_POSIX) += tap.o
net-nested-$(CONFIG_LINUX) += tap-linux.o
net-nested-$(CONFIG_WIN32) += tap-win32.o
net-nested-$(CONFIG_BSD) += tap-bsd.o
net-nested-$(CONFIG_SOLARIS) += tap-solaris.o
net-nested-$(CONFIG_AIX) += tap-aix.o
net-nested-$(CONFIG_SLIRP) += slirp.o
net-nested-$(CONFIG_VDE) += vde.o
net-obj-y += $(addprefix net/, $(net-nested-y))
fsdev-nested-$(CONFIG_VIRTFS) = qemu-fsdev.o
fsdev-obj-$(CONFIG_VIRTFS) += $(addprefix fsdev/, $(fsdev-nested-y))
######################################################################
# smartcard
# libqemu_common.a: Target independent part of system emulation. The
# long term path is to suppress *all* target specific code in case of
# system emulation, i.e. a single QEMU executable should support all
# CPUs and machines.
libcacard-y += libcacard/cac.o libcacard/event.o
libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
libcacard-y += libcacard/vcardt.o
libcacard/vcard_emul_nss.o-cflags := $(NSS_CFLAGS)
libcacard/vcard_emul_nss.o-libs := $(NSS_LIBS)
######################################################################
# Target independent part of system emulation. The long term path is to
# suppress *all* target specific code in case of system emulation, i.e. a
# single QEMU executable should support all CPUs and machines.
ifeq ($(CONFIG_SOFTMMU),y)
common-obj-y = blockdev.o blockdev-nbd.o block/
common-obj-y += iothread.o
common-obj-y += net/
common-obj-y += qdev-monitor.o device-hotplug.o
common-obj-y = $(block-obj-y) blockdev.o
common-obj-y += $(net-obj-y)
common-obj-y += $(qobject-obj-y)
common-obj-$(CONFIG_LINUX) += $(fsdev-obj-$(CONFIG_LINUX))
common-obj-y += readline.o console.o cursor.o async.o qemu-error.o
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += tcg-runtime.o host-utils.o
common-obj-y += irq.o ioport.o input.o
common-obj-$(CONFIG_PTIMER) += ptimer.o
common-obj-$(CONFIG_MAX7310) += max7310.o
common-obj-$(CONFIG_WM8750) += wm8750.o
common-obj-$(CONFIG_TWL92230) += twl92230.o
common-obj-$(CONFIG_TSC2005) += tsc2005.o
common-obj-$(CONFIG_LM832X) += lm832x.o
common-obj-$(CONFIG_TMP105) += tmp105.o
common-obj-$(CONFIG_STELLARIS_INPUT) += stellaris_input.o
common-obj-$(CONFIG_SSD0303) += ssd0303.o
common-obj-$(CONFIG_SSD0323) += ssd0323.o
common-obj-$(CONFIG_ADS7846) += ads7846.o
common-obj-$(CONFIG_MAX111X) += max111x.o
common-obj-$(CONFIG_DS1338) += ds1338.o
common-obj-y += i2c.o smbus.o smbus_eeprom.o
common-obj-y += eeprom93xx.o
common-obj-y += scsi-disk.o cdrom.o
common-obj-y += scsi-generic.o scsi-bus.o
common-obj-y += usb.o usb-hub.o usb-$(HOST_USB).o usb-hid.o usb-msd.o usb-wacom.o
common-obj-y += usb-serial.o usb-net.o usb-bus.o
common-obj-$(CONFIG_SSI) += ssi.o
common-obj-$(CONFIG_SSI_SD) += ssi-sd.o
common-obj-$(CONFIG_SD) += sd.o
common-obj-y += bt.o bt-host.o bt-vhci.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o usb-bt.o
common-obj-y += bt-hci-csr.o
common-obj-y += buffered_file.o migration.o migration-tcp.o qemu-sockets.o
common-obj-y += qemu-char.o savevm.o #aio.o
common-obj-y += msmouse.o ps2.o
common-obj-y += qdev.o qdev-properties.o
common-obj-y += block-migration.o
common-obj-y += migration/
common-obj-y += qemu-char.o #aio.o
common-obj-y += page_cache.o
common-obj-$(CONFIG_BRLAPI) += baum.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
audio-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
audio-obj-$(CONFIG_SDL) += sdlaudio.o
audio-obj-$(CONFIG_OSS) += ossaudio.o
audio-obj-$(CONFIG_COREAUDIO) += coreaudio.o
audio-obj-$(CONFIG_ALSA) += alsaaudio.o
audio-obj-$(CONFIG_DSOUND) += dsoundaudio.o
audio-obj-$(CONFIG_FMOD) += fmodaudio.o
audio-obj-$(CONFIG_ESD) += esdaudio.o
audio-obj-$(CONFIG_PA) += paaudio.o
audio-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
audio-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
audio-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
audio-obj-y += wavcapture.o
common-obj-y += $(addprefix audio/, $(audio-obj-y))
common-obj-y += audio/
common-obj-y += hw/
common-obj-y += accel.o
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
common-obj-y += dma-helpers.o
common-obj-y += vl.o
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
common-obj-y += tpm.o
common-obj-$(CONFIG_SLIRP) += slirp/
common-obj-y += backends/
common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
######################################################################
# qapi
common-obj-y += qmp-marshal.o
common-obj-y += qmp.o hmp.o
ui-obj-y += keymaps.o
ui-obj-$(CONFIG_SDL) += sdl.o sdl_zoom.o x_keymap.o
ui-obj-$(CONFIG_CURSES) += curses.o
ui-obj-y += vnc.o d3des.o
ui-obj-y += vnc-enc-zlib.o vnc-enc-hextile.o
ui-obj-y += vnc-enc-tight.o vnc-palette.o
ui-obj-$(CONFIG_VNC_TLS) += vnc-tls.o vnc-auth-vencrypt.o
ui-obj-$(CONFIG_VNC_SASL) += vnc-auth-sasl.o
ui-obj-$(CONFIG_COCOA) += cocoa.o
ifdef CONFIG_VNC_THREAD
ui-obj-y += vnc-jobs-async.o
else
ui-obj-y += vnc-jobs-sync.o
endif
common-obj-y += $(addprefix ui/, $(ui-obj-y))
#######################################################################
# Target-independent parts used in system and user emulation
common-obj-y += qemu-log.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/
common-obj-y += disas/
common-obj-y += iov.o acl.o
common-obj-$(CONFIG_THREAD) += qemu-thread.o
common-obj-y += notify.o event_notifier.o
common-obj-y += qemu-timer.o
slirp-obj-y = cksum.o if.o ip_icmp.o ip_input.o ip_output.o
slirp-obj-y += slirp.o mbuf.o misc.o sbuf.o socket.o tcp_input.o tcp_output.o
slirp-obj-y += tcp_subr.o tcp_timer.o udp.o bootp.o tftp.o
common-obj-$(CONFIG_SLIRP) += $(addprefix slirp/, $(slirp-obj-y))
# xen backend driver support
common-obj-$(CONFIG_XEN) += xen_backend.o xen_devconfig.o
common-obj-$(CONFIG_XEN) += xen_console.o xenfb.o xen_disk.o xen_nic.o
######################################################################
# Resource file for Windows executables
version-obj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.o
version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
# libuser
user-obj-y =
user-obj-y += envlist.o path.o
user-obj-y += tcg-runtime.o host-utils.o
user-obj-y += cutils.o cache-utils.o
######################################################################
# tracing
util-obj-y += trace/
target-obj-y += trace/
# libhw
hw-obj-y =
hw-obj-y += vl.o loader.o
hw-obj-y += virtio.o virtio-console.o
hw-obj-y += fw_cfg.o pci.o pci_host.o pcie_host.o
hw-obj-y += watchdog.o
hw-obj-$(CONFIG_ISA_MMIO) += isa_mmio.o
hw-obj-$(CONFIG_ECC) += ecc.o
hw-obj-$(CONFIG_NAND) += nand.o
hw-obj-$(CONFIG_PFLASH_CFI01) += pflash_cfi01.o
hw-obj-$(CONFIG_PFLASH_CFI02) += pflash_cfi02.o
hw-obj-$(CONFIG_M48T59) += m48t59.o
hw-obj-$(CONFIG_ESCC) += escc.o
hw-obj-$(CONFIG_EMPTY_SLOT) += empty_slot.o
hw-obj-$(CONFIG_SERIAL) += serial.o
hw-obj-$(CONFIG_PARALLEL) += parallel.o
hw-obj-$(CONFIG_I8254) += i8254.o
hw-obj-$(CONFIG_PCSPK) += pcspk.o
hw-obj-$(CONFIG_PCKBD) += pckbd.o
hw-obj-$(CONFIG_USB_UHCI) += usb-uhci.o
hw-obj-$(CONFIG_FDC) += fdc.o
hw-obj-$(CONFIG_ACPI) += acpi.o acpi_piix4.o
hw-obj-$(CONFIG_APM) += pm_smbus.o apm.o
hw-obj-$(CONFIG_DMA) += dma.o
# PPC devices
hw-obj-$(CONFIG_OPENPIC) += openpic.o
hw-obj-$(CONFIG_PREP_PCI) += prep_pci.o
# Mac shared devices
hw-obj-$(CONFIG_MACIO) += macio.o
hw-obj-$(CONFIG_CUDA) += cuda.o
hw-obj-$(CONFIG_ADB) += adb.o
hw-obj-$(CONFIG_MAC_NVRAM) += mac_nvram.o
hw-obj-$(CONFIG_MAC_DBDMA) += mac_dbdma.o
# OldWorld PowerMac
hw-obj-$(CONFIG_HEATHROW_PIC) += heathrow_pic.o
hw-obj-$(CONFIG_GRACKLE_PCI) += grackle_pci.o
# NewWorld PowerMac
hw-obj-$(CONFIG_UNIN_PCI) += unin_pci.o
hw-obj-$(CONFIG_DEC_PCI) += dec_pci.o
# PowerPC E500 boards
hw-obj-$(CONFIG_PPCE500_PCI) += ppce500_pci.o
# MIPS devices
hw-obj-$(CONFIG_PIIX4) += piix4.o
# PCI watchdog devices
hw-obj-y += wdt_i6300esb.o
hw-obj-y += msix.o
# PCI network cards
hw-obj-y += ne2000.o
hw-obj-y += eepro100.o
hw-obj-y += pcnet.o
hw-obj-$(CONFIG_SMC91C111) += smc91c111.o
hw-obj-$(CONFIG_LAN9118) += lan9118.o
hw-obj-$(CONFIG_NE2000_ISA) += ne2000-isa.o
# IDE
hw-obj-$(CONFIG_IDE_CORE) += ide/core.o
hw-obj-$(CONFIG_IDE_QDEV) += ide/qdev.o
hw-obj-$(CONFIG_IDE_PCI) += ide/pci.o
hw-obj-$(CONFIG_IDE_ISA) += ide/isa.o
hw-obj-$(CONFIG_IDE_PIIX) += ide/piix.o
hw-obj-$(CONFIG_IDE_CMD646) += ide/cmd646.o
hw-obj-$(CONFIG_IDE_MACIO) += ide/macio.o
hw-obj-$(CONFIG_IDE_VIA) += ide/via.o
# SCSI layer
hw-obj-y += lsi53c895a.o
hw-obj-$(CONFIG_ESP) += esp.o
hw-obj-y += dma-helpers.o sysbus.o isa-bus.o
hw-obj-y += qdev-addr.o
# VGA
hw-obj-$(CONFIG_VGA_PCI) += vga-pci.o
hw-obj-$(CONFIG_VGA_ISA) += vga-isa.o
hw-obj-$(CONFIG_VGA_ISA_MM) += vga-isa-mm.o
hw-obj-$(CONFIG_VMWARE_VGA) += vmware_vga.o
hw-obj-$(CONFIG_RC4030) += rc4030.o
hw-obj-$(CONFIG_DP8393X) += dp8393x.o
hw-obj-$(CONFIG_DS1225Y) += ds1225y.o
hw-obj-$(CONFIG_MIPSNET) += mipsnet.o
# Sound
sound-obj-y =
sound-obj-$(CONFIG_SB16) += sb16.o
sound-obj-$(CONFIG_ES1370) += es1370.o
sound-obj-$(CONFIG_AC97) += ac97.o
sound-obj-$(CONFIG_ADLIB) += fmopl.o adlib.o
sound-obj-$(CONFIG_GUS) += gus.o gusemu_hal.o gusemu_mixer.o
sound-obj-$(CONFIG_CS4231A) += cs4231a.o
adlib.o fmopl.o: QEMU_CFLAGS += -DBUILD_Y8950=0
hw-obj-$(CONFIG_SOUND) += $(sound-obj-y)
hw-obj-$(CONFIG_VIRTFS) += virtio-9p-debug.o virtio-9p-local.o
######################################################################
# guest agent
# libdis
# NOTE: the disassembler code is only needed for debugging
libdis-y =
libdis-$(CONFIG_ALPHA_DIS) += alpha-dis.o
libdis-$(CONFIG_ARM_DIS) += arm-dis.o
libdis-$(CONFIG_CRIS_DIS) += cris-dis.o
libdis-$(CONFIG_HPPA_DIS) += hppa-dis.o
libdis-$(CONFIG_I386_DIS) += i386-dis.o
libdis-$(CONFIG_IA64_DIS) += ia64-dis.o
libdis-$(CONFIG_M68K_DIS) += m68k-dis.o
libdis-$(CONFIG_MICROBLAZE_DIS) += microblaze-dis.o
libdis-$(CONFIG_MIPS_DIS) += mips-dis.o
libdis-$(CONFIG_PPC_DIS) += ppc-dis.o
libdis-$(CONFIG_S390_DIS) += s390-dis.o
libdis-$(CONFIG_SH4_DIS) += sh4-dis.o
libdis-$(CONFIG_SPARC_DIS) += sparc-dis.o
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
vl.o: qemu-options.def
os-posix.o: qemu-options.def
os-win32.o: qemu-options.def
qemu-options.def: $(SRC_PATH)/qemu-options.hx
$(call quiet-command,sh $(SRC_PATH)/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
# FIXME: a few definitions from qapi-types.o/qapi-visit.o are needed
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/
qga-vss-dll-obj-y = qga/

Makefile.target
View File

@@ -1,210 +1,336 @@
# -*- Mode: makefile -*-
GENERATED_HEADERS = config-target.h
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
include ../config-host.mak
include config-target.mak
include config-devices.mak
include config-target.mak
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH))
ifdef CONFIG_LINUX
QEMU_CFLAGS += -I../linux-headers
ifneq ($(HWDIR),)
include $(HWDIR)/config.mak
endif
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target-$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS+=-I$(SRC_PATH)/include
TARGET_PATH=$(SRC_PATH)/target-$(TARGET_BASE_ARCH)
$(call set-vpath, $(SRC_PATH):$(TARGET_PATH):$(SRC_PATH)/hw)
QEMU_CFLAGS+= -I.. -I$(TARGET_PATH) -DNEED_CPU_H
include $(SRC_PATH)/Makefile.objs
ifdef CONFIG_USER_ONLY
# user emulator name
QEMU_PROG=qemu-$(TARGET_NAME)
QEMU_PROG_BUILD = $(QEMU_PROG)
QEMU_PROG=qemu-$(TARGET_ARCH2)
else
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
ifneq (,$(findstring -mwindows,$(libs_softmmu)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_NAME)w$(EXESUF)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
QEMU_PROG_BUILD = $(QEMU_PROGW)
ifeq ($(TARGET_ARCH), i386)
QEMU_PROG=qemu$(EXESUF)
else
QEMU_PROG_BUILD = $(QEMU_PROG)
QEMU_PROG=qemu-system-$(TARGET_ARCH2)$(EXESUF)
endif
endif
PROGS=$(QEMU_PROG) $(QEMU_PROGW)
STPFILES=
PROGS=$(QEMU_PROG)
LIBS+=-lm
kvm.o kvm-all.o vhost.o vhost_net.o: QEMU_CFLAGS+=$(KVM_CFLAGS)
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp $(QEMU_PROG)-simpletrace.stp
ifdef CONFIG_USER_ONLY
TARGET_TYPE=user
else
TARGET_TYPE=system
endif
$(QEMU_PROG).stp-installed: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp-installed")
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backends=$(TRACE_BACKENDS) \
--binary=$(realpath .)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp")
$(QEMU_PROG)-simpletrace.stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=simpletrace-stap \
--backends=$(TRACE_BACKENDS) \
--probe-prefix=qemu.$(TARGET_TYPE).$(TARGET_NAME) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG)-simpletrace.stp")
else
stap:
endif
all: $(PROGS) stap
all: $(PROGS)
# Dummy command so that make thinks it has done something
@true
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-y += disas.o
obj-$(call notempty,$(TARGET_XML_FILES)) += gdbstub-xml.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
libobj-y = exec.o translate-all.o cpu-exec.o translate.o
libobj-y += tcg/tcg.o
libobj-$(CONFIG_SOFTFLOAT) += fpu/softfloat.o
libobj-$(CONFIG_NOSOFTFLOAT) += fpu/softfloat-native.o
libobj-y += op_helper.o helper.o
ifeq ($(TARGET_BASE_ARCH), i386)
libobj-y += cpuid.o
endif
libobj-$(CONFIG_NEED_MMU) += mmu.o
libobj-$(TARGET_ARM) += neon_helper.o iwmmxt_helper.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decContext.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/decNumber.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal32.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal64.o
obj-$(CONFIG_LIBDECNUMBER) += libdecnumber/dpd/decimal128.o
libobj-y += disas.o
$(libobj-y): $(GENERATED_HEADERS)
# libqemu
translate.o: translate.c cpu.h
translate-all.o: translate-all.c cpu.h
tcg/tcg.o: cpu.h
# HELPER_CFLAGS is used for all the code compiled with static register
# variables
op_helper.o cpu-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
# Note: this is a workaround. The real fix is to avoid compiling
# cpu_signal_handler() in cpu-exec.c.
signal.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
#########################################################
# Linux user emulator target
ifdef CONFIG_LINUX_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
$(call set-vpath, $(SRC_PATH)/linux-user:$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR))
obj-y += linux-user/
obj-y += gdbstub.o thunk.o user-exec.o
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user -I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR)
obj-y = main.o syscall.o strace.o mmap.o signal.o thunk.o \
elfload.o linuxload.o uaccess.o gdbstub.o cpu-uname.o \
qemu-malloc.o
obj-$(TARGET_HAS_BFLT) += flatload.o
obj-$(TARGET_I386) += vm86.o
obj-i386-y += ioport-user.o
nwfpe-obj-y = fpa11.o fpa11_cpdo.o fpa11_cpdt.o fpa11_cprt.o fpopcode.o
nwfpe-obj-y += single_cpdo.o double_cpdo.o extended_cpdo.o
obj-arm-y += $(addprefix nwfpe/, $(nwfpe-obj-y))
obj-arm-y += arm-semi.o
obj-m68k-y += m68k-sim.o m68k-semi.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_LINUX_USER
#########################################################
# Darwin user emulator target
ifdef CONFIG_DARWIN_USER
$(call set-vpath, $(SRC_PATH)/darwin-user)
QEMU_CFLAGS+=-I$(SRC_PATH)/darwin-user -I$(SRC_PATH)/darwin-user/$(TARGET_ARCH)
# Leave some space for the regular program loading zone
LDFLAGS+=-Wl,-segaddr,__STD_PROG_ZONE,0x1000 -image_base 0x0e000000
LIBS+=-lmx
obj-y = main.o commpage.o machload.o mmap.o signal.o syscall.o thunk.o \
gdbstub.o
obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_DARWIN_USER
#########################################################
# BSD user emulator target
ifdef CONFIG_BSD_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR) \
-I$(SRC_PATH)/bsd-user/$(HOST_VARIANT_DIR)
$(call set-vpath, $(SRC_PATH)/bsd-user)
obj-y += bsd-user/
obj-y += gdbstub.o user-exec.o
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ARCH)
obj-y = main.o bsdload.o elfload.o mmap.o signal.o strace.o syscall.o \
gdbstub.o uaccess.o
obj-i386-y += ioport-user.o
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../libuser/, $(user-obj-y))
obj-y += $(addprefix ../libdis-user/, $(libdis-y))
obj-y += $(libobj-y)
endif #CONFIG_BSD_USER
#########################################################
# System emulator target
ifdef CONFIG_SOFTMMU
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o numa.o
obj-y += qtest.o bootdevice.o
obj-y += hw/
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
LIBS+=$(libs_softmmu)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o
obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o
# virtio has to be here due to weird dependency between PCI and virtio-net.
# need to fix this properly
obj-y += virtio-blk.o virtio-balloon.o virtio-net.o virtio-serial-bus.o
obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
obj-y += vhost_net.o
obj-$(CONFIG_VHOST_NET) += vhost.o
obj-$(CONFIG_VIRTFS) += virtio-9p.o
obj-y += rwhandler.o
obj-$(CONFIG_KVM) += kvm.o kvm-all.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
LIBS+=-lz
QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
QEMU_CFLAGS += $(VNC_SASL_CFLAGS)
QEMU_CFLAGS += $(VNC_JPEG_CFLAGS)
QEMU_CFLAGS += $(VNC_PNG_CFLAGS)
# xen backend driver support
obj-$(CONFIG_XEN) += xen_machine_pv.o xen_domainbuild.o
# USB layer
obj-$(CONFIG_USB_OHCI) += usb-ohci.o
# PCI network cards
obj-y += rtl8139.o
obj-y += e1000.o
# Inter-VM PCI shared memory
obj-$(CONFIG_KVM) += ivshmem.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
obj-y += hw/sparc64/
obj-i386-y += vga.o
obj-i386-y += mc146818rtc.o i8259.o pc.o
obj-i386-y += cirrus_vga.o apic.o ioapic.o piix_pci.o
obj-i386-y += vmmouse.o vmport.o hpet.o applesmc.o
obj-i386-y += device-hotplug.o pci-hotplug.o smbios.o wdt_ib700.o
obj-i386-y += debugcon.o multiboot.o
obj-i386-y += pc_piix.o
# shared objects
obj-ppc-y = ppc.o
obj-ppc-y += vga.o
# PREP target
obj-ppc-y += i8259.o mc146818rtc.o
obj-ppc-y += ppc_prep.o
# OldWorld PowerMac
obj-ppc-y += ppc_oldworld.o
# NewWorld PowerMac
obj-ppc-y += ppc_newworld.o
# PowerPC 4xx boards
obj-ppc-y += ppc4xx_devs.o ppc4xx_pci.o ppc405_uc.o ppc405_boards.o
obj-ppc-y += ppc440.o ppc440_bamboo.o
# PowerPC E500 boards
obj-ppc-y += ppce500_mpc8544ds.o
obj-ppc-$(CONFIG_KVM) += kvm_ppc.o
obj-ppc-$(CONFIG_FDT) += device_tree.o
obj-mips-y = mips_r4k.o mips_jazz.o mips_malta.o mips_mipssim.o
obj-mips-y += mips_addr.o mips_timer.o mips_int.o
obj-mips-y += vga.o i8259.o
obj-mips-y += g364fb.o jazz_led.o
obj-mips-y += gt64xxx.o mc146818rtc.o
obj-mips-y += cirrus_vga.o
obj-mips-$(CONFIG_FULONG) += bonito.o vt82c686.o mips_fulong2e.o
obj-microblaze-y = petalogix_s3adsp1800_mmu.o
obj-microblaze-y += microblaze_pic_cpu.o
obj-microblaze-y += xilinx_intc.o
obj-microblaze-y += xilinx_timer.o
obj-microblaze-y += xilinx_uartlite.o
obj-microblaze-y += xilinx_ethlite.o
obj-microblaze-$(CONFIG_FDT) += device_tree.o
# Boards
obj-cris-y = cris_pic_cpu.o
obj-cris-y += cris-boot.o
obj-cris-y += etraxfs.o axis_dev88.o
obj-cris-y += axis_dev88.o
# IO blocks
obj-cris-y += etraxfs_dma.o
obj-cris-y += etraxfs_pic.o
obj-cris-y += etraxfs_eth.o
obj-cris-y += etraxfs_timer.o
obj-cris-y += etraxfs_ser.o
ifeq ($(TARGET_ARCH), sparc64)
obj-sparc-y = sun4u.o apb_pci.o
obj-sparc-y += vga.o
obj-sparc-y += mc146818rtc.o
obj-sparc-y += cirrus_vga.o
else
obj-y += hw/$(TARGET_BASE_ARCH)/
obj-sparc-y = sun4m.o lance.o tcx.o sun4m_iommu.o slavio_intctl.o
obj-sparc-y += slavio_timer.o slavio_misc.o sparc32_dma.o
obj-sparc-y += cs4231.o eccmemctl.o sbi.o sun4c_intctl.o
endif
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
obj-arm-y = integratorcp.o versatilepb.o arm_pic.o arm_timer.o
obj-arm-y += arm_boot.o pl011.o pl031.o pl050.o pl080.o pl110.o pl181.o pl190.o
obj-arm-y += versatile_pci.o
obj-arm-y += realview_gic.o realview.o arm_sysctl.o arm11mpcore.o a9mpcore.o
obj-arm-y += armv7m.o armv7m_nvic.o stellaris.o pl022.o stellaris_enet.o
obj-arm-y += pl061.o
obj-arm-y += arm-semi.o
obj-arm-y += pxa2xx.o pxa2xx_pic.o pxa2xx_gpio.o pxa2xx_timer.o pxa2xx_dma.o
obj-arm-y += pxa2xx_lcd.o pxa2xx_mmci.o pxa2xx_pcmcia.o pxa2xx_keypad.o
obj-arm-y += gumstix.o
obj-arm-y += zaurus.o ide/microdrive.o spitz.o tosa.o tc6393xb.o
obj-arm-y += omap1.o omap_lcdc.o omap_dma.o omap_clk.o omap_mmc.o omap_i2c.o \
omap_gpio.o omap_intc.o omap_uart.o
obj-arm-y += omap2.o omap_dss.o soc_dma.o omap_gptimer.o omap_synctimer.o \
omap_gpmc.o omap_sdrc.o omap_spi.o omap_tap.o omap_l4.o
obj-arm-y += omap_sx1.o palm.o tsc210x.o
obj-arm-y += nseries.o blizzard.o onenand.o vga.o cbus.o tusb6010.o usb-musb.o
obj-arm-y += mst_fpga.o mainstone.o
obj-arm-y += musicpal.o bitbang_i2c.o marvell_88w8618_audio.o
obj-arm-y += framebuffer.o
obj-arm-y += syborg.o syborg_fb.o syborg_interrupt.o syborg_keyboard.o
obj-arm-y += syborg_serial.o syborg_timer.o syborg_pointer.o syborg_rtc.o
obj-arm-y += syborg_virtio.o
obj-sh4-y = shix.o r2d.o sh7750.o sh7750_regnames.o tc58128.o
obj-sh4-y += sh_timer.o sh_serial.o sh_intc.o sh_pci.o sm501.o
obj-sh4-y += ide/mmio.o
obj-m68k-y = an5206.o mcf5206.o mcf_uart.o mcf_intc.o mcf5208.o mcf_fec.o
obj-m68k-y += m68k-semi.o dummy_m68k.o
obj-s390x-y = s390-virtio-bus.o s390-virtio.o
obj-alpha-y = alpha_palcode.o
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
monitor.o: qemu-monitor.h
$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y): $(GENERATED_HEADERS)
obj-y += $(addprefix ../, $(common-obj-y))
obj-y += $(addprefix ../libdis/, $(libdis-y))
obj-y += $(libobj-y)
obj-y += $(addprefix $(HWDIR)/, $(hw-obj-y))
endif # CONFIG_SOFTMMU
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
$(QEMU_PROG): $(obj-y) $(obj-$(TARGET_BASE_ARCH)-y)
$(call LINK,$(obj-y) $(obj-$(TARGET_BASE_ARCH)-y))
target-obj-y :=
block-obj-y :=
common-obj-y :=
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,,target-obj-y)
target-obj-y-save := $(target-obj-y)
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
target-obj-y := $(target-obj-y-save)
all-obj-y += $(common-obj-y)
all-obj-y += $(target-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
$(call LINK,$^)
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
$(call quiet-command,rm -f $@ && $(SHELL) $(SRC_PATH)/scripts/feature_to_c.sh $@ $(TARGET_XML_FILES)," GEN $(TARGET_DIR)$@")
hmp-commands.h: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
qmp-commands-old.h: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
qemu-monitor.h: $(SRC_PATH)/qemu-monitor.hx
$(call quiet-command,sh $(SRC_PATH)/hxtool -h < $< > $@," GEN $(TARGET_DIR)$@")
clean:
rm -f *.a *~ $(PROGS)
rm -f $(shell find . -name '*.[od]')
rm -f hmp-commands.h qmp-commands-old.h gdbstub-xml.c
ifdef CONFIG_TRACE_SYSTEMTAP
rm -f *.stp
endif
rm -f *.o *.a *~ $(PROGS) nwfpe/*.o fpu/*.o
rm -f *.d */*.d tcg/*.o ide/*.o
rm -f qemu-monitor.h gdbstub-xml.c
install: all
ifneq ($(PROGS),)
$(call install-prog,$(PROGS),$(DESTDIR)$(bindir))
endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp-installed "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG).stp"
$(INSTALL_DATA) $(QEMU_PROG)-simpletrace.stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG)-simpletrace.stp"
$(INSTALL) -m 755 $(STRIP_OPT) $(PROGS) "$(DESTDIR)$(bindir)"
endif
GENERATED_HEADERS += config-target.h
Makefile: $(GENERATED_HEADERS)
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

23
Makefile.user Normal file
View File

@@ -0,0 +1,23 @@
# Makefile for qemu target independent user files.
include ../config-host.mak
include $(SRC_PATH)/rules.mak
-include config.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
include $(SRC_PATH)/Makefile.objs
all: $(user-obj-y)
# Dummy command so that make thinks it has done something
@true
clean:
rm -f *.o *.d *.a *~
# Include automatically generated dependency files
-include $(wildcard *.d */*.d)

91
QMP/README Normal file
View File

@@ -0,0 +1,91 @@
QEMU Monitor Protocol
=====================
Introduction
-------------
The QEMU Monitor Protocol (QMP) allows applications to communicate with
QEMU's Monitor.
QMP is JSON[1] based and currently has the following features:
- Lightweight, text-based, easy to parse data format
- Asynchronous message support (i.e. events)
- Capabilities Negotiation
For detailed information on QMP's usage, please, refer to the following files:
o qmp-spec.txt QEMU Monitor Protocol current specification
o qmp-commands.txt QMP supported commands (auto-generated at build-time)
o qmp-events.txt List of available asynchronous events
There are also two simple Python scripts available:
o qmp-shell A shell
o vm-info Show some information about the Virtual Machine
IMPORTANT: It's strongly recommended to read the 'Stability Considerations'
section in the qmp-commands.txt file before making any serious use of QMP.
[1] http://www.json.org
Usage
-----
To enable QMP, you need a QEMU monitor instance in "control mode". There are
two ways of doing this.
The simplest one is using the '-qmp' command-line option. The following
example makes QMP available on localhost port 4444:
$ qemu [...] -qmp tcp:localhost:4444,server
However, for more complex setups, such as multiple monitors, the '-mon'
command-line option should be used together with the '-chardev' one.
For instance, the following example creates one user monitor on stdio and one
QMP monitor on localhost port 4444.
$ qemu [...] -chardev stdio,id=mon0 -mon chardev=mon0,mode=readline \
-chardev socket,id=mon1,host=localhost,port=4444,server \
-mon chardev=mon1,mode=control
Please refer to QEMU's manpage for more information.
Simple Testing
--------------
To manually test QMP one can connect with telnet and issue commands by hand:
$ telnet localhost 4444
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 13, "major": 0}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "query-version" }
{"return": {"qemu": {"micro": 50, "minor": 13, "major": 0}, "package": ""}}
Development Process
-------------------
When changing QMP's interface (by adding new commands, events or modifying
existing ones) it's mandatory to update the relevant documentation, which is
one (or more) of the files listed in the 'Introduction' section*.
Also, it's strongly recommended to send the documentation patch first, before
making any code change, because:
1. It avoids the code dictating the interface
2. Review can improve your interface. Letting that happen before
you implement it can save you work.
* The qmp-commands.txt file is generated from the qemu-monitor.hx one, which
is the file that should be edited.
Homepage
--------
http://wiki.qemu.org/QMP

202
QMP/qmp-events.txt Normal file
View File

@@ -0,0 +1,202 @@
QEMU Monitor Protocol Events
============================
BLOCK_IO_ERROR
--------------
Emitted when a disk I/O error occurs.
Data:
- "device": device name (json-string)
- "operation": I/O operation (json-string, "read" or "write")
- "action": action that has been taken, it's one of the following (json-string):
"ignore": error has been ignored
"report": error has been reported to the device
"stop": error caused VM to be stopped
Example:
{ "event": "BLOCK_IO_ERROR",
"data": { "device": "ide0-hd1",
"operation": "write",
"action": "stop" },
"timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
Note: If action is "stop", a STOP event will eventually follow the
BLOCK_IO_ERROR event.
RESET
-----
Emitted when the Virtual Machine is reset.
Data: None.
Example:
{ "event": "RESET",
"timestamp": { "seconds": 1267041653, "microseconds": 9518 } }
RESUME
------
Emitted when the Virtual Machine resumes execution.
Data: None.
Example:
{ "event": "RESUME",
"timestamp": { "seconds": 1271770767, "microseconds": 582542 } }
RTC_CHANGE
----------
Emitted when the guest changes the RTC time.
Data:
- "offset": delta against the host UTC in seconds (json-number)
Example:
{ "event": "RTC_CHANGE",
"data": { "offset": 78 },
"timestamp": { "seconds": 1267020223, "microseconds": 435656 } }
SHUTDOWN
--------
Emitted when the Virtual Machine is powered down.
Data: None.
Example:
{ "event": "SHUTDOWN",
"timestamp": { "seconds": 1267040730, "microseconds": 682951 } }
Note: If the command-line option "-no-shutdown" has been specified, a STOP
event will eventually follow the SHUTDOWN event.
STOP
----
Emitted when the Virtual Machine is stopped.
Data: None.
Example:
{ "event": "SHUTDOWN",
"timestamp": { "seconds": 1267041730, "microseconds": 281295 } }
VNC_CONNECTED
-------------
Emitted when a VNC client establishes a connection.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "service": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "service": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
Example:
{ "event": "VNC_CONNECTED",
"data": {
"server": { "auth": "sasl", "family": "ipv4",
"service": "5901", "host": "0.0.0.0" },
"client": { "family": "ipv4", "service": "58425",
"host": "127.0.0.1" } },
"timestamp": { "seconds": 1262976601, "microseconds": 975795 } }
Note: This event is emitted before any authentication takes place, thus
the authentication ID is not provided.
VNC_DISCONNECTED
----------------
Emitted when the connection is closed.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "service": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "service": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "x509_dname": TLS dname (json-string, optional)
- "sasl_username": SASL username (json-string, optional)
Example:
{ "event": "VNC_DISCONNECTED",
"data": {
"server": { "auth": "sasl", "family": "ipv4",
"service": "5901", "host": "0.0.0.0" },
"client": { "family": "ipv4", "service": "58425",
"host": "127.0.0.1", "sasl_username": "luiz" } },
"timestamp": { "seconds": 1262976601, "microseconds": 975795 } }
VNC_INITIALIZED
---------------
Emitted after authentication takes place (if any) and the VNC session is
made active.
Data:
- "server": Server information (json-object)
- "host": IP address (json-string)
- "service": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "auth": authentication method (json-string, optional)
- "client": Client information (json-object)
- "host": IP address (json-string)
- "service": port number (json-string)
- "family": address family (json-string, "ipv4" or "ipv6")
- "x509_dname": TLS dname (json-string, optional)
- "sasl_username": SASL username (json-string, optional)
Example:
{ "event": "VNC_INITIALIZED",
"data": {
"server": { "auth": "sasl", "family": "ipv4",
"service": "5901", "host": "0.0.0.0"},
"client": { "family": "ipv4", "service": "46089",
"host": "127.0.0.1", "sasl_username": "luiz" } },
"timestamp": { "seconds": 1263475302, "microseconds": 150772 } }
WATCHDOG
--------
Emitted when the watchdog device's timer expires.
Data:
- "action": action that has been taken; it's one of the following (json-string):
"reset", "shutdown", "poweroff", "pause", "debug", or "none"
Example:
{ "event": "WATCHDOG",
"data": { "action": "reset" },
"timestamp": { "seconds": 1267061043, "microseconds": 959568 } }
Note: If action is "reset", "shutdown", or "pause", the WATCHDOG event is
followed respectively by the RESET, SHUTDOWN, or STOP events.
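A client watching for these events typically reads the monitor socket line by
line and dispatches on the "event" member. The following is a rough sketch
only (Python, like the helper scripts under QMP/); the socket file object and
the particular events handled are assumptions for illustration, not part of
this file:

import json

def handle_event(ev):
    # Dispatch on the event name; the "data" members are described above.
    name = ev['event']
    if name == 'BLOCK_IO_ERROR' and ev['data']['action'] == 'stop':
        print('VM stopped after I/O error on ' + ev['data']['device'])
    elif name == 'SHUTDOWN':
        print('guest powered down')
    else:
        print('unhandled event: ' + name)

def read_response(sockfile):
    # Return the next command response; handle any events seen on the way.
    while True:
        msg = json.loads(sockfile.readline())
        if 'event' in msg:
            handle_event(msg)
        else:
            return msg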

73
QMP/qmp-shell Executable file

@@ -0,0 +1,73 @@
#!/usr/bin/python
#
# Simple QEMU shell on top of QMP
#
# Copyright (C) 2009 Red Hat Inc.
#
# Authors:
# Luiz Capitulino <lcapitulino@redhat.com>
#
# This work is licensed under the terms of the GNU GPL, version 2. See
# the COPYING file in the top-level directory.
#
# Usage:
#
# Start QEMU with:
#
# $ qemu [...] -monitor control,unix:./qmp,server
#
# Run the shell:
#
# $ qmp-shell ./qmp
#
# Commands have the following format:
#
# < command-name > [ arg-name1=arg1 ] ... [ arg-nameN=argN ]
#
# For example:
#
# (QEMU) info item=network
import qmp
import readline
from sys import argv,exit
def shell_help():
print 'bye exit from the shell'
def main():
if len(argv) != 2:
print 'qemu-shell <unix-socket>'
exit(1)
qemu = qmp.QEMUMonitorProtocol(argv[1])
qemu.connect()
qemu.send("qmp_capabilities")
print 'Connected!'
while True:
try:
cmd = raw_input('(QEMU) ')
except EOFError:
print
break
if cmd == '':
continue
elif cmd == 'bye':
break
elif cmd == 'help':
shell_help()
else:
try:
resp = qemu.send(cmd)
if resp == None:
print 'Disconnected'
break
print resp
except IndexError:
print '-> command format: <command-name> ',
print '[arg-name1=arg1] ... [arg-nameN=argN]'
if __name__ == '__main__':
main()


@@ -1,17 +1,21 @@
QEMU Machine Protocol Specification
QEMU Monitor Protocol Specification - Version 0.1
1. Introduction
===============
This document specifies the QEMU Machine Protocol (QMP), a JSON-based protocol
which is available for applications to operate QEMU at the machine-level.
This document specifies the QEMU Monitor Protocol (QMP), a JSON-based protocol
which is available for applications to control QEMU at the machine-level.
To enable QMP support, QEMU has to be run in "control mode". This is done by
starting QEMU with the appropriate command-line options. Please, refer to the
QEMU manual page for more information.
2. Protocol Specification
=========================
This section details the protocol format. For the purpose of this document
"Client" is any application which is using QMP to communicate with QEMU and
"Server" is QEMU itself.
"Client" is any application which is communicating with QEMU in control mode,
and "Server" is QEMU itself.
JSON data structures, when mentioned in this document, are always in the
following format:
@@ -43,14 +47,14 @@ that the connection has been successfully established and that the Server is
ready for capabilities negotiation (for more information refer to section
'4. Capabilities Negotiation').
The greeting message format is:
The format is:
{ "QMP": { "version": json-object, "capabilities": json-array } }
Where,
- The "version" member contains the Server's version information (the format
is the same as that of the query-version command)
is the same as that of the 'query-version' command)
- The "capabilities" member specify the availability of features beyond the
baseline specification
@@ -79,7 +83,10 @@ of a command execution: success or error.
2.4.1 success
-------------
The format of a success response is:
The success response is issued when the command execution has finished
without errors.
The format is:
{ "return": json-object, "id": json-value }
@@ -89,22 +96,28 @@ The format of a success response is:
in a per-command basis or an empty json-object if the command does not
return data
- The "id" member contains the transaction identification associated
with the command execution if issued by the Client
with the command execution (if issued by the Client)
2.4.2 error
-----------
The format of an error response is:
The error response is issued when the command execution could not be
completed because of an error condition.
{ "error": { "class": json-string, "desc": json-string }, "id": json-value }
The format is:
{ "error": { "class": json-string, "data": json-object, "desc": json-string },
"id": json-value }
Where,
- The "class" member contains the error class name (eg. "GenericError")
- The "class" member contains the error class name (eg. "ServiceUnavailable")
- The "data" member contains specific error data and is defined in a
per-command basis, it will be an empty json-object if the error has no data
- The "desc" member is a human-readable error message. Clients should
not attempt to parse this message.
- The "id" member contains the transaction identification associated with
the command execution if issued by the Client
the command execution (if issued by the Client)
NOTE: Some errors can occur before the Server is able to read the "id" member,
in these cases the "id" member will not be part of the error response, even
@@ -114,9 +127,9 @@ if provided by the client.
-----------------------
As a result of state changes, the Server may send messages unilaterally
to the Client at any time. They are called "asynchronous events".
to the Client at any time. They are called 'asynchronous events'.
The format of asynchronous events is:
The format is:
{ "event": json-string, "data": json-object,
"timestamp": { "seconds": json-number, "microseconds": json-number } }
@@ -137,37 +150,37 @@ qmp-events.txt file.
===============
This section provides some examples of real QMP usage, in all of them
"C" stands for "Client" and "S" stands for "Server".
'C' stands for 'Client' and 'S' stands for 'Server'.
3.1 Server greeting
-------------------
S: { "QMP": { "version": { "qemu": { "micro": 50, "minor": 6, "major": 1 },
"package": ""}, "capabilities": []}}
S: {"QMP": {"version": {"qemu": "0.12.50", "package": ""}, "capabilities": []}}
3.2 Simple 'stop' execution
---------------------------
C: { "execute": "stop" }
S: { "return": {} }
S: {"return": {}}
3.3 KVM information
-------------------
C: { "execute": "query-kvm", "id": "example" }
S: { "return": { "enabled": true, "present": true }, "id": "example"}
S: {"return": {"enabled": true, "present": true}, "id": "example"}
3.4 Parsing error
------------------
C: { "execute": }
S: { "error": { "class": "GenericError", "desc": "Invalid JSON syntax" } }
S: {"error": {"class": "JSONParsing", "desc": "Invalid JSON syntax", "data":
{}}}
3.5 Powerdown event
-------------------
S: { "timestamp": { "seconds": 1258551470, "microseconds": 802384 },
"event": "POWERDOWN" }
S: {"timestamp": {"seconds": 1258551470, "microseconds": 802384}, "event":
"POWERDOWN"}
4. Capabilities Negotiation
----------------------------
@@ -175,17 +188,17 @@ S: { "timestamp": { "seconds": 1258551470, "microseconds": 802384 },
When a Client successfully establishes a connection, the Server is in
Capabilities Negotiation mode.
In this mode only the qmp_capabilities command is allowed to run, all
other commands will return the CommandNotFound error. Asynchronous
messages are not delivered either.
In this mode only the 'qmp_capabilities' command is allowed to run, all
other commands will return the CommandNotFound error. Asynchronous messages
are not delivered either.
Clients should use the qmp_capabilities command to enable capabilities
Clients should use the 'qmp_capabilities' command to enable capabilities
advertised in the Server's greeting (section '2.2 Server Greeting') they
support.
When the qmp_capabilities command is issued, and if it does not return an
When the 'qmp_capabilities' command is issued, and if it does not return an
error, the Server enters in Command mode where capabilities changes take
effect, all commands (except qmp_capabilities) are allowed and asynchronous
effect, all commands (except 'qmp_capabilities') are allowed and asynchronous
messages are delivered.
5 Compatibility Considerations
@@ -196,27 +209,13 @@ incompatible way are disabled by default and will be advertised by the
capabilities array (section '2.2 Server Greeting'). Thus, Clients can check
that array and enable the capabilities they support.
The QMP Server performs a type check on the arguments to a command. It
generates an error if a value does not have the expected type for its
key, or if it does not understand a key that the Client included. The
strictness of the Server catches wrong assumptions of Clients about
the Server's schema. Clients can assume that, when such validation
errors occur, they will be reported before the command generated any
side effect.
Additionally, Clients must not assume any particular:
However, Clients must not assume any particular:
- Length of json-arrays
- Size of json-objects; in particular, future versions of QEMU may add
new keys and Clients should be able to ignore them.
- Size of json-objects or length of json-arrays
- Order of json-object members or json-array elements
- Amount of errors generated by a command, that is, new errors can be added
to any existing command in newer versions of the Server
Of course, the Server does guarantee to send valid JSON. But apart from
this, a Client should be "conservative in what they send, and liberal in
what they accept".
6. Downstream extension of QMP
------------------------------
@@ -236,7 +235,7 @@ arguments, errors, asynchronous events, and so forth.
Any new names downstream wishes to add must begin with '__'. To
ensure compatibility with other downstreams, it is strongly
recommended that you prefix your downstream names with '__RFQDN_' where
recommended that you prefix your downstream names with '__RFQDN_' where
RFQDN is a valid, reverse fully qualified domain name which you
control. For example, a qemu-kvm specific monitor command would be:

76
QMP/qmp.py Normal file

@@ -0,0 +1,76 @@
# QEMU Monitor Protocol Python class
#
# Copyright (C) 2009 Red Hat Inc.
#
# Authors:
# Luiz Capitulino <lcapitulino@redhat.com>
#
# This work is licensed under the terms of the GNU GPL, version 2. See
# the COPYING file in the top-level directory.
import socket, json
class QMPError(Exception):
pass
class QMPConnectError(QMPError):
pass
class QEMUMonitorProtocol:
def connect(self):
self.sock.connect(self.filename)
data = self.__json_read()
if data == None:
raise QMPConnectError
if not data.has_key('QMP'):
raise QMPConnectError
return data['QMP']['capabilities']
def close(self):
self.sock.close()
def send_raw(self, line):
self.sock.send(str(line))
return self.__json_read()
def send(self, cmdline):
cmd = self.__build_cmd(cmdline)
self.__json_send(cmd)
resp = self.__json_read()
if resp == None:
return
elif resp.has_key('error'):
return resp['error']
else:
return resp['return']
def __build_cmd(self, cmdline):
cmdargs = cmdline.split()
qmpcmd = { 'execute': cmdargs[0], 'arguments': {} }
for arg in cmdargs[1:]:
opt = arg.split('=')
try:
value = int(opt[1])
except ValueError:
value = opt[1]
qmpcmd['arguments'][opt[0]] = value
return qmpcmd
def __json_send(self, cmd):
# XXX: We have to send an extra trailing character, otherwise
# the Server won't read our input
self.sock.send(json.dumps(cmd) + ' ')
def __json_read(self):
try:
while True:
line = json.loads(self.sockfile.readline())
if not 'event' in line:
return line
except ValueError:
return
def __init__(self, filename):
self.filename = filename
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sockfile = self.sock.makefile()
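For reference, a hypothetical interactive use of this class looks like the
following sketch (the ./qmp socket path matches the usage comments in
qmp-shell and vm-info; the device name passed to 'eject' is only an example):

import qmp

mon = qmp.QEMUMonitorProtocol('./qmp')
mon.connect()                             # reads the greeting
mon.send('qmp_capabilities')              # leave capabilities negotiation mode
print(mon.send('query-version'))          # command without arguments
print(mon.send('eject device=ide1-cd0'))  # key=value arguments (see __build_cmd)
mon.close()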

33
QMP/vm-info Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/python
#
# Print Virtual Machine information
#
# Usage:
#
# Start QEMU with:
#
# $ qemu [...] -monitor control,unix:./qmp,server
#
# Run vm-info:
#
# $ vm-info ./qmp
#
# Luiz Capitulino <lcapitulino@redhat.com>
import qmp
from sys import argv,exit
def main():
if len(argv) != 2:
print 'vm-info <unix-socket>'
exit(1)
qemu = qmp.QEMUMonitorProtocol(argv[1])
qemu.connect()
qemu.send("qmp_capabilities")
for cmd in [ 'version', 'kvm', 'status', 'uuid', 'balloon' ]:
print cmd + ': ' + str(qemu.send('query-' + cmd))
if __name__ == '__main__':
main()

4
README

@@ -1,3 +1,3 @@
Read the documentation in qemu-doc.html or on http://wiki.qemu-project.org
Read the documentation in qemu-doc.html.
- QEMU team
Fabrice Bellard.

37
TODO Normal file

@@ -0,0 +1,37 @@
General:
-------
- cycle counter for all archs
- cpu_interrupt() win32/SMP fix
- merge PIC spurious interrupt patch
- warning for OS/2: must not use 128 MB memory (merge bochs cmos patch ?)
- config file (at least for windows/Mac OS X)
- update doc: PCI infos.
- basic VGA optimizations
- better code fetch
- do not resize vga if invalid size.
- TLB code protection support for PPC
- disable SMC handling for ARM/SPARC/PPC (not finished)
- see undefined flags for BTx insn
- keyboard output buffer filling timing emulation
- tests for each target CPU
- fix all remaining thread lock issues (must put TBs in a specific invalid
state, find a solution for tb_flush()).
ppc specific:
------------
- TLB invalidate not needed if msr_pr changes
- enable shift optimizations ?
linux-user specific:
-------------------
- remove threading support as it cannot work at this point
- improve IPC syscalls
- more syscalls (in particular all 64 bit ones, IPCs, fix 64 bit
issues, fix 16 bit uid issues)
- use kernel traps for unaligned accesses on ARM ?
lower priority:
--------------
- int15 ah=86: use better timing
- use -msoft-float on ARM


@@ -1 +1 @@
2.2.50
0.13.0

430
a.out.h Normal file

@@ -0,0 +1,430 @@
/* a.out.h
Copyright 1997, 1998, 1999, 2001 Red Hat, Inc.
This file is part of Cygwin.
This software is a copyrighted work licensed under the terms of the
Cygwin license. Please consult the file "CYGWIN_LICENSE" for
details. */
#ifndef _A_OUT_H_
#define _A_OUT_H_
#ifdef __cplusplus
extern "C" {
#endif
#define COFF_IMAGE_WITH_PE
#define COFF_LONG_SECTION_NAMES
/*** coff information for Intel 386/486. */
/********************** FILE HEADER **********************/
struct external_filehdr {
short f_magic; /* magic number */
short f_nscns; /* number of sections */
host_ulong f_timdat; /* time & date stamp */
host_ulong f_symptr; /* file pointer to symtab */
host_ulong f_nsyms; /* number of symtab entries */
short f_opthdr; /* sizeof(optional hdr) */
short f_flags; /* flags */
};
/* Bits for f_flags:
* F_RELFLG relocation info stripped from file
* F_EXEC file is executable (no unresolved external references)
* F_LNNO line numbers stripped from file
* F_LSYMS local symbols stripped from file
* F_AR32WR file has byte ordering of an AR32WR machine (e.g. vax)
*/
#define F_RELFLG (0x0001)
#define F_EXEC (0x0002)
#define F_LNNO (0x0004)
#define F_LSYMS (0x0008)
#define I386MAGIC 0x14c
#define I386PTXMAGIC 0x154
#define I386AIXMAGIC 0x175
/* This is Lynx's all-platform magic number for executables. */
#define LYNXCOFFMAGIC 0415
#define I386BADMAG(x) (((x).f_magic != I386MAGIC) \
&& (x).f_magic != I386AIXMAGIC \
&& (x).f_magic != I386PTXMAGIC \
&& (x).f_magic != LYNXCOFFMAGIC)
#define FILHDR struct external_filehdr
#define FILHSZ 20
/********************** AOUT "OPTIONAL HEADER" **********************/
typedef struct
{
unsigned short magic; /* type of file */
unsigned short vstamp; /* version stamp */
host_ulong tsize; /* text size in bytes, padded to FW bdry*/
host_ulong dsize; /* initialized data " " */
host_ulong bsize; /* uninitialized data " " */
host_ulong entry; /* entry pt. */
host_ulong text_start; /* base of text used for this file */
host_ulong data_start; /* base of data used for this file */
}
AOUTHDR;
#define AOUTSZ 28
#define AOUTHDRSZ 28
#define OMAGIC 0404 /* object files, eg as output */
#define ZMAGIC 0413 /* demand load format, eg normal ld output */
#define STMAGIC 0401 /* target shlib */
#define SHMAGIC 0443 /* host shlib */
/* define some NT default values */
/* #define NT_IMAGE_BASE 0x400000 moved to internal.h */
#define NT_SECTION_ALIGNMENT 0x1000
#define NT_FILE_ALIGNMENT 0x200
#define NT_DEF_RESERVE 0x100000
#define NT_DEF_COMMIT 0x1000
/********************** SECTION HEADER **********************/
struct external_scnhdr {
char s_name[8]; /* section name */
host_ulong s_paddr; /* physical address, offset
of last addr in scn */
host_ulong s_vaddr; /* virtual address */
host_ulong s_size; /* section size */
host_ulong s_scnptr; /* file ptr to raw data for section */
host_ulong s_relptr; /* file ptr to relocation */
host_ulong s_lnnoptr; /* file ptr to line numbers */
unsigned short s_nreloc; /* number of relocation entries */
unsigned short s_nlnno; /* number of line number entries*/
host_ulong s_flags; /* flags */
};
#define SCNHDR struct external_scnhdr
#define SCNHSZ 40
/*
* names of "special" sections
*/
#define _TEXT ".text"
#define _DATA ".data"
#define _BSS ".bss"
#define _COMMENT ".comment"
#define _LIB ".lib"
/********************** LINE NUMBERS **********************/
/* 1 line number entry for every "breakpointable" source line in a section.
* Line numbers are grouped on a per function basis; first entry in a function
* grouping will have l_lnno = 0 and in place of physical address will be the
* symbol table index of the function name.
*/
struct external_lineno {
union {
host_ulong l_symndx; /* function name symbol index, iff l_lnno 0 */
host_ulong l_paddr; /* (physical) address of line number */
} l_addr;
unsigned short l_lnno; /* line number */
};
#define LINENO struct external_lineno
#define LINESZ 6
/********************** SYMBOLS **********************/
#define E_SYMNMLEN 8 /* # characters in a symbol name */
#define E_FILNMLEN 14 /* # characters in a file name */
#define E_DIMNUM 4 /* # array dimensions in auxiliary entry */
struct __attribute__((packed)) external_syment
{
union {
char e_name[E_SYMNMLEN];
struct {
host_ulong e_zeroes;
host_ulong e_offset;
} e;
} e;
host_ulong e_value;
unsigned short e_scnum;
unsigned short e_type;
char e_sclass[1];
char e_numaux[1];
};
#define N_BTMASK (0xf)
#define N_TMASK (0x30)
#define N_BTSHFT (4)
#define N_TSHIFT (2)
union external_auxent {
struct {
host_ulong x_tagndx; /* str, un, or enum tag indx */
union {
struct {
unsigned short x_lnno; /* declaration line number */
unsigned short x_size; /* str/union/array size */
} x_lnsz;
host_ulong x_fsize; /* size of function */
} x_misc;
union {
struct { /* if ISFCN, tag, or .bb */
host_ulong x_lnnoptr;/* ptr to fcn line # */
host_ulong x_endndx; /* entry ndx past block end */
} x_fcn;
struct { /* if ISARY, up to 4 dimen. */
char x_dimen[E_DIMNUM][2];
} x_ary;
} x_fcnary;
unsigned short x_tvndx; /* tv index */
} x_sym;
union {
char x_fname[E_FILNMLEN];
struct {
host_ulong x_zeroes;
host_ulong x_offset;
} x_n;
} x_file;
struct {
host_ulong x_scnlen; /* section length */
unsigned short x_nreloc; /* # relocation entries */
unsigned short x_nlinno; /* # line numbers */
host_ulong x_checksum; /* section COMDAT checksum */
unsigned short x_associated;/* COMDAT associated section index */
char x_comdat[1]; /* COMDAT selection number */
} x_scn;
struct {
host_ulong x_tvfill; /* tv fill value */
unsigned short x_tvlen; /* length of .tv */
char x_tvran[2][2]; /* tv range */
} x_tv; /* info about .tv section (in auxent of symbol .tv)) */
};
#define SYMENT struct external_syment
#define SYMESZ 18
#define AUXENT union external_auxent
#define AUXESZ 18
#define _ETEXT "etext"
/********************** RELOCATION DIRECTIVES **********************/
struct external_reloc {
char r_vaddr[4];
char r_symndx[4];
char r_type[2];
};
#define RELOC struct external_reloc
#define RELSZ 10
/* end of coff/i386.h */
/* PE COFF header information */
#ifndef _PE_H
#define _PE_H
/* NT specific file attributes */
#define IMAGE_FILE_RELOCS_STRIPPED 0x0001
#define IMAGE_FILE_EXECUTABLE_IMAGE 0x0002
#define IMAGE_FILE_LINE_NUMS_STRIPPED 0x0004
#define IMAGE_FILE_LOCAL_SYMS_STRIPPED 0x0008
#define IMAGE_FILE_BYTES_REVERSED_LO 0x0080
#define IMAGE_FILE_32BIT_MACHINE 0x0100
#define IMAGE_FILE_DEBUG_STRIPPED 0x0200
#define IMAGE_FILE_SYSTEM 0x1000
#define IMAGE_FILE_DLL 0x2000
#define IMAGE_FILE_BYTES_REVERSED_HI 0x8000
/* additional flags to be set for section headers to allow the NT loader to
read and write to the section data (to replace the addresses of data in
dlls for one thing); also to execute the section in .text's case */
#define IMAGE_SCN_MEM_DISCARDABLE 0x02000000
#define IMAGE_SCN_MEM_EXECUTE 0x20000000
#define IMAGE_SCN_MEM_READ 0x40000000
#define IMAGE_SCN_MEM_WRITE 0x80000000
/*
* Section characteristics added for ppc-nt
*/
#define IMAGE_SCN_TYPE_NO_PAD 0x00000008 /* Reserved. */
#define IMAGE_SCN_CNT_CODE 0x00000020 /* Section contains code. */
#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040 /* Section contains initialized data. */
#define IMAGE_SCN_CNT_UNINITIALIZED_DATA 0x00000080 /* Section contains uninitialized data. */
#define IMAGE_SCN_LNK_OTHER 0x00000100 /* Reserved. */
#define IMAGE_SCN_LNK_INFO 0x00000200 /* Section contains comments or some other type of information. */
#define IMAGE_SCN_LNK_REMOVE 0x00000800 /* Section contents will not become part of image. */
#define IMAGE_SCN_LNK_COMDAT 0x00001000 /* Section contents comdat. */
#define IMAGE_SCN_MEM_FARDATA 0x00008000
#define IMAGE_SCN_MEM_PURGEABLE 0x00020000
#define IMAGE_SCN_MEM_16BIT 0x00020000
#define IMAGE_SCN_MEM_LOCKED 0x00040000
#define IMAGE_SCN_MEM_PRELOAD 0x00080000
#define IMAGE_SCN_ALIGN_1BYTES 0x00100000
#define IMAGE_SCN_ALIGN_2BYTES 0x00200000
#define IMAGE_SCN_ALIGN_4BYTES 0x00300000
#define IMAGE_SCN_ALIGN_8BYTES 0x00400000
#define IMAGE_SCN_ALIGN_16BYTES 0x00500000 /* Default alignment if no others are specified. */
#define IMAGE_SCN_ALIGN_32BYTES 0x00600000
#define IMAGE_SCN_ALIGN_64BYTES 0x00700000
#define IMAGE_SCN_LNK_NRELOC_OVFL 0x01000000 /* Section contains extended relocations. */
#define IMAGE_SCN_MEM_NOT_CACHED 0x04000000 /* Section is not cachable. */
#define IMAGE_SCN_MEM_NOT_PAGED 0x08000000 /* Section is not pageable. */
#define IMAGE_SCN_MEM_SHARED 0x10000000 /* Section is shareable. */
/* COMDAT selection codes. */
#define IMAGE_COMDAT_SELECT_NODUPLICATES (1) /* Warn if duplicates. */
#define IMAGE_COMDAT_SELECT_ANY (2) /* No warning. */
#define IMAGE_COMDAT_SELECT_SAME_SIZE (3) /* Warn if different size. */
#define IMAGE_COMDAT_SELECT_EXACT_MATCH (4) /* Warn if different. */
#define IMAGE_COMDAT_SELECT_ASSOCIATIVE (5) /* Base on other section. */
/* Magic values that are true for all dos/nt implementations */
#define DOSMAGIC 0x5a4d
#define NT_SIGNATURE 0x00004550
/* NT allows long filenames, we want to accommodate this. This may break
some of the bfd functions */
#undef FILNMLEN
#define FILNMLEN 18 /* # characters in a file name */
#ifdef COFF_IMAGE_WITH_PE
/* The filehdr is only weird in images */
#undef FILHDR
struct external_PE_filehdr
{
/* DOS header fields */
unsigned short e_magic; /* Magic number, 0x5a4d */
unsigned short e_cblp; /* Bytes on last page of file, 0x90 */
unsigned short e_cp; /* Pages in file, 0x3 */
unsigned short e_crlc; /* Relocations, 0x0 */
unsigned short e_cparhdr; /* Size of header in paragraphs, 0x4 */
unsigned short e_minalloc; /* Minimum extra paragraphs needed, 0x0 */
unsigned short e_maxalloc; /* Maximum extra paragraphs needed, 0xFFFF */
unsigned short e_ss; /* Initial (relative) SS value, 0x0 */
unsigned short e_sp; /* Initial SP value, 0xb8 */
unsigned short e_csum; /* Checksum, 0x0 */
unsigned short e_ip; /* Initial IP value, 0x0 */
unsigned short e_cs; /* Initial (relative) CS value, 0x0 */
unsigned short e_lfarlc; /* File address of relocation table, 0x40 */
unsigned short e_ovno; /* Overlay number, 0x0 */
char e_res[4][2]; /* Reserved words, all 0x0 */
unsigned short e_oemid; /* OEM identifier (for e_oeminfo), 0x0 */
unsigned short e_oeminfo; /* OEM information; e_oemid specific, 0x0 */
char e_res2[10][2]; /* Reserved words, all 0x0 */
host_ulong e_lfanew; /* File address of new exe header, 0x80 */
char dos_message[16][4]; /* other stuff, always follow DOS header */
unsigned int nt_signature; /* required NT signature, 0x4550 */
/* From standard header */
unsigned short f_magic; /* magic number */
unsigned short f_nscns; /* number of sections */
host_ulong f_timdat; /* time & date stamp */
host_ulong f_symptr; /* file pointer to symtab */
host_ulong f_nsyms; /* number of symtab entries */
unsigned short f_opthdr; /* sizeof(optional hdr) */
unsigned short f_flags; /* flags */
};
#define FILHDR struct external_PE_filehdr
#undef FILHSZ
#define FILHSZ 152
#endif
typedef struct
{
unsigned short magic; /* type of file */
unsigned short vstamp; /* version stamp */
host_ulong tsize; /* text size in bytes, padded to FW bdry*/
host_ulong dsize; /* initialized data " " */
host_ulong bsize; /* uninitialized data " " */
host_ulong entry; /* entry pt. */
host_ulong text_start; /* base of text used for this file */
host_ulong data_start; /* base of all data used for this file */
/* NT extra fields; see internal.h for descriptions */
host_ulong ImageBase;
host_ulong SectionAlignment;
host_ulong FileAlignment;
unsigned short MajorOperatingSystemVersion;
unsigned short MinorOperatingSystemVersion;
unsigned short MajorImageVersion;
unsigned short MinorImageVersion;
unsigned short MajorSubsystemVersion;
unsigned short MinorSubsystemVersion;
char Reserved1[4];
host_ulong SizeOfImage;
host_ulong SizeOfHeaders;
host_ulong CheckSum;
unsigned short Subsystem;
unsigned short DllCharacteristics;
host_ulong SizeOfStackReserve;
host_ulong SizeOfStackCommit;
host_ulong SizeOfHeapReserve;
host_ulong SizeOfHeapCommit;
host_ulong LoaderFlags;
host_ulong NumberOfRvaAndSizes;
/* IMAGE_DATA_DIRECTORY DataDirectory[IMAGE_NUMBEROF_DIRECTORY_ENTRIES]; */
char DataDirectory[16][2][4]; /* 16 entries, 2 elements/entry, 4 chars */
} PEAOUTHDR;
#undef AOUTSZ
#define AOUTSZ (AOUTHDRSZ + 196)
#undef E_FILNMLEN
#define E_FILNMLEN 18 /* # characters in a file name */
#endif
/* end of coff/pe.h */
#define DT_NON (0) /* no derived type */
#define DT_PTR (1) /* pointer */
#define DT_FCN (2) /* function */
#define DT_ARY (3) /* array */
#define ISPTR(x) (((x) & N_TMASK) == (DT_PTR << N_BTSHFT))
#define ISFCN(x) (((x) & N_TMASK) == (DT_FCN << N_BTSHFT))
#define ISARY(x) (((x) & N_TMASK) == (DT_ARY << N_BTSHFT))
#ifdef __cplusplus
}
#endif
#endif /* _A_OUT_H_ */

157
accel.c

@@ -1,157 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "qemu-common.h"
#include "sysemu/arch_init.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
#include "sysemu/qtest.h"
#include "hw/xen/xen.h"
#include "qom/object.h"
#include "hw/boards.h"
int tcg_tb_size;
static bool tcg_allowed = true;
static int tcg_init(MachineState *ms)
{
tcg_exec_init(tcg_tb_size * 1024 * 1024);
return 0;
}
static const TypeInfo accel_type = {
.name = TYPE_ACCEL,
.parent = TYPE_OBJECT,
.class_size = sizeof(AccelClass),
.instance_size = sizeof(AccelState),
};
/* Lookup AccelClass from opt_name. Returns NULL if not found */
static AccelClass *accel_find(const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(object_class_by_name(class_name));
g_free(class_name);
return ac;
}
static int accel_init_machine(AccelClass *acc, MachineState *ms)
{
ObjectClass *oc = OBJECT_CLASS(acc);
const char *cname = object_class_get_name(oc);
AccelState *accel = ACCEL(object_new(cname));
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(OBJECT(accel));
}
return ret;
}
int configure_accelerator(MachineState *ms)
{
const char *p;
char buf[10];
int ret;
bool accel_initialised = false;
bool init_failed = false;
AccelClass *acc = NULL;
p = qemu_opt_get(qemu_get_machine_opts(), "accel");
if (p == NULL) {
/* Use the default "accelerator", tcg */
p = "tcg";
}
while (!accel_initialised && *p != '\0') {
if (*p == ':') {
p++;
}
p = get_opt_name(buf, sizeof(buf), p, ':');
acc = accel_find(buf);
if (!acc) {
fprintf(stderr, "\"%s\" accelerator not found.\n", buf);
continue;
}
if (acc->available && !acc->available()) {
printf("%s not supported for this target\n",
acc->name);
continue;
}
ret = accel_init_machine(acc, ms);
if (ret < 0) {
init_failed = true;
fprintf(stderr, "failed to initialize %s: %s\n",
acc->name,
strerror(-ret));
} else {
accel_initialised = true;
}
}
if (!accel_initialised) {
if (!init_failed) {
fprintf(stderr, "No accelerator found!\n");
}
exit(1);
}
if (init_failed) {
fprintf(stderr, "Back to %s accelerator.\n", acc->name);
}
return !accel_initialised;
}
static void tcg_accel_class_init(ObjectClass *oc, void *data)
{
AccelClass *ac = ACCEL_CLASS(oc);
ac->name = "tcg";
ac->init_machine = tcg_init;
ac->allowed = &tcg_allowed;
}
#define TYPE_TCG_ACCEL ACCEL_CLASS_NAME("tcg")
static const TypeInfo tcg_accel_type = {
.name = TYPE_TCG_ACCEL,
.parent = TYPE_ACCEL,
.class_init = tcg_accel_class_init,
};
static void register_accel_types(void)
{
type_register_static(&accel_type);
type_register_static(&tcg_accel_type);
}
type_init(register_accel_types);


@@ -24,7 +24,8 @@
#include "qemu-common.h"
#include "qemu/acl.h"
#include "sysemu.h"
#include "acl.h"
#ifdef CONFIG_FNMATCH
#include <fnmatch.h>
@@ -55,8 +56,8 @@ qemu_acl *qemu_acl_init(const char *aclname)
if (acl)
return acl;
acl = g_malloc(sizeof(*acl));
acl->aclname = g_strdup(aclname);
acl = qemu_malloc(sizeof(*acl));
acl->aclname = qemu_strdup(aclname);
/* Deny by default, so there is no window of "open
* access" between QEMU starting, and the user setting
* up ACLs in the monitor */
@@ -65,7 +66,7 @@ qemu_acl *qemu_acl_init(const char *aclname)
acl->nentries = 0;
QTAILQ_INIT(&acl->entries);
acls = g_realloc(acls, sizeof(*acls) * (nacls +1));
acls = qemu_realloc(acls, sizeof(*acls) * (nacls +1));
acls[nacls] = acl;
nacls++;
@@ -95,16 +96,16 @@ int qemu_acl_party_is_allowed(qemu_acl *acl,
void qemu_acl_reset(qemu_acl *acl)
{
qemu_acl_entry *entry, *next_entry;
qemu_acl_entry *entry;
/* Put back to deny by default, so there is no window
* of "open access" while the user re-initializes the
* access control list */
acl->defaultDeny = 1;
QTAILQ_FOREACH_SAFE(entry, &acl->entries, next, next_entry) {
QTAILQ_FOREACH(entry, &acl->entries, next) {
QTAILQ_REMOVE(&acl->entries, entry, next);
g_free(entry->match);
g_free(entry);
free(entry->match);
free(entry);
}
acl->nentries = 0;
}
@@ -116,8 +117,8 @@ int qemu_acl_append(qemu_acl *acl,
{
qemu_acl_entry *entry;
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
entry = qemu_malloc(sizeof(*entry));
entry->match = qemu_strdup(match);
entry->deny = deny;
QTAILQ_INSERT_TAIL(&acl->entries, entry, next);
@@ -132,23 +133,23 @@ int qemu_acl_insert(qemu_acl *acl,
const char *match,
int index)
{
qemu_acl_entry *entry;
qemu_acl_entry *tmp;
int i = 0;
if (index <= 0)
return -1;
if (index > acl->nentries) {
if (index >= acl->nentries)
return qemu_acl_append(acl, deny, match);
}
entry = qemu_malloc(sizeof(*entry));
entry->match = qemu_strdup(match);
entry->deny = deny;
QTAILQ_FOREACH(tmp, &acl->entries, next) {
i++;
if (i == index) {
qemu_acl_entry *entry;
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
entry->deny = deny;
QTAILQ_INSERT_BEFORE(tmp, entry, next);
acl->nentries++;
break;
@@ -168,9 +169,6 @@ int qemu_acl_remove(qemu_acl *acl,
i++;
if (strcmp(entry->match, match) == 0) {
QTAILQ_REMOVE(&acl->entries, entry, next);
acl->nentries--;
g_free(entry->match);
g_free(entry);
return i;
}
}


@@ -25,7 +25,7 @@
#ifndef __QEMU_ACL_H__
#define __QEMU_ACL_H__
#include "qemu/queue.h"
#include "qemu-queue.h"
typedef struct qemu_acl_entry qemu_acl_entry;
typedef struct qemu_acl qemu_acl;

1314
aes.c Normal file

File diff suppressed because it is too large

26
aes.h Normal file

@@ -0,0 +1,26 @@
#ifndef QEMU_AES_H
#define QEMU_AES_H
#define AES_MAXNR 14
#define AES_BLOCK_SIZE 16
struct aes_key_st {
uint32_t rd_key[4 *(AES_MAXNR + 1)];
int rounds;
};
typedef struct aes_key_st AES_KEY;
int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
AES_KEY *key);
int AES_set_decrypt_key(const unsigned char *userKey, const int bits,
AES_KEY *key);
void AES_encrypt(const unsigned char *in, unsigned char *out,
const AES_KEY *key);
void AES_decrypt(const unsigned char *in, unsigned char *out,
const AES_KEY *key);
void AES_cbc_encrypt(const unsigned char *in, unsigned char *out,
const unsigned long length, const AES_KEY *key,
unsigned char *ivec, const int enc);
#endif


@@ -1,254 +0,0 @@
/*
* QEMU aio implementation
*
* Copyright IBM, Corp. 2008
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
struct AioHandler
{
GPollFD pfd;
IOHandler *io_read;
IOHandler *io_write;
int deleted;
int pollfds_idx;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
static AioHandler *find_aio_handler(AioContext *ctx, int fd)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.fd == fd)
if (!node->deleted)
return node;
}
return NULL;
}
void aio_set_fd_handler(AioContext *ctx,
int fd,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
AioHandler *node;
node = find_aio_handler(ctx, fd);
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
g_source_remove_poll(&ctx->source, &node->pfd);
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
}
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->opaque = opaque;
node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
}
aio_notify(ctx);
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
EventNotifierHandler *io_read)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
(IOHandler *)io_read, NULL, notifier);
}
bool aio_prepare(AioContext *ctx)
{
return false;
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int revents;
revents = node->pfd.revents & node->pfd.events;
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
return true;
}
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
return true;
}
}
return false;
}
bool aio_dispatch(AioContext *ctx)
{
AioHandler *node;
bool progress = false;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for aio_poll loops).
*/
if (aio_bh_poll(ctx)) {
progress = true;
}
/*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents;
ctx->walking_handlers++;
revents = node->pfd.revents & node->pfd.events;
node->pfd.revents = 0;
if (!node->deleted &&
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
node->io_read) {
node->io_read(node->opaque);
/* aio_notify() does not count as progress */
if (node->opaque != &ctx->notifier) {
progress = true;
}
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
node->io_write) {
node->io_write(node->opaque);
progress = true;
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
/* Run our timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
bool was_dispatching;
int ret;
bool progress;
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
g_array_set_size(ctx->pollfds, 0);
/* fill pollfds */
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
GPollFD pfd = {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
node->pollfds_idx = ctx->pollfds->len;
g_array_append_val(ctx->pollfds, pfd);
}
}
ctx->walking_handlers--;
/* wait until next event */
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? aio_compute_timeout(ctx) : 0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pollfds_idx != -1) {
GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
node->pollfds_idx);
node->pfd.revents = pfd->revents;
}
}
}
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
if (aio_dispatch(ctx)) {
progress = true;
}
aio_set_dispatching(ctx, was_dispatching);
return progress;
}


@@ -1,353 +0,0 @@
/*
* QEMU aio implementation
*
* Copyright IBM Corp., 2008
* Copyright Red Hat Inc., 2012
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
* Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
struct AioHandler {
EventNotifier *e;
IOHandler *io_read;
IOHandler *io_write;
EventNotifierHandler *io_notify;
GPollFD pfd;
int deleted;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
void aio_set_fd_handler(AioContext *ctx,
int fd,
IOHandler *io_read,
IOHandler *io_write,
void *opaque)
{
/* fd is a SOCKET in our case */
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.fd == fd && !node->deleted) {
break;
}
}
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
HANDLE event;
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->pfd.fd = fd;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
}
node->pfd.events = 0;
if (node->io_read) {
node->pfd.events |= G_IO_IN;
}
if (node->io_write) {
node->pfd.events |= G_IO_OUT;
}
node->e = &ctx->notifier;
/* Update handler with latest information */
node->opaque = opaque;
node->io_read = io_read;
node->io_write = io_write;
event = event_notifier_get_handle(&ctx->notifier);
WSAEventSelect(node->pfd.fd, event,
FD_READ | FD_ACCEPT | FD_CLOSE |
FD_CONNECT | FD_WRITE | FD_OOB);
}
aio_notify(ctx);
}
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
EventNotifierHandler *io_notify)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->e == e && !node->deleted) {
break;
}
}
/* Are we deleting the fd handler? */
if (!io_notify) {
if (node) {
g_source_remove_poll(&ctx->source, &node->pfd);
/* If the lock is held, just mark the node as deleted */
if (ctx->walking_handlers) {
node->deleted = 1;
node->pfd.revents = 0;
} else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
g_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = g_new0(AioHandler, 1);
node->e = e;
node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
node->pfd.events = G_IO_IN;
QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
g_source_add_poll(&ctx->source, &node->pfd);
}
/* Update handler with latest information */
node->io_notify = io_notify;
}
aio_notify(ctx);
}
bool aio_prepare(AioContext *ctx)
{
static struct timeval tv0;
AioHandler *node;
bool have_select_revents = false;
fd_set rfds, wfds;
/* fill fd sets */
FD_ZERO(&rfds);
FD_ZERO(&wfds);
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->io_read) {
FD_SET ((SOCKET)node->pfd.fd, &rfds);
}
if (node->io_write) {
FD_SET ((SOCKET)node->pfd.fd, &wfds);
}
}
if (select(0, &rfds, &wfds, NULL, &tv0) > 0) {
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pfd.revents = 0;
if (FD_ISSET(node->pfd.fd, &rfds)) {
node->pfd.revents |= G_IO_IN;
have_select_revents = true;
}
if (FD_ISSET(node->pfd.fd, &wfds)) {
node->pfd.revents |= G_IO_OUT;
have_select_revents = true;
}
}
}
return have_select_revents;
}
bool aio_pending(AioContext *ctx)
{
AioHandler *node;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pfd.revents && node->io_notify) {
return true;
}
if ((node->pfd.revents & G_IO_IN) && node->io_read) {
return true;
}
if ((node->pfd.revents & G_IO_OUT) && node->io_write) {
return true;
}
}
return false;
}
static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
{
AioHandler *node;
bool progress = false;
/*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
int revents = node->pfd.revents;
ctx->walking_handlers++;
if (!node->deleted &&
(revents || event_notifier_get_handle(node->e) == event) &&
node->io_notify) {
node->pfd.revents = 0;
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
}
if (!node->deleted &&
(node->io_read || node->io_write)) {
node->pfd.revents = 0;
if ((revents & G_IO_IN) && node->io_read) {
node->io_read(node->opaque);
progress = true;
}
if ((revents & G_IO_OUT) && node->io_write) {
node->io_write(node->opaque);
progress = true;
}
/* if the next select() will return an event, we have progressed */
if (event == event_notifier_get_handle(&ctx->notifier)) {
WSANETWORKEVENTS ev;
WSAEnumNetworkEvents(node->pfd.fd, event, &ev);
if (ev.lNetworkEvents) {
progress = true;
}
}
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
return progress;
}
bool aio_dispatch(AioContext *ctx)
{
bool progress;
progress = aio_bh_poll(ctx);
progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool was_dispatching, progress, have_select_revents, first;
int count;
int timeout;
have_select_revents = aio_prepare(ctx);
if (have_select_revents) {
blocking = false;
}
was_dispatching = ctx->dispatching;
progress = false;
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
* be re-evaluated before the next blocking poll(). This is
* already true when aio_poll is called with blocking == false;
* if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
* have to clear it now.
*/
aio_set_dispatching(ctx, !blocking);
ctx->walking_handlers++;
/* fill fd sets */
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (!node->deleted && node->io_notify) {
events[count++] = event_notifier_get_handle(node->e);
}
}
ctx->walking_handlers--;
first = true;
/* wait until next event */
while (count > 0) {
HANDLE event;
int ret;
timeout = blocking
? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
aio_set_dispatching(ctx, true);
if (first && aio_bh_poll(ctx)) {
progress = true;
}
first = false;
/* if we have any signaled events, dispatch event */
event = NULL;
if ((DWORD) (ret - WAIT_OBJECT_0) < count) {
event = events[ret - WAIT_OBJECT_0];
events[ret - WAIT_OBJECT_0] = events[--count];
} else if (!have_select_revents) {
break;
}
have_select_revents = false;
blocking = false;
progress |= aio_dispatch_handlers(ctx, event);
}
progress |= timerlistgroup_run_timers(&ctx->tlg);
aio_set_dispatching(ctx, was_dispatching);
return progress;
}

230
aio.c Normal file

@@ -0,0 +1,230 @@
/*
* QEMU aio implementation
*
* Copyright IBM, Corp. 2008
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
*/
#include "qemu-common.h"
#include "block.h"
#include "qemu-queue.h"
#include "qemu_socket.h"
typedef struct AioHandler AioHandler;
/* The list of registered AIO handlers */
static QLIST_HEAD(, AioHandler) aio_handlers;
/* This is a simple lock used to protect the aio_handlers list. Specifically,
* it's used to ensure that no callbacks are removed while we're walking and
* dispatching callbacks.
*/
static int walking_handlers;
struct AioHandler
{
int fd;
IOHandler *io_read;
IOHandler *io_write;
AioFlushHandler *io_flush;
AioProcessQueue *io_process_queue;
int deleted;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
static AioHandler *find_aio_handler(int fd)
{
AioHandler *node;
QLIST_FOREACH(node, &aio_handlers, node) {
if (node->fd == fd)
if (!node->deleted)
return node;
}
return NULL;
}
int qemu_aio_set_fd_handler(int fd,
IOHandler *io_read,
IOHandler *io_write,
AioFlushHandler *io_flush,
AioProcessQueue *io_process_queue,
void *opaque)
{
AioHandler *node;
node = find_aio_handler(fd);
/* Are we deleting the fd handler? */
if (!io_read && !io_write) {
if (node) {
/* If the lock is held, just mark the node as deleted */
if (walking_handlers)
node->deleted = 1;
else {
/* Otherwise, delete it for real. We can't just mark it as
* deleted because deleted nodes are only cleaned up after
* releasing the walking_handlers lock.
*/
QLIST_REMOVE(node, node);
qemu_free(node);
}
}
} else {
if (node == NULL) {
/* Alloc and insert if it's not already there */
node = qemu_mallocz(sizeof(AioHandler));
node->fd = fd;
QLIST_INSERT_HEAD(&aio_handlers, node, node);
}
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->io_flush = io_flush;
node->io_process_queue = io_process_queue;
node->opaque = opaque;
}
qemu_set_fd_handler2(fd, NULL, io_read, io_write, opaque);
return 0;
}
void qemu_aio_flush(void)
{
AioHandler *node;
int ret;
do {
ret = 0;
/*
* If there are pending emulated aio start them now so flush
* will be able to return 1.
*/
qemu_aio_wait();
QLIST_FOREACH(node, &aio_handlers, node) {
if (node->io_flush) {
ret |= node->io_flush(node->opaque);
}
}
} while (qemu_bh_poll() || ret > 0);
}
int qemu_aio_process_queue(void)
{
AioHandler *node;
int ret = 0;
walking_handlers = 1;
QLIST_FOREACH(node, &aio_handlers, node) {
if (node->io_process_queue) {
if (node->io_process_queue(node->opaque)) {
ret = 1;
}
}
}
walking_handlers = 0;
return ret;
}
void qemu_aio_wait(void)
{
int ret;
if (qemu_bh_poll())
return;
/*
* If there are callbacks left that have been queued, we need to call them.
* Return afterwards to avoid waiting needlessly in select().
*/
if (qemu_aio_process_queue())
return;
do {
AioHandler *node;
fd_set rdfds, wrfds;
int max_fd = -1;
walking_handlers = 1;
FD_ZERO(&rdfds);
FD_ZERO(&wrfds);
/* fill fd sets */
QLIST_FOREACH(node, &aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (node->io_flush && node->io_flush(node->opaque) == 0)
continue;
if (!node->deleted && node->io_read) {
FD_SET(node->fd, &rdfds);
max_fd = MAX(max_fd, node->fd + 1);
}
if (!node->deleted && node->io_write) {
FD_SET(node->fd, &wrfds);
max_fd = MAX(max_fd, node->fd + 1);
}
}
walking_handlers = 0;
/* No AIO operations? Get us out of here */
if (max_fd == -1)
break;
/* wait until next event */
ret = select(max_fd, &rdfds, &wrfds, NULL, NULL);
if (ret == -1 && errno == EINTR)
continue;
/* if we have any readable fds, dispatch event */
if (ret > 0) {
walking_handlers = 1;
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&aio_handlers);
while (node) {
AioHandler *tmp;
if (!node->deleted &&
FD_ISSET(node->fd, &rdfds) &&
node->io_read) {
node->io_read(node->opaque);
}
if (!node->deleted &&
FD_ISSET(node->fd, &wrfds) &&
node->io_write) {
node->io_write(node->opaque);
}
tmp = node;
node = QLIST_NEXT(node, node);
if (tmp->deleted) {
QLIST_REMOVE(tmp, node);
qemu_free(tmp);
}
}
walking_handlers = 0;
}
} while (ret == 0);
}

1917
alpha-dis.c Normal file

File diff suppressed because it is too large

127
alpha.ld Normal file

@@ -0,0 +1,127 @@
OUTPUT_FORMAT("elf64-alpha", "elf64-alpha",
"elf64-alpha")
OUTPUT_ARCH(alpha)
ENTRY(__start)
SECTIONS
{
/* Read-only sections, merged into text segment: */
. = 0x60000000 + SIZEOF_HEADERS;
.interp : { *(.interp) }
.hash : { *(.hash) }
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
.rel.text :
{ *(.rel.text) *(.rel.gnu.linkonce.t*) }
.rela.text :
{ *(.rela.text) *(.rela.gnu.linkonce.t*) }
.rel.data :
{ *(.rel.data) *(.rel.gnu.linkonce.d*) }
.rela.data :
{ *(.rela.data) *(.rela.gnu.linkonce.d*) }
.rel.rodata :
{ *(.rel.rodata) *(.rel.gnu.linkonce.r*) }
.rela.rodata :
{ *(.rela.rodata) *(.rela.gnu.linkonce.r*) }
.rel.got : { *(.rel.got) }
.rela.got : { *(.rela.got) }
.rel.ctors : { *(.rel.ctors) }
.rela.ctors : { *(.rela.ctors) }
.rel.dtors : { *(.rel.dtors) }
.rela.dtors : { *(.rela.dtors) }
.rel.init : { *(.rel.init) }
.rela.init : { *(.rela.init) }
.rel.fini : { *(.rel.fini) }
.rela.fini : { *(.rela.fini) }
.rel.bss : { *(.rel.bss) }
.rela.bss : { *(.rela.bss) }
.rel.plt : { *(.rel.plt) }
.rela.plt : { *(.rela.plt) }
.init : { *(.init) } =0x47ff041f
.text :
{
*(.text)
/* .gnu.warning sections are handled specially by elf32.em. */
*(.gnu.warning)
*(.gnu.linkonce.t*)
} =0x47ff041f
_etext = .;
PROVIDE (etext = .);
.fini : { *(.fini) } =0x47ff041f
.rodata : { *(.rodata) *(.gnu.linkonce.r*) }
.rodata1 : { *(.rodata1) }
.reginfo : { *(.reginfo) }
/* Adjust the address for the data segment. We want to adjust up to
the same address within the page on the next page up. */
. = ALIGN(0x100000) + (. & (0x100000 - 1));
.data :
{
*(.data)
*(.gnu.linkonce.d*)
CONSTRUCTORS
}
.data1 : { *(.data1) }
.ctors :
{
*(.ctors)
}
.dtors :
{
*(.dtors)
}
.plt : { *(.plt) }
.got : { *(.got.plt) *(.got) }
.dynamic : { *(.dynamic) }
/* We want the small data sections together, so single-instruction offsets
can access them all, and initialized data all before uninitialized, so
we can shorten the on-disk segment size. */
.sdata : { *(.sdata) }
_edata = .;
PROVIDE (edata = .);
__bss_start = .;
.sbss : { *(.sbss) *(.scommon) }
.bss :
{
*(.dynbss)
*(.bss)
*(COMMON)
}
_end = . ;
PROVIDE (end = .);
/* Stabs debugging sections. */
.stab 0 : { *(.stab) }
.stabstr 0 : { *(.stabstr) }
.stab.excl 0 : { *(.stab.excl) }
.stab.exclstr 0 : { *(.stab.exclstr) }
.stab.index 0 : { *(.stab.index) }
.stab.indexstr 0 : { *(.stab.indexstr) }
.comment 0 : { *(.comment) }
/* DWARF debug sections.
Symbols in the DWARF debugging sections are relative to the beginning
of the section so we begin them at 0. */
/* DWARF 1 */
.debug 0 : { *(.debug) }
.line 0 : { *(.line) }
/* GNU DWARF 1 extensions */
.debug_srcinfo 0 : { *(.debug_srcinfo) }
.debug_sfnames 0 : { *(.debug_sfnames) }
/* DWARF 1.1 and DWARF 2 */
.debug_aranges 0 : { *(.debug_aranges) }
.debug_pubnames 0 : { *(.debug_pubnames) }
/* DWARF 2 */
.debug_info 0 : { *(.debug_info) }
.debug_abbrev 0 : { *(.debug_abbrev) }
.debug_line 0 : { *(.debug_line) }
.debug_frame 0 : { *(.debug_frame) }
.debug_str 0 : { *(.debug_str) }
.debug_loc 0 : { *(.debug_loc) }
.debug_macinfo 0 : { *(.debug_macinfo) }
/* SGI/MIPS DWARF 2 extensions */
.debug_weaknames 0 : { *(.debug_weaknames) }
.debug_funcnames 0 : { *(.debug_funcnames) }
.debug_typenames 0 : { *(.debug_typenames) }
.debug_varnames 0 : { *(.debug_varnames) }
/* These must appear regardless of . */
}

File diff suppressed because it is too large.

arch_init.h (new file, 33 lines)

@@ -0,0 +1,33 @@
#ifndef QEMU_ARCH_INIT_H
#define QEMU_ARCH_INIT_H
extern const char arch_config_name[];
enum {
QEMU_ARCH_ALL = -1,
QEMU_ARCH_ALPHA = 1,
QEMU_ARCH_ARM = 2,
QEMU_ARCH_CRIS = 4,
QEMU_ARCH_I386 = 8,
QEMU_ARCH_M68K = 16,
QEMU_ARCH_MICROBLAZE = 32,
QEMU_ARCH_MIPS = 64,
QEMU_ARCH_PPC = 128,
QEMU_ARCH_S390X = 256,
QEMU_ARCH_SH4 = 512,
QEMU_ARCH_SPARC = 1024,
};
extern const uint32_t arch_type;
void select_soundhw(const char *optarg);
int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque);
int ram_load(QEMUFile *f, void *opaque, int version_id);
void do_acpitable_option(const char *optarg);
void do_smbios_option(const char *optarg);
void cpudef_init(void);
int audio_available(void);
int kvm_available(void);
int xen_available(void);
#endif
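
A short, self-contained illustration of how the power-of-two QEMU_ARCH_* values above are meant to be combined and tested as a bit mask. The constants are repeated so the example compiles on its own, and arch_supported() is an invented helper name, not an API from this header.

#include <stdint.h>
#include <stdio.h>

enum {
    QEMU_ARCH_ALL  = -1,   /* all bits set: matches every architecture */
    QEMU_ARCH_ARM  = 2,
    QEMU_ARCH_I386 = 8,
};

static int arch_supported(uint32_t build_arch, int wanted_mask)
{
    /* QEMU_ARCH_ALL (-1) has every bit set, so it matches any build */
    return (build_arch & (uint32_t)wanted_mask) != 0;
}

int main(void)
{
    uint32_t arch_type = QEMU_ARCH_I386;      /* e.g. an x86 build */

    printf("%d\n", arch_supported(arch_type, QEMU_ARCH_ALL));                  /* 1 */
    printf("%d\n", arch_supported(arch_type, QEMU_ARCH_ARM));                  /* 0 */
    printf("%d\n", arch_supported(arch_type, QEMU_ARCH_ARM | QEMU_ARCH_I386)); /* 1 */
    return 0;
}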

arm-dis.c (4112 lines, normal file): diff suppressed because it is too large.

arm-semi.c (new file, 469 lines)

@@ -0,0 +1,469 @@
/*
* Arm "Angel" semihosting syscalls
*
* Copyright (c) 2005, 2007 CodeSourcery.
* Written by Paul Brook.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include "cpu.h"
#ifdef CONFIG_USER_ONLY
#include "qemu.h"
#define ARM_ANGEL_HEAP_SIZE (128 * 1024 * 1024)
#else
#include "qemu-common.h"
#include "sysemu.h"
#include "gdbstub.h"
#endif
#define SYS_OPEN 0x01
#define SYS_CLOSE 0x02
#define SYS_WRITEC 0x03
#define SYS_WRITE0 0x04
#define SYS_WRITE 0x05
#define SYS_READ 0x06
#define SYS_READC 0x07
#define SYS_ISTTY 0x09
#define SYS_SEEK 0x0a
#define SYS_FLEN 0x0c
#define SYS_TMPNAM 0x0d
#define SYS_REMOVE 0x0e
#define SYS_RENAME 0x0f
#define SYS_CLOCK 0x10
#define SYS_TIME 0x11
#define SYS_SYSTEM 0x12
#define SYS_ERRNO 0x13
#define SYS_GET_CMDLINE 0x15
#define SYS_HEAPINFO 0x16
#define SYS_EXIT 0x18
#ifndef O_BINARY
#define O_BINARY 0
#endif
#define GDB_O_RDONLY 0x000
#define GDB_O_WRONLY 0x001
#define GDB_O_RDWR 0x002
#define GDB_O_APPEND 0x008
#define GDB_O_CREAT 0x200
#define GDB_O_TRUNC 0x400
#define GDB_O_BINARY 0
static int gdb_open_modeflags[12] = {
GDB_O_RDONLY,
GDB_O_RDONLY | GDB_O_BINARY,
GDB_O_RDWR,
GDB_O_RDWR | GDB_O_BINARY,
GDB_O_WRONLY | GDB_O_CREAT | GDB_O_TRUNC,
GDB_O_WRONLY | GDB_O_CREAT | GDB_O_TRUNC | GDB_O_BINARY,
GDB_O_RDWR | GDB_O_CREAT | GDB_O_TRUNC,
GDB_O_RDWR | GDB_O_CREAT | GDB_O_TRUNC | GDB_O_BINARY,
GDB_O_WRONLY | GDB_O_CREAT | GDB_O_APPEND,
GDB_O_WRONLY | GDB_O_CREAT | GDB_O_APPEND | GDB_O_BINARY,
GDB_O_RDWR | GDB_O_CREAT | GDB_O_APPEND,
GDB_O_RDWR | GDB_O_CREAT | GDB_O_APPEND | GDB_O_BINARY
};
static int open_modeflags[12] = {
O_RDONLY,
O_RDONLY | O_BINARY,
O_RDWR,
O_RDWR | O_BINARY,
O_WRONLY | O_CREAT | O_TRUNC,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
O_RDWR | O_CREAT | O_TRUNC,
O_RDWR | O_CREAT | O_TRUNC | O_BINARY,
O_WRONLY | O_CREAT | O_APPEND,
O_WRONLY | O_CREAT | O_APPEND | O_BINARY,
O_RDWR | O_CREAT | O_APPEND,
O_RDWR | O_CREAT | O_APPEND | O_BINARY
};
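
As a point of reference, the twelve entries in the two tables above appear to follow the ISO C fopen() mode strings used by the ARM semihosting SYS_OPEN call; the mapping below is an assumption added for illustration, not a table from the QEMU tree.

static const char *const open_mode_names[12] = {
    "r", "rb", "r+", "r+b",     /* 0..3  read            */
    "w", "wb", "w+", "w+b",     /* 4..7  write, truncate */
    "a", "ab", "a+", "a+b",     /* 8..11 append          */
};
/* e.g. mode 7 ("w+b") selects O_RDWR | O_CREAT | O_TRUNC | O_BINARY above */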
#ifdef CONFIG_USER_ONLY
static inline uint32_t set_swi_errno(TaskState *ts, uint32_t code)
{
if (code == (uint32_t)-1)
ts->swi_errno = errno;
return code;
}
#else
static inline uint32_t set_swi_errno(CPUState *env, uint32_t code)
{
return code;
}
#include "softmmu-semi.h"
#endif
static target_ulong arm_semi_syscall_len;
#if !defined(CONFIG_USER_ONLY)
static target_ulong syscall_err;
#endif
static void arm_semi_cb(CPUState *env, target_ulong ret, target_ulong err)
{
#ifdef CONFIG_USER_ONLY
TaskState *ts = env->opaque;
#endif
if (ret == (target_ulong)-1) {
#ifdef CONFIG_USER_ONLY
ts->swi_errno = err;
#else
syscall_err = err;
#endif
env->regs[0] = ret;
} else {
/* Fixup syscalls that use nonstandard return conventions. */
switch (env->regs[0]) {
case SYS_WRITE:
case SYS_READ:
env->regs[0] = arm_semi_syscall_len - ret;
break;
case SYS_SEEK:
env->regs[0] = 0;
break;
default:
env->regs[0] = ret;
break;
}
}
}
static void arm_semi_flen_cb(CPUState *env, target_ulong ret, target_ulong err)
{
/* The size is always stored in big-endian order, extract
the value. We assume the size always fits in 32 bits. */
uint32_t size;
cpu_memory_rw_debug(env, env->regs[13]-64+32, (uint8_t *)&size, 4, 0);
env->regs[0] = be32_to_cpu(size);
#ifdef CONFIG_USER_ONLY
((TaskState *)env->opaque)->swi_errno = err;
#else
syscall_err = err;
#endif
}
#define ARG(n) \
({ \
target_ulong __arg; \
/* FIXME - handle get_user() failure */ \
get_user_ual(__arg, args + (n) * 4); \
__arg; \
})
#define SET_ARG(n, val) put_user_ual(val, args + (n) * 4)
uint32_t do_arm_semihosting(CPUState *env)
{
target_ulong args;
char * s;
int nr;
uint32_t ret;
uint32_t len;
#ifdef CONFIG_USER_ONLY
TaskState *ts = env->opaque;
#else
CPUState *ts = env;
#endif
nr = env->regs[0];
args = env->regs[1];
switch (nr) {
case SYS_OPEN:
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
if (ARG(1) >= 12)
return (uint32_t)-1;
if (strcmp(s, ":tt") == 0) {
if (ARG(1) < 4)
return STDIN_FILENO;
else
return STDOUT_FILENO;
}
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "open,%s,%x,1a4", ARG(0),
(int)ARG(2)+1, gdb_open_modeflags[ARG(1)]);
return env->regs[0];
} else {
ret = set_swi_errno(ts, open(s, open_modeflags[ARG(1)], 0644));
}
unlock_user(s, ARG(0), 0);
return ret;
case SYS_CLOSE:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "close,%x", ARG(0));
return env->regs[0];
} else {
return set_swi_errno(ts, close(ARG(0)));
}
case SYS_WRITEC:
{
char c;
if (get_user_u8(c, args))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
/* Write to debug console. stderr is near enough. */
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "write,2,%x,1", args);
return env->regs[0];
} else {
return write(STDERR_FILENO, &c, 1);
}
}
case SYS_WRITE0:
if (!(s = lock_user_string(args)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
len = strlen(s);
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "write,2,%x,%x\n", args, len);
ret = env->regs[0];
} else {
ret = write(STDERR_FILENO, s, len);
}
unlock_user(s, args, 0);
return ret;
case SYS_WRITE:
len = ARG(2);
if (use_gdb_syscalls()) {
arm_semi_syscall_len = len;
gdb_do_syscall(arm_semi_cb, "write,%x,%x,%x", ARG(0), ARG(1), len);
return env->regs[0];
} else {
if (!(s = lock_user(VERIFY_READ, ARG(1), len, 1)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
ret = set_swi_errno(ts, write(ARG(0), s, len));
unlock_user(s, ARG(1), 0);
if (ret == (uint32_t)-1)
return -1;
return len - ret;
}
case SYS_READ:
len = ARG(2);
if (use_gdb_syscalls()) {
arm_semi_syscall_len = len;
gdb_do_syscall(arm_semi_cb, "read,%x,%x,%x", ARG(0), ARG(1), len);
return env->regs[0];
} else {
if (!(s = lock_user(VERIFY_WRITE, ARG(1), len, 0)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
do
ret = set_swi_errno(ts, read(ARG(0), s, len));
while (ret == -1 && errno == EINTR);
unlock_user(s, ARG(1), len);
if (ret == (uint32_t)-1)
return -1;
return len - ret;
}
case SYS_READC:
/* XXX: Read from debug console. Not implemented. */
return 0;
case SYS_ISTTY:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "isatty,%x", ARG(0));
return env->regs[0];
} else {
return isatty(ARG(0));
}
case SYS_SEEK:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "lseek,%x,%x,0", ARG(0), ARG(1));
return env->regs[0];
} else {
ret = set_swi_errno(ts, lseek(ARG(0), ARG(1), SEEK_SET));
if (ret == (uint32_t)-1)
return -1;
return 0;
}
case SYS_FLEN:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_flen_cb, "fstat,%x,%x",
ARG(0), env->regs[13]-64);
return env->regs[0];
} else {
struct stat buf;
ret = set_swi_errno(ts, fstat(ARG(0), &buf));
if (ret == (uint32_t)-1)
return -1;
return buf.st_size;
}
case SYS_TMPNAM:
/* XXX: Not implemented. */
return -1;
case SYS_REMOVE:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "unlink,%s", ARG(0), (int)ARG(1)+1);
ret = env->regs[0];
} else {
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
ret = set_swi_errno(ts, remove(s));
unlock_user(s, ARG(0), 0);
}
return ret;
case SYS_RENAME:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "rename,%s,%s",
ARG(0), (int)ARG(1)+1, ARG(2), (int)ARG(3)+1);
return env->regs[0];
} else {
char *s2;
s = lock_user_string(ARG(0));
s2 = lock_user_string(ARG(2));
if (!s || !s2)
/* FIXME - should this error code be -TARGET_EFAULT ? */
ret = (uint32_t)-1;
else
ret = set_swi_errno(ts, rename(s, s2));
if (s2)
unlock_user(s2, ARG(2), 0);
if (s)
unlock_user(s, ARG(0), 0);
return ret;
}
case SYS_CLOCK:
return clock() / (CLOCKS_PER_SEC / 100);
case SYS_TIME:
return set_swi_errno(ts, time(NULL));
case SYS_SYSTEM:
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "system,%s", ARG(0), (int)ARG(1)+1);
return env->regs[0];
} else {
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
ret = set_swi_errno(ts, system(s));
unlock_user(s, ARG(0), 0);
return ret;
}
case SYS_ERRNO:
#ifdef CONFIG_USER_ONLY
return ts->swi_errno;
#else
return syscall_err;
#endif
case SYS_GET_CMDLINE:
#ifdef CONFIG_USER_ONLY
/* Build a commandline from the original argv. */
{
char **arg = ts->info->host_argv;
int len = ARG(1);
/* lock the buffer on the ARM side */
char *cmdline_buffer = (char*)lock_user(VERIFY_WRITE, ARG(0), len, 0);
if (!cmdline_buffer)
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
s = cmdline_buffer;
while (*arg && len > 2) {
int n = strlen(*arg);
if (s != cmdline_buffer) {
*(s++) = ' ';
len--;
}
if (n >= len)
n = len - 1;
memcpy(s, *arg, n);
s += n;
len -= n;
arg++;
}
/* Null terminate the string. */
*s = 0;
len = s - cmdline_buffer;
/* Unlock the buffer on the ARM side. */
unlock_user(cmdline_buffer, ARG(0), len);
/* Adjust the commandline length argument. */
SET_ARG(1, len);
/* Return success if commandline fit into buffer. */
return *arg ? -1 : 0;
}
#else
return -1;
#endif
case SYS_HEAPINFO:
{
uint32_t *ptr;
uint32_t limit;
#ifdef CONFIG_USER_ONLY
/* Some C libraries assume the heap immediately follows .bss, so
allocate it using sbrk. */
if (!ts->heap_limit) {
long ret;
ts->heap_base = do_brk(0);
limit = ts->heap_base + ARM_ANGEL_HEAP_SIZE;
/* Try a big heap, and reduce the size if that fails. */
for (;;) {
ret = do_brk(limit);
if (ret != -1)
break;
limit = (ts->heap_base >> 1) + (limit >> 1);
}
ts->heap_limit = limit;
}
if (!(ptr = lock_user(VERIFY_WRITE, ARG(0), 16, 0)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
ptr[0] = tswap32(ts->heap_base);
ptr[1] = tswap32(ts->heap_limit);
ptr[2] = tswap32(ts->stack_base);
ptr[3] = tswap32(0); /* Stack limit. */
unlock_user(ptr, ARG(0), 16);
#else
limit = ram_size;
if (!(ptr = lock_user(VERIFY_WRITE, ARG(0), 16, 0)))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
/* TODO: Make this use the limit of the loaded application. */
ptr[0] = tswap32(limit / 2);
ptr[1] = tswap32(limit);
ptr[2] = tswap32(limit); /* Stack base */
ptr[3] = tswap32(0); /* Stack limit. */
unlock_user(ptr, ARG(0), 16);
#endif
return 0;
}
case SYS_EXIT:
gdb_exit(env, 0);
exit(0);
default:
fprintf(stderr, "qemu: Unsupported SemiHosting SWI 0x%02x\n", nr);
cpu_dump_state(env, stderr, fprintf, 0);
abort();
}
}
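
For context, here is a hypothetical guest-side helper showing the calling convention the handler above expects: the operation number travels in r0, the argument (or argument-block pointer) in r1, and the result comes back in r0. The SVC immediate 0x123456 is the conventional ARM-state Angel trap number; this is a sketch under those assumptions, not code from the tree, and it only builds with an ARM-targeted GCC.

static inline int semihost_call(int nr, void *arg)
{
    register int   r0 __asm__("r0") = nr;    /* operation number (SYS_*)   */
    register void *r1 __asm__("r1") = arg;   /* argument or argument block */

    __asm__ volatile("svc #0x123456"         /* "swi" on older assemblers  */
                     : "+r"(r0)
                     : "r"(r1)
                     : "memory");
    return r0;                               /* result, as set by the host */
}

/* SYS_WRITE0 (0x04) takes the NUL-terminated string pointer directly in r1 */
static inline void semihost_write0(const char *s)
{
    semihost_call(0x04, (void *)s);
}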

arm.ld (new file, 153 lines)

@@ -0,0 +1,153 @@
OUTPUT_FORMAT("elf32-littlearm", "elf32-littlearm",
"elf32-littlearm")
OUTPUT_ARCH(arm)
ENTRY(_start)
SECTIONS
{
/* Read-only sections, merged into text segment: */
. = 0x60000000 + SIZEOF_HEADERS;
.interp : { *(.interp) }
.hash : { *(.hash) }
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
.rel.text :
{ *(.rel.text) *(.rel.gnu.linkonce.t*) }
.rela.text :
{ *(.rela.text) *(.rela.gnu.linkonce.t*) }
.rel.data :
{ *(.rel.data) *(.rel.gnu.linkonce.d*) }
.rela.data :
{ *(.rela.data) *(.rela.gnu.linkonce.d*) }
.rel.rodata :
{ *(.rel.rodata) *(.rel.gnu.linkonce.r*) }
.rela.rodata :
{ *(.rela.rodata) *(.rela.gnu.linkonce.r*) }
.rel.got : { *(.rel.got) }
.rela.got : { *(.rela.got) }
.rel.ctors : { *(.rel.ctors) }
.rela.ctors : { *(.rela.ctors) }
.rel.dtors : { *(.rel.dtors) }
.rela.dtors : { *(.rela.dtors) }
.rel.init : { *(.rel.init) }
.rela.init : { *(.rela.init) }
.rel.fini : { *(.rel.fini) }
.rela.fini : { *(.rela.fini) }
.rel.bss : { *(.rel.bss) }
.rela.bss : { *(.rela.bss) }
.rel.plt : { *(.rel.plt) }
.rela.plt : { *(.rela.plt) }
.init : { *(.init) } =0x47ff041f
.text :
{
*(.text)
/* .gnu.warning sections are handled specially by elf32.em. */
*(.gnu.warning)
*(.gnu.linkonce.t*)
} =0x47ff041f
_etext = .;
PROVIDE (etext = .);
.fini : { *(.fini) } =0x47ff041f
.rodata : { *(.rodata) *(.gnu.linkonce.r*) }
.rodata1 : { *(.rodata1) }
.ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) }
__exidx_start = .;
.ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) }
__exidx_end = .;
.reginfo : { *(.reginfo) }
/* Adjust the address for the data segment. We want to adjust up to
the same address within the page on the next page up. */
. = ALIGN(0x100000) + (. & (0x100000 - 1));
.data :
{
*(.gen_code)
*(.data)
*(.gnu.linkonce.d*)
CONSTRUCTORS
}
.tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) }
.data1 : { *(.data1) }
.preinit_array :
{
PROVIDE_HIDDEN (__preinit_array_start = .);
KEEP (*(.preinit_array))
PROVIDE_HIDDEN (__preinit_array_end = .);
}
.init_array :
{
PROVIDE_HIDDEN (__init_array_start = .);
KEEP (*(SORT(.init_array.*)))
KEEP (*(.init_array))
PROVIDE_HIDDEN (__init_array_end = .);
}
.fini_array :
{
PROVIDE_HIDDEN (__fini_array_start = .);
KEEP (*(.fini_array))
KEEP (*(SORT(.fini_array.*)))
PROVIDE_HIDDEN (__fini_array_end = .);
}
.ctors :
{
*(.ctors)
}
.dtors :
{
*(.dtors)
}
.plt : { *(.plt) }
.got : { *(.got.plt) *(.got) }
.dynamic : { *(.dynamic) }
/* We want the small data sections together, so single-instruction offsets
can access them all, and initialized data all before uninitialized, so
we can shorten the on-disk segment size. */
.sdata : { *(.sdata) }
_edata = .;
PROVIDE (edata = .);
__bss_start = .;
.sbss : { *(.sbss) *(.scommon) }
.bss :
{
*(.dynbss)
*(.bss)
*(COMMON)
}
_end = . ;
PROVIDE (end = .);
/* Stabs debugging sections. */
.stab 0 : { *(.stab) }
.stabstr 0 : { *(.stabstr) }
.stab.excl 0 : { *(.stab.excl) }
.stab.exclstr 0 : { *(.stab.exclstr) }
.stab.index 0 : { *(.stab.index) }
.stab.indexstr 0 : { *(.stab.indexstr) }
.comment 0 : { *(.comment) }
/* DWARF debug sections.
Symbols in the DWARF debugging sections are relative to the beginning
of the section so we begin them at 0. */
/* DWARF 1 */
.debug 0 : { *(.debug) }
.line 0 : { *(.line) }
/* GNU DWARF 1 extensions */
.debug_srcinfo 0 : { *(.debug_srcinfo) }
.debug_sfnames 0 : { *(.debug_sfnames) }
/* DWARF 1.1 and DWARF 2 */
.debug_aranges 0 : { *(.debug_aranges) }
.debug_pubnames 0 : { *(.debug_pubnames) }
/* DWARF 2 */
.debug_info 0 : { *(.debug_info) }
.debug_abbrev 0 : { *(.debug_abbrev) }
.debug_line 0 : { *(.debug_line) }
.debug_frame 0 : { *(.debug_frame) }
.debug_str 0 : { *(.debug_str) }
.debug_loc 0 : { *(.debug_loc) }
.debug_macinfo 0 : { *(.debug_macinfo) }
/* SGI/MIPS DWARF 2 extensions */
.debug_weaknames 0 : { *(.debug_weaknames) }
.debug_funcnames 0 : { *(.debug_funcnames) }
.debug_typenames 0 : { *(.debug_typenames) }
.debug_varnames 0 : { *(.debug_varnames) }
/* These must appear regardless of . */
}

async.c (342 lines)

@@ -23,61 +23,127 @@
*/
#include "qemu-common.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "qemu/main-loop.h"
#include "qemu/atomic.h"
#include "qemu-aio.h"
/*
* An AsyncContext protects the callbacks of AIO requests and Bottom Halves
* against interfering with each other. A typical example is qcow2 that accepts
* asynchronous requests, but relies for manipulation of its metadata on
* synchronous bdrv_read/write that doesn't trigger any callbacks.
*
* However, these functions are often emulated using AIO which means that AIO
* callbacks must be run - but at the same time we must not run callbacks of
* other requests as they might start to modify metadata and corrupt the
* internal state of the caller of bdrv_read/write.
*
* To achieve the desired semantics we switch into a new AsyncContext.
* Callbacks must only be run if they belong to the current AsyncContext.
* Otherwise they need to be queued until their own context is active again.
* This is how you can make qemu_aio_wait() wait only for your own callbacks.
*
* The AsyncContexts form a stack. When you leave an AsyncContext, you always
* return to the old ("parent") context.
*/
struct AsyncContext {
/* Consecutive number of the AsyncContext (position in the stack) */
int id;
/* Anchor of the list of Bottom Halves belonging to the context */
struct QEMUBH *first_bh;
/* Link to parent context */
struct AsyncContext *parent;
};
/* The currently active AsyncContext */
static struct AsyncContext *async_context = &(struct AsyncContext) { 0 };
/*
* Enter a new AsyncContext. Already scheduled Bottom Halves and AIO callbacks
* won't be called until this context is left again.
*/
void async_context_push(void)
{
struct AsyncContext *new = qemu_mallocz(sizeof(*new));
new->parent = async_context;
new->id = async_context->id + 1;
async_context = new;
}
/* Run queued AIO completions and destroy Bottom Half */
static void bh_run_aio_completions(void *opaque)
{
QEMUBH **bh = opaque;
qemu_bh_delete(*bh);
qemu_free(bh);
qemu_aio_process_queue();
}
/*
* Leave the currently active AsyncContext. All Bottom Halves belonging to the
* old context are executed before changing the context.
*/
void async_context_pop(void)
{
struct AsyncContext *old = async_context;
QEMUBH **bh;
/* Flush the bottom halves, we don't want to lose them */
while (qemu_bh_poll());
/* Switch back to the parent context */
async_context = async_context->parent;
qemu_free(old);
if (async_context == NULL) {
abort();
}
/* Schedule BH to run any queued AIO completions as soon as possible */
bh = qemu_malloc(sizeof(*bh));
*bh = qemu_bh_new(bh_run_aio_completions, bh);
qemu_bh_schedule(*bh);
}
/*
* Returns the ID of the currently active AsyncContext
*/
int get_async_context_id(void)
{
return async_context->id;
}
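
An illustrative caller pattern for the context stack above, written as a sketch rather than taken from the tree: a request that must perform emulated-synchronous I/O pushes its own AsyncContext so that only its own completions run while it waits, then pops back to the parent when done. The function name is invented; the prototypes merely restate the functions defined above so the snippet stands alone.

void async_context_push(void);   /* defined above */
void async_context_pop(void);    /* defined above */

void sync_metadata_update_example(void)
{
    async_context_push();        /* foreign callbacks are queued from now on */

    /*
     * ... issue emulated-synchronous I/O here (bdrv_read/bdrv_write style)
     * and wait for it; only completions belonging to this context run ...
     */

    async_context_pop();         /* back to the parent; queued work resumes */
}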
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
struct QEMUBH {
AioContext *ctx;
QEMUBHFunc *cb;
void *opaque;
int scheduled;
int idle;
int deleted;
QEMUBH *next;
bool scheduled;
bool idle;
bool deleted;
};
QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
{
QEMUBH *bh;
bh = g_new(QEMUBH, 1);
*bh = (QEMUBH){
.ctx = ctx,
.cb = cb,
.opaque = opaque,
};
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
qemu_mutex_unlock(&ctx->bh_lock);
bh = qemu_mallocz(sizeof(QEMUBH));
bh->cb = cb;
bh->opaque = opaque;
bh->next = async_context->first_bh;
async_context->first_bh = bh;
return bh;
}
/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
int aio_bh_poll(AioContext *ctx)
int qemu_bh_poll(void)
{
QEMUBH *bh, **bhp, *next;
QEMUBH *bh, **bhp;
int ret;
ctx->walking_bh++;
ret = 0;
for (bh = ctx->first_bh; bh; bh = next) {
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
for (bh = async_context->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
/* Paired with write barrier in bh schedule to ensure reading for
* idle & callbacks coming after bh's scheduling.
*/
smp_rmb();
if (!bh->idle)
ret = 1;
bh->idle = 0;
@@ -85,23 +151,16 @@ int aio_bh_poll(AioContext *ctx)
}
}
ctx->walking_bh--;
/* remove deleted bhs */
if (!ctx->walking_bh) {
qemu_mutex_lock(&ctx->bh_lock);
bhp = &ctx->first_bh;
bhp = &async_context->first_bh;
while (*bhp) {
bh = *bhp;
if (bh->deleted) {
*bhp = bh->next;
g_free(bh);
} else {
qemu_free(bh);
} else
bhp = &bh->next;
}
}
qemu_mutex_unlock(&ctx->bh_lock);
}
return ret;
}
@@ -110,227 +169,48 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
smp_wmb();
bh->scheduled = 1;
bh->idle = 1;
}
void qemu_bh_schedule(QEMUBH *bh)
{
AioContext *ctx;
if (bh->scheduled)
return;
ctx = bh->ctx;
bh->idle = 0;
/* Make sure that:
* 1. idle & any writes needed by the callback are done before the
* locations are read in the aio_bh_poll.
* 2. ctx is loaded before scheduled is set and the callback has a chance
* to execute.
*/
smp_mb();
bh->scheduled = 1;
aio_notify(ctx);
bh->idle = 0;
/* stop the currently executing CPU to execute the BH ASAP */
qemu_notify_event();
}
/* This func is async.
*/
void qemu_bh_cancel(QEMUBH *bh)
{
bh->scheduled = 0;
}
/* This func is async. The bottom half will do the delete action at the final
 * end.
 */
void qemu_bh_delete(QEMUBH *bh)
{
bh->scheduled = 0;
bh->deleted = 1;
}
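
A minimal usage sketch for the bottom-half API above: create a BH, schedule it, and let the callback mark it for deletion so that qemu_bh_poll() reclaims it on its next pass. The ExampleState type and both example functions are invented for illustration; the typedef and prototypes restate the API defined above so the snippet stands alone.

typedef struct QEMUBH QEMUBH;                    /* opaque, defined above */
typedef void QEMUBHFunc(void *opaque);
QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque);
void qemu_bh_schedule(QEMUBH *bh);
void qemu_bh_delete(QEMUBH *bh);

typedef struct ExampleState {
    QEMUBH *bh;
    int counter;
} ExampleState;

static void example_bh_cb(void *opaque)
{
    ExampleState *s = opaque;

    s->counter++;
    qemu_bh_delete(s->bh);     /* flagged now, actually freed by qemu_bh_poll() */
}

void example_arm(ExampleState *s)
{
    s->bh = qemu_bh_new(example_bh_cb, s);
    qemu_bh_schedule(s->bh);   /* callback runs on the next qemu_bh_poll() pass */
}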
int64_t
aio_compute_timeout(AioContext *ctx)
void qemu_bh_update_timeout(int *timeout)
{
int64_t deadline;
int timeout = -1;
QEMUBH *bh;
for (bh = ctx->first_bh; bh; bh = bh->next) {
for (bh = async_context->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least
* every 10ms */
timeout = 10000000;
*timeout = MIN(10, *timeout);
} else {
/* non-idle bottom halves will be executed
* immediately */
return 0;
}
}
}
deadline = timerlistgroup_deadline_ns(&ctx->tlg);
if (deadline == 0) {
return 0;
} else {
return qemu_soonest_timeout(timeout, deadline);
}
}
static gboolean
aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
/* We assume there is no timeout already supplied */
*timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
if (aio_prepare(ctx)) {
*timeout = 0;
}
return *timeout == 0;
}
static gboolean
aio_ctx_check(GSource *source)
{
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
return true;
break;
}
}
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
}
static gboolean
aio_ctx_dispatch(GSource *source,
GSourceFunc callback,
gpointer user_data)
{
AioContext *ctx = (AioContext *) source;
assert(callback == NULL);
aio_dispatch(ctx);
return true;
}
static void
aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
thread_pool_free(ctx->thread_pool);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
static GSourceFuncs aio_source_funcs = {
aio_ctx_prepare,
aio_ctx_check,
aio_ctx_dispatch,
aio_ctx_finalize
};
GSource *aio_get_g_source(AioContext *ctx)
{
g_source_ref(&ctx->source);
return &ctx->source;
}
ThreadPool *aio_get_thread_pool(AioContext *ctx)
{
if (!ctx->thread_pool) {
ctx->thread_pool = thread_pool_new(ctx);
}
return ctx->thread_pool;
}
void aio_set_dispatching(AioContext *ctx, bool dispatching)
{
ctx->dispatching = dispatching;
if (!dispatching) {
/* Write ctx->dispatching before reading e.g. bh->scheduled.
* Optimization: this is only needed when we're entering the "unsafe"
* phase where other threads must call event_notifier_set.
*/
smp_mb();
}
}
void aio_notify(AioContext *ctx)
{
/* Write e.g. bh->scheduled before reading ctx->dispatching. */
smp_mb();
if (!ctx->dispatching) {
event_notifier_set(&ctx->notifier);
}
}
static void aio_timerlist_notify(void *opaque)
{
aio_notify(opaque);
}
static void aio_rfifolock_cb(void *opaque)
{
/* Kick owner thread in case they are blocked in aio_poll() */
aio_notify(opaque);
}
AioContext *aio_context_new(Error **errp)
{
int ret;
AioContext *ctx;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
ret = event_notifier_init(&ctx->notifier, false);
if (ret < 0) {
g_source_destroy(&ctx->source);
error_setg_errno(errp, -ret, "Failed to initialize event notifier");
return NULL;
}
g_source_set_can_recurse(&ctx->source, true);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear);
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
return ctx;
}
void aio_context_ref(AioContext *ctx)
{
g_source_ref(&ctx->source);
}
void aio_context_unref(AioContext *ctx)
{
g_source_unref(&ctx->source);
}
void aio_context_acquire(AioContext *ctx)
{
rfifolock_lock(&ctx->lock);
}
void aio_context_release(AioContext *ctx)
{
rfifolock_unlock(&ctx->lock);
}

View File

@@ -1,17 +0,0 @@
common-obj-y = audio.o noaudio.o wavaudio.o mixeng.o
common-obj-$(CONFIG_SDL) += sdlaudio.o
common-obj-$(CONFIG_OSS) += ossaudio.o
common-obj-$(CONFIG_SPICE) += spiceaudio.o
common-obj-$(CONFIG_COREAUDIO) += coreaudio.o
common-obj-$(CONFIG_ALSA) += alsaaudio.o
common-obj-$(CONFIG_DSOUND) += dsoundaudio.o
common-obj-$(CONFIG_FMOD) += fmodaudio.o
common-obj-$(CONFIG_ESD) += esdaudio.o
common-obj-$(CONFIG_PA) += paaudio.o
common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
sdlaudio.o-cflags := $(SDL_CFLAGS)

View File

@@ -23,7 +23,7 @@
*/
#include <alsa/asoundlib.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu-char.h"
#include "audio.h"
#if QEMU_GNUC_PREREQ(4, 3)
@@ -136,7 +136,7 @@ static void alsa_fini_poll (struct pollhlp *hlp)
for (i = 0; i < hlp->count; ++i) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
qemu_free (pfds);
}
hlp->pfds = NULL;
hlp->count = 0;
@@ -260,7 +260,7 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
if (err < 0) {
alsa_logerr (err, "Could not initialize poll mode\n"
"Could not obtain poll descriptors\n");
g_free (pfds);
qemu_free (pfds);
return -1;
}
@@ -288,7 +288,7 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
while (i--) {
qemu_set_fd_handler (pfds[i].fd, NULL, NULL, NULL);
}
g_free (pfds);
qemu_free (pfds);
return -1;
}
}
@@ -318,7 +318,7 @@ static int alsa_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static snd_pcm_format_t aud_to_alsafmt (audfmt_e fmt, int endianness)
static snd_pcm_format_t aud_to_alsafmt (audfmt_e fmt)
{
switch (fmt) {
case AUD_FMT_S8:
@@ -328,36 +328,16 @@ static snd_pcm_format_t aud_to_alsafmt (audfmt_e fmt, int endianness)
return SND_PCM_FORMAT_U8;
case AUD_FMT_S16:
if (endianness) {
return SND_PCM_FORMAT_S16_BE;
}
else {
return SND_PCM_FORMAT_S16_LE;
}
case AUD_FMT_U16:
if (endianness) {
return SND_PCM_FORMAT_U16_BE;
}
else {
return SND_PCM_FORMAT_U16_LE;
}
case AUD_FMT_S32:
if (endianness) {
return SND_PCM_FORMAT_S32_BE;
}
else {
return SND_PCM_FORMAT_S32_LE;
}
case AUD_FMT_U32:
if (endianness) {
return SND_PCM_FORMAT_U32_BE;
}
else {
return SND_PCM_FORMAT_U32_LE;
}
default:
dolog ("Internal logic error: Bad audio format %d\n", fmt);
@@ -815,8 +795,10 @@ static void alsa_fini_out (HWVoiceOut *hw)
ldebug ("alsa_fini\n");
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
g_free(alsa->pcm_buf);
if (alsa->pcm_buf) {
qemu_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}
static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
@@ -827,7 +809,7 @@ static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
snd_pcm_t *handle;
struct audsettings obt_as;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.fmt = aud_to_alsafmt (as->fmt);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf.period_size_out;
@@ -861,15 +843,11 @@ static int alsa_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
}
#define VOICE_CTL_PAUSE 0
#define VOICE_CTL_PREPARE 1
#define VOICE_CTL_START 2
static int alsa_voice_ctl (snd_pcm_t *handle, const char *typ, int ctl)
static int alsa_voice_ctl (snd_pcm_t *handle, const char *typ, int pause)
{
int err;
if (ctl == VOICE_CTL_PAUSE) {
if (pause) {
err = snd_pcm_drop (handle);
if (err < 0) {
alsa_logerr (err, "Could not stop %s\n", typ);
@@ -882,13 +860,6 @@ static int alsa_voice_ctl (snd_pcm_t *handle, const char *typ, int ctl)
alsa_logerr (err, "Could not prepare handle for %s\n", typ);
return -1;
}
if (ctl == VOICE_CTL_START) {
err = snd_pcm_start(handle);
if (err < 0) {
alsa_logerr (err, "Could not start handle for %s\n", typ);
return -1;
}
}
}
return 0;
@@ -913,16 +884,12 @@ static int alsa_ctl_out (HWVoiceOut *hw, int cmd, ...)
poll_mode = 0;
}
hw->poll_mode = poll_mode;
return alsa_voice_ctl (alsa->handle, "playback", VOICE_CTL_PREPARE);
return alsa_voice_ctl (alsa->handle, "playback", 0);
}
case VOICE_DISABLE:
ldebug ("disabling voice\n");
if (hw->poll_mode) {
hw->poll_mode = 0;
alsa_fini_poll (&alsa->pollhlp);
}
return alsa_voice_ctl (alsa->handle, "playback", VOICE_CTL_PAUSE);
return alsa_voice_ctl (alsa->handle, "playback", 1);
}
return -1;
@@ -936,7 +903,7 @@ static int alsa_init_in (HWVoiceIn *hw, struct audsettings *as)
snd_pcm_t *handle;
struct audsettings obt_as;
req.fmt = aud_to_alsafmt (as->fmt, as->endianness);
req.fmt = aud_to_alsafmt (as->fmt);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.period_size = conf.period_size_in;
@@ -976,8 +943,10 @@ static void alsa_fini_in (HWVoiceIn *hw)
alsa_anal_close (&alsa->handle, &alsa->pollhlp);
g_free(alsa->pcm_buf);
if (alsa->pcm_buf) {
qemu_free (alsa->pcm_buf);
alsa->pcm_buf = NULL;
}
}
static int alsa_run_in (HWVoiceIn *hw)
@@ -1093,7 +1062,7 @@ static int alsa_run_in (HWVoiceIn *hw)
}
}
hw->conv (dst, src, nread);
hw->conv (dst, src, nread, &nominal_volume);
src = advance (src, nread << hwshift);
dst += nread;
@@ -1133,7 +1102,7 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
}
hw->poll_mode = poll_mode;
return alsa_voice_ctl (alsa->handle, "capture", VOICE_CTL_START);
return alsa_voice_ctl (alsa->handle, "capture", 0);
}
case VOICE_DISABLE:
@@ -1142,7 +1111,7 @@ static int alsa_ctl_in (HWVoiceIn *hw, int cmd, ...)
hw->poll_mode = 0;
alsa_fini_poll (&alsa->pollhlp);
}
return alsa_voice_ctl (alsa->handle, "capture", VOICE_CTL_PAUSE);
return alsa_voice_ctl (alsa->handle, "capture", 1);
}
return -1;

View File

@@ -23,9 +23,9 @@
*/
#include "hw/hw.h"
#include "audio.h"
#include "monitor/monitor.h"
#include "qemu/timer.h"
#include "sysemu/sysemu.h"
#include "monitor.h"
#include "qemu-timer.h"
#include "sysemu.h"
#define AUDIO_CAP "audio"
#include "audio_int.h"
@@ -44,9 +44,6 @@
that we generate the list.
*/
static struct audio_driver *drvtab[] = {
#ifdef CONFIG_SPICE
&spice_audio_driver,
#endif
CONFIG_AUDIO_DRIVERS
&no_audio_driver,
&wav_audio_driver
@@ -95,7 +92,7 @@ static struct {
}
},
.period = { .hertz = 100 },
.period = { .hertz = 250 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
@@ -104,7 +101,7 @@ static struct {
static AudioState glob_audio_state;
const struct mixeng_volume nominal_volume = {
struct mixeng_volume nominal_volume = {
.mute = 0,
#ifdef FLOAT_MIXENG
.r = 1.0,
@@ -196,7 +193,7 @@ void *audio_calloc (const char *funcname, int nmemb, size_t size)
return NULL;
}
return g_malloc0 (len);
return qemu_mallocz (len);
}
static char *audio_alloc_prefix (const char *s)
@@ -210,7 +207,7 @@ static char *audio_alloc_prefix (const char *s)
}
len = strlen (s);
r = g_malloc (len + sizeof (qemu_prefix));
r = qemu_malloc (len + sizeof (qemu_prefix));
u = r + sizeof (qemu_prefix) - 1;
@@ -425,7 +422,7 @@ static void audio_print_options (const char *prefix,
printf (" %s\n", opt->descr);
}
g_free (uprefix);
qemu_free (uprefix);
}
static void audio_process_options (const char *prefix,
@@ -462,7 +459,7 @@ static void audio_process_options (const char *prefix,
* (includes trailing zero) + zero + underscore (on behalf of
* sizeof) */
optlen = len + preflen + sizeof (qemu_prefix) + 1;
optname = g_malloc (optlen);
optname = qemu_malloc (optlen);
pstrcpy (optname, optlen, qemu_prefix);
@@ -507,7 +504,7 @@ static void audio_process_options (const char *prefix,
opt->overriddenp = &opt->overridden;
}
*opt->overriddenp = !def;
g_free (optname);
qemu_free (optname);
}
}
@@ -585,20 +582,17 @@ static int audio_pcm_info_eq (struct audio_pcm_info *info, struct audsettings *a
switch (as->fmt) {
case AUD_FMT_S8:
sign = 1;
/* fall through */
case AUD_FMT_U8:
break;
case AUD_FMT_S16:
sign = 1;
/* fall through */
case AUD_FMT_U16:
bits = 16;
break;
case AUD_FMT_S32:
sign = 1;
/* fall through */
case AUD_FMT_U32:
bits = 32;
break;
@@ -705,11 +699,13 @@ void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len)
/*
* Capture
*/
static void noop_conv (struct st_sample *dst, const void *src, int samples)
static void noop_conv (struct st_sample *dst, const void *src,
int samples, struct mixeng_volume *vol)
{
(void) src;
(void) dst;
(void) samples;
(void) vol;
}
static CaptureVoiceOut *audio_pcm_capture_find_specific (
@@ -781,7 +777,7 @@ static void audio_detach_capture (HWVoiceOut *hw)
QLIST_REMOVE (sw, entries);
QLIST_REMOVE (sc, entries);
g_free (sc);
qemu_free (sc);
if (was_active) {
/* We have removed soft voice from the capture:
this might have changed the overall status of the capture
@@ -818,19 +814,17 @@ static int audio_attach_capture (HWVoiceOut *hw)
sw->active = hw->enabled;
sw->conv = noop_conv;
sw->ratio = ((int64_t) hw_cap->info.freq << 32) / sw->info.freq;
sw->vol = nominal_volume;
sw->rate = st_rate_start (sw->info.freq, hw_cap->info.freq);
if (!sw->rate) {
dolog ("Could not start rate conversion for `%s'\n", SW_NAME (sw));
g_free (sw);
qemu_free (sw);
return -1;
}
QLIST_INSERT_HEAD (&hw_cap->sw_head, sw, entries);
QLIST_INSERT_HEAD (&hw->cap_head, sc, entries);
#ifdef DEBUG_CAPTURE
sw->name = g_strdup_printf ("for %p %d,%d,%d",
hw, sw->info.freq, sw->info.bits,
sw->info.nchannels);
asprintf (&sw->name, "for %p %d,%d,%d",
hw, sw->info.freq, sw->info.bits, sw->info.nchannels);
dolog ("Added %s active = %d\n", sw->name, sw->active);
#endif
if (sw->active) {
@@ -959,10 +953,6 @@ int audio_pcm_sw_read (SWVoiceIn *sw, void *buf, int size)
total += isamp;
}
if (!(hw->ctl_caps & VOICE_VOLUME_CAP)) {
mixeng_volume (sw->buf, ret, &sw->vol);
}
sw->clip (buf, sw->buf, ret);
sw->total_hw_samples_acquired += total;
return ret << sw->info.shift;
@@ -1044,11 +1034,7 @@ int audio_pcm_sw_write (SWVoiceOut *sw, void *buf, int size)
swlim = ((int64_t) dead << 32) / sw->ratio;
swlim = audio_MIN (swlim, samples);
if (swlim) {
sw->conv (sw->buf, buf, swlim);
if (!(sw->hw->ctl_caps & VOICE_VOLUME_CAP)) {
mixeng_volume (sw->buf, swlim, &sw->vol);
}
sw->conv (sw->buf, buf, swlim, &sw->vol);
}
while (swlim) {
@@ -1107,6 +1093,15 @@ static void audio_pcm_print_info (const char *cap, struct audio_pcm_info *info)
/*
* Timer
*/
static void audio_timer (void *opaque)
{
AudioState *s = opaque;
audio_run ("timer");
qemu_mod_timer (s->ts, qemu_get_clock (vm_clock) + conf.period.ticks);
}
static int audio_is_timer_needed (void)
{
HWVoiceIn *hwi = NULL;
@@ -1121,23 +1116,18 @@ static int audio_is_timer_needed (void)
return 0;
}
static void audio_reset_timer (AudioState *s)
static void audio_reset_timer (void)
{
AudioState *s = &glob_audio_state;
if (audio_is_timer_needed ()) {
timer_mod (s->ts,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + conf.period.ticks);
qemu_mod_timer (s->ts, qemu_get_clock (vm_clock) + 1);
}
else {
timer_del (s->ts);
qemu_del_timer (s->ts);
}
}
static void audio_timer (void *opaque)
{
audio_run ("timer");
audio_reset_timer (opaque);
}
/*
* Public API
*/
@@ -1202,7 +1192,7 @@ void AUD_set_active_out (SWVoiceOut *sw, int on)
hw->enabled = 1;
if (s->vm_running) {
hw->pcm_ops->ctl_out (hw, VOICE_ENABLE, conf.try_poll_out);
audio_reset_timer (s);
audio_reset_timer ();
}
}
}
@@ -1247,7 +1237,6 @@ void AUD_set_active_in (SWVoiceIn *sw, int on)
hw->enabled = 1;
if (s->vm_running) {
hw->pcm_ops->ctl_in (hw, VOICE_ENABLE, conf.try_poll_in);
audio_reset_timer (s);
}
}
sw->total_hw_samples_acquired = hw->total_samples_captured;
@@ -1676,7 +1665,7 @@ static void audio_pp_nb_voices (const char *typ, int nb)
printf ("Theoretically supports many %s voices\n", typ);
break;
default:
printf ("Theoretically supports up to %d %s voices\n", nb, typ);
printf ("Theoretically supports upto %d %s voices\n", nb, typ);
break;
}
@@ -1754,7 +1743,7 @@ static int audio_driver_init (AudioState *s, struct audio_driver *drv)
}
static void audio_vm_change_state_handler (void *opaque, int running,
RunState state)
int reason)
{
AudioState *s = opaque;
HWVoiceOut *hwo = NULL;
@@ -1769,7 +1758,7 @@ static void audio_vm_change_state_handler (void *opaque, int running,
while ((hwi = audio_pcm_hw_find_any_enabled_in (hwi))) {
hwi->pcm_ops->ctl_in (hwi, op, conf.try_poll_in);
}
audio_reset_timer (s);
audio_reset_timer ();
}
static void audio_atexit (void)
@@ -1778,12 +1767,10 @@ static void audio_atexit (void)
HWVoiceOut *hwo = NULL;
HWVoiceIn *hwi = NULL;
while ((hwo = audio_pcm_hw_find_any_out (hwo))) {
while ((hwo = audio_pcm_hw_find_any_enabled_out (hwo))) {
SWVoiceCap *sc;
if (hwo->enabled) {
hwo->pcm_ops->ctl_out (hwo, VOICE_DISABLE);
}
hwo->pcm_ops->fini_out (hwo);
for (sc = hwo->cap_head.lh_first; sc; sc = sc->entries.le_next) {
@@ -1796,10 +1783,8 @@ static void audio_atexit (void)
}
}
while ((hwi = audio_pcm_hw_find_any_in (hwi))) {
if (hwi->enabled) {
while ((hwi = audio_pcm_hw_find_any_enabled_in (hwi))) {
hwi->pcm_ops->ctl_in (hwi, VOICE_DISABLE);
}
hwi->pcm_ops->fini_in (hwi);
}
@@ -1812,7 +1797,8 @@ static const VMStateDescription vmstate_audio = {
.name = "audio",
.version_id = 1,
.minimum_version_id = 1,
.fields = (VMStateField[]) {
.minimum_version_id_old = 1,
.fields = (VMStateField []) {
VMSTATE_END_OF_LIST()
}
};
@@ -1834,7 +1820,7 @@ static void audio_init (void)
QLIST_INIT (&s->cap_head);
atexit (audio_atexit);
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
s->ts = qemu_new_timer (vm_clock, audio_timer, s);
if (!s->ts) {
hw_error("Could not create audio timer\n");
}
@@ -1921,7 +1907,7 @@ static void audio_init (void)
void AUD_register_card (const char *name, QEMUSoundCard *card)
{
audio_init ();
card->name = g_strdup (name);
card->name = qemu_strdup (name);
memset (&card->entries, 0, sizeof (card->entries));
QLIST_INSERT_HEAD (&glob_audio_state.card_head, card, entries);
}
@@ -1929,7 +1915,7 @@ void AUD_register_card (const char *name, QEMUSoundCard *card)
void AUD_remove_card (QEMUSoundCard *card)
{
QLIST_REMOVE (card, entries);
g_free (card->name);
qemu_free (card->name);
}
@@ -2014,11 +2000,11 @@ CaptureVoiceOut *AUD_add_capture (
return cap;
err3:
g_free (cap->hw.mix_buf);
qemu_free (cap->hw.mix_buf);
err2:
g_free (cap);
qemu_free (cap);
err1:
g_free (cb);
qemu_free (cb);
err0:
return NULL;
}
@@ -2032,7 +2018,7 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
if (cb->opaque == cb_opaque) {
cb->ops.destroy (cb_opaque);
QLIST_REMOVE (cb, entries);
g_free (cb);
qemu_free (cb);
if (!cap->cb_head.lh_first) {
SWVoiceOut *sw = cap->hw.sw_head.lh_first, *sw1;
@@ -2050,11 +2036,11 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
}
QLIST_REMOVE (sw, entries);
QLIST_REMOVE (sc, entries);
g_free (sc);
qemu_free (sc);
sw = sw1;
}
QLIST_REMOVE (cap, entries);
g_free (cap);
qemu_free (cap);
}
return;
}
@@ -2064,29 +2050,17 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
void AUD_set_volume_out (SWVoiceOut *sw, int mute, uint8_t lvol, uint8_t rvol)
{
if (sw) {
HWVoiceOut *hw = sw->hw;
sw->vol.mute = mute;
sw->vol.l = nominal_volume.l * lvol / 255;
sw->vol.r = nominal_volume.r * rvol / 255;
if (hw->pcm_ops->ctl_out) {
hw->pcm_ops->ctl_out (hw, VOICE_VOLUME, sw);
}
}
}
void AUD_set_volume_in (SWVoiceIn *sw, int mute, uint8_t lvol, uint8_t rvol)
{
if (sw) {
HWVoiceIn *hw = sw->hw;
sw->vol.mute = mute;
sw->vol.l = nominal_volume.l * lvol / 255;
sw->vol.r = nominal_volume.r * rvol / 255;
if (hw->pcm_ops->ctl_in) {
hw->pcm_ops->ctl_in (hw, VOICE_VOLUME, sw);
}
}
}

View File

@@ -25,7 +25,7 @@
#define QEMU_AUDIO_H
#include "config-host.h"
#include "qemu/queue.h"
#include "qemu-queue.h"
typedef void (*audio_callback_fn) (void *opaque, int avail);
@@ -86,8 +86,12 @@ typedef struct QEMUAudioTimeStamp {
uint64_t old_ts;
} QEMUAudioTimeStamp;
void AUD_vlog (const char *cap, const char *fmt, va_list ap) GCC_FMT_ATTR(2, 0);
void AUD_log (const char *cap, const char *fmt, ...) GCC_FMT_ATTR(2, 3);
void AUD_vlog (const char *cap, const char *fmt, va_list ap);
void AUD_log (const char *cap, const char *fmt, ...)
#ifdef __GNUC__
__attribute__ ((__format__ (__printf__, 2, 3)))
#endif
;
void AUD_help (void);
void AUD_register_card (const char *name, QEMUSoundCard *card);

View File

@@ -82,7 +82,6 @@ typedef struct HWVoiceOut {
int samples;
QLIST_HEAD (sw_out_listhead, SWVoiceOut) sw_head;
QLIST_HEAD (sw_cap_listhead, SWVoiceCap) cap_head;
int ctl_caps;
struct audio_pcm_ops *pcm_ops;
QLIST_ENTRY (HWVoiceOut) entries;
} HWVoiceOut;
@@ -102,7 +101,6 @@ typedef struct HWVoiceIn {
int samples;
QLIST_HEAD (sw_in_listhead, SWVoiceIn) sw_head;
int ctl_caps;
struct audio_pcm_ops *pcm_ops;
QLIST_ENTRY (HWVoiceIn) entries;
} HWVoiceIn;
@@ -152,7 +150,6 @@ struct audio_driver {
int max_voices_in;
int voice_size_out;
int voice_size_in;
int ctl_caps;
};
struct audio_pcm_ops {
@@ -212,9 +209,8 @@ extern struct audio_driver coreaudio_audio_driver;
extern struct audio_driver dsound_audio_driver;
extern struct audio_driver esd_audio_driver;
extern struct audio_driver pa_audio_driver;
extern struct audio_driver spice_audio_driver;
extern struct audio_driver winwave_audio_driver;
extern const struct mixeng_volume nominal_volume;
extern struct mixeng_volume nominal_volume;
void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
@@ -234,22 +230,52 @@ void audio_run (const char *msg);
#define VOICE_ENABLE 1
#define VOICE_DISABLE 2
#define VOICE_VOLUME 3
#define VOICE_VOLUME_CAP (1 << VOICE_VOLUME)
static inline int audio_ring_dist (int dst, int src, int len)
{
return (dst >= src) ? (dst - src) : (len - src + dst);
}
#define dolog(fmt, ...) AUD_log(AUDIO_CAP, fmt, ## __VA_ARGS__)
#if defined __GNUC__
#define GCC_ATTR __attribute__ ((__unused__, __format__ (__printf__, 1, 2)))
#define GCC_FMT_ATTR(n, m) __attribute__ ((__format__ (__printf__, n, m)))
#else
#define GCC_ATTR /**/
#define GCC_FMT_ATTR(n, m)
#endif
static void GCC_ATTR dolog (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
}
#ifdef DEBUG
#define ldebug(fmt, ...) AUD_log(AUDIO_CAP, fmt, ## __VA_ARGS__)
static void GCC_ATTR ldebug (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
}
#else
#define ldebug(fmt, ...) (void)0
#if defined NDEBUG && defined __GNUC__
#define ldebug(...)
#elif defined NDEBUG && defined _MSC_VER
#define ldebug __noop
#else
static void GCC_ATTR ldebug (const char *fmt, ...)
{
(void) fmt;
}
#endif
#endif
#undef GCC_ATTR
#define AUDIO_STRINGIFY_(n) #n
#define AUDIO_STRINGIFY(n) AUDIO_STRINGIFY_(n)

View File

@@ -6,8 +6,7 @@
#include "audio_int.h"
#include "audio_pt_int.h"
static void GCC_FMT_ATTR(3, 4) logerr (struct audio_pt *pt, int err,
const char *fmt, ...)
static void logerr (struct audio_pt *pt, int err, const char *fmt, ...)
{
va_list ap;
@@ -24,16 +23,9 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
{
int err, err2;
const char *efunc;
sigset_t set, old_set;
p->drv = drv;
err = sigfillset (&set);
if (err) {
logerr (p, errno, "%s(%s): sigfillset failed", cap, AUDIO_FUNC);
return -1;
}
err = pthread_mutex_init (&p->mutex, NULL);
if (err) {
efunc = "pthread_mutex_init";
@@ -46,23 +38,7 @@ int audio_pt_init (struct audio_pt *p, void *(*func) (void *),
goto err1;
}
err = pthread_sigmask (SIG_BLOCK, &set, &old_set);
if (err) {
efunc = "pthread_sigmask";
goto err2;
}
err = pthread_create (&p->thread, NULL, func, opaque);
err2 = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err2) {
logerr (p, err2, "%s(%s): pthread_sigmask (restore) failed",
cap, AUDIO_FUNC);
/* We have failed to restore original signal mask, all bets are off,
so terminate the process */
exit (EXIT_FAILURE);
}
if (err) {
efunc = "pthread_create";
goto err2;

View File

@@ -71,7 +71,10 @@ static void glue (audio_init_nb_voices_, TYPE) (struct audio_driver *drv)
static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
{
g_free (HWBUF);
if (HWBUF) {
qemu_free (HWBUF);
}
HWBUF = NULL;
}
@@ -89,7 +92,9 @@ static int glue (audio_pcm_hw_alloc_resources_, TYPE) (HW *hw)
static void glue (audio_pcm_sw_free_resources_, TYPE) (SW *sw)
{
g_free (sw->buf);
if (sw->buf) {
qemu_free (sw->buf);
}
if (sw->rate) {
st_rate_stop (sw->rate);
@@ -103,7 +108,11 @@ static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
{
int samples;
#ifdef DAC
samples = sw->hw->samples;
#else
samples = ((int64_t) sw->hw->samples << 32) / sw->ratio;
#endif
sw->buf = audio_calloc (AUDIO_FUNC, samples, sizeof (struct st_sample));
if (!sw->buf) {
@@ -118,7 +127,7 @@ static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
sw->rate = st_rate_start (sw->hw->info.freq, sw->info.freq);
#endif
if (!sw->rate) {
g_free (sw->buf);
qemu_free (sw->buf);
sw->buf = NULL;
return -1;
}
@@ -155,10 +164,10 @@ static int glue (audio_pcm_sw_init_, TYPE) (
[sw->info.swap_endianness]
[audio_bits_to_index (sw->info.bits)];
sw->name = g_strdup (name);
sw->name = qemu_strdup (name);
err = glue (audio_pcm_sw_alloc_resources_, TYPE) (sw);
if (err) {
g_free (sw->name);
qemu_free (sw->name);
sw->name = NULL;
}
return err;
@@ -167,8 +176,10 @@ static int glue (audio_pcm_sw_init_, TYPE) (
static void glue (audio_pcm_sw_fini_, TYPE) (SW *sw)
{
glue (audio_pcm_sw_free_resources_, TYPE) (sw);
g_free (sw->name);
if (sw->name) {
qemu_free (sw->name);
sw->name = NULL;
}
}
static void glue (audio_pcm_hw_add_sw_, TYPE) (HW *hw, SW *sw)
@@ -191,10 +202,10 @@ static void glue (audio_pcm_hw_gc_, TYPE) (HW **hwp)
audio_detach_capture (hw);
#endif
QLIST_REMOVE (hw, entries);
glue (hw->pcm_ops->fini_, TYPE) (hw);
glue (s->nb_hw_voices_, TYPE) += 1;
glue (audio_pcm_hw_free_resources_ ,TYPE) (hw);
g_free (hw);
glue (hw->pcm_ops->fini_, TYPE) (hw);
qemu_free (hw);
*hwp = NULL;
}
}
@@ -256,8 +267,6 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
}
hw->pcm_ops = drv->pcm_ops;
hw->ctl_caps = drv->ctl_caps;
QLIST_INIT (&hw->sw_head);
#ifdef DAC
QLIST_INIT (&hw->cap_head);
@@ -295,7 +304,7 @@ static HW *glue (audio_pcm_hw_add_new_, TYPE) (struct audsettings *as)
err1:
glue (hw->pcm_ops->fini_, TYPE) (hw);
err0:
g_free (hw);
qemu_free (hw);
return NULL;
}
@@ -363,7 +372,7 @@ err3:
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&hw);
err2:
g_free (sw);
qemu_free (sw);
err1:
return NULL;
}
@@ -373,7 +382,7 @@ static void glue (audio_close_, TYPE) (SW *sw)
glue (audio_pcm_sw_fini_, TYPE) (sw);
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&sw->hw);
g_free (sw);
qemu_free (sw);
}
void glue (AUD_close_, TYPE) (QEMUSoundCard *card, SW *sw)
@@ -403,15 +412,15 @@ SW *glue (AUD_open_, TYPE) (
SW *old_sw = NULL;
#endif
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug (AUDIO_FUNC, !card || !name || !callback_fn || !as)) {
dolog ("card=%p name=%p callback_fn=%p as=%p\n",
card, name, callback_fn, as);
goto fail;
}
ldebug ("open %s, freq %d, nchannels %d, fmt %d\n",
name, as->freq, as->nchannels, as->fmt);
if (audio_bug (AUDIO_FUNC, audio_validate_settings (as))) {
audio_print_settings (as);
goto fail;

View File

@@ -1,6 +1,7 @@
/* public domain */
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "win-int"
#include <windows.h>

View File

@@ -56,7 +56,7 @@ typedef struct coreaudioVoiceOut {
static void coreaudio_logstatus (OSStatus status)
{
const char *str = "BUG";
char *str = "BUG";
switch(status) {
case kAudioHardwareNoError:
@@ -104,7 +104,7 @@ static void coreaudio_logstatus (OSStatus status)
break;
default:
AUD_log (AUDIO_CAP, "Reason: status code %" PRId32 "\n", (int32_t)status);
AUD_log (AUDIO_CAP, "Reason: status code %ld\n", status);
return;
}
@@ -360,8 +360,8 @@ static int coreaudio_init_out (HWVoiceOut *hw, struct audsettings *as)
&core->audioDevicePropertyBufferFrameSize);
if (status != kAudioHardwareNoError) {
coreaudio_logerr2 (status, typ,
"Could not set device buffer frame size %" PRIu32 "\n",
(uint32_t)core->audioDevicePropertyBufferFrameSize);
"Could not set device buffer frame size %ld\n",
core->audioDevicePropertyBufferFrameSize);
return -1;
}

View File

@@ -831,11 +831,11 @@ static int dsound_run_in (HWVoiceIn *hw)
decr = len1 + len2;
if (p1 && len1) {
hw->conv (hw->conv_buf + hw->wpos, p1, len1);
hw->conv (hw->conv_buf + hw->wpos, p1, len1, &nominal_volume);
}
if (p2 && len2) {
hw->conv (hw->conv_buf, p2, len2);
hw->conv (hw->conv_buf, p2, len2, &nominal_volume);
}
dsound_unlock_in (dscb, p1, p2, blen1, blen2);

View File

@@ -24,6 +24,7 @@
#include <esd.h>
#include "qemu-common.h"
#include "audio.h"
#include <signal.h>
#define AUDIO_CAP "esd"
#include "audio_int.h"
@@ -189,6 +190,10 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
ESDVoiceOut *esd = (ESDVoiceOut *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_PLAY;
int err;
sigset_t set, old_set;
sigfillset (&set);
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
@@ -201,7 +206,7 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
case AUD_FMT_S32:
case AUD_FMT_U32:
dolog ("Will use 16 instead of 32 bit samples\n");
/* fall through */
case AUD_FMT_S16:
case AUD_FMT_U16:
deffmt:
@@ -226,27 +231,45 @@ static int qesd_init_out (HWVoiceOut *hw, struct audsettings *as)
return -1;
}
esd->fd = esd_play_stream (esdfmt, as->freq, conf.dac_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_play_stream failed\n");
esd->fd = -1;
err = pthread_sigmask (SIG_BLOCK, &set, &old_set);
if (err) {
qesd_logerr (err, "pthread_sigmask failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_out, esd, AUDIO_CAP, AUDIO_FUNC)) {
esd->fd = esd_play_stream (esdfmt, as->freq, conf.dac_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_play_stream failed\n");
goto fail2;
}
if (audio_pt_init (&esd->pt, qesd_thread_out, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail3;
}
err = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err) {
qesd_logerr (err, "pthread_sigmask(restore) failed\n");
}
return 0;
fail2:
fail3:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail2:
err = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err) {
qesd_logerr (err, "pthread_sigmask(restore) failed\n");
}
fail1:
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
@@ -270,7 +293,7 @@ static void qesd_fini_out (HWVoiceOut *hw)
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}
@@ -346,7 +369,8 @@ static void *qesd_thread_in (void *arg)
break;
}
hw->conv (hw->conv_buf + wpos, buf, nread >> hw->info.shift);
hw->conv (hw->conv_buf + wpos, buf, nread >> hw->info.shift,
&nominal_volume);
wpos = (wpos + chunk) % hw->samples;
to_grab -= chunk;
}
@@ -399,6 +423,10 @@ static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
ESDVoiceIn *esd = (ESDVoiceIn *) hw;
struct audsettings obt_as = *as;
int esdfmt = ESD_STREAM | ESD_RECORD;
int err;
sigset_t set, old_set;
sigfillset (&set);
esdfmt |= (as->nchannels == 2) ? ESD_STEREO : ESD_MONO;
switch (as->fmt) {
@@ -433,27 +461,46 @@ static int qesd_init_in (HWVoiceIn *hw, struct audsettings *as)
return -1;
}
esd->fd = esd_record_stream (esdfmt, as->freq, conf.adc_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_record_stream failed\n");
esd->fd = -1;
err = pthread_sigmask (SIG_BLOCK, &set, &old_set);
if (err) {
qesd_logerr (err, "pthread_sigmask failed\n");
goto fail1;
}
if (audio_pt_init (&esd->pt, qesd_thread_in, esd, AUDIO_CAP, AUDIO_FUNC)) {
esd->fd = esd_record_stream (esdfmt, as->freq, conf.adc_host, NULL);
if (esd->fd < 0) {
qesd_logerr (errno, "esd_record_stream failed\n");
goto fail2;
}
if (audio_pt_init (&esd->pt, qesd_thread_in, esd, AUDIO_CAP, AUDIO_FUNC)) {
goto fail3;
}
err = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err) {
qesd_logerr (err, "pthread_sigmask(restore) failed\n");
}
return 0;
fail2:
fail3:
if (close (esd->fd)) {
qesd_logerr (errno, "%s: close on esd socket(%d) failed\n",
AUDIO_FUNC, esd->fd);
}
esd->fd = -1;
fail2:
err = pthread_sigmask (SIG_SETMASK, &old_set, NULL);
if (err) {
qesd_logerr (err, "pthread_sigmask(restore) failed\n");
}
fail1:
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
return -1;
}
@@ -477,7 +524,7 @@ static void qesd_fini_in (HWVoiceIn *hw)
audio_pt_fini (&esd->pt, AUDIO_FUNC);
g_free (esd->pcm_buf);
qemu_free (esd->pcm_buf);
esd->pcm_buf = NULL;
}

View File

@@ -343,7 +343,7 @@ static void fmod_fini_out (HWVoiceOut *hw)
static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int mode, channel;
int bits16, mode, channel;
FMODVoiceOut *fmd = (FMODVoiceOut *) hw;
struct audsettings obt_as = *as;
@@ -374,6 +374,7 @@ static int fmod_init_out (HWVoiceOut *hw, struct audsettings *as)
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
bits16 = (mode & FSOUND_16BITS) != 0;
hw->samples = conf.nb_samples;
return 0;
}
@@ -404,7 +405,7 @@ static int fmod_ctl_out (HWVoiceOut *hw, int cmd, ...)
static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
{
int mode;
int bits16, mode;
FMODVoiceIn *fmd = (FMODVoiceIn *) hw;
struct audsettings obt_as = *as;
@@ -431,6 +432,7 @@ static int fmod_init_in (HWVoiceIn *hw, struct audsettings *as)
/* FMOD always operates on little endian frames? */
obt_as.endianness = 0;
audio_pcm_init_info (&hw->info, &obt_as);
bits16 = (mode & FSOUND_16BITS) != 0;
hw->samples = conf.nb_samples;
return 0;
}
@@ -486,10 +488,10 @@ static int fmod_run_in (HWVoiceIn *hw)
decr = len1 + len2;
if (p1 && blen1) {
hw->conv (hw->conv_buf + hw->wpos, p1, len1);
hw->conv (hw->conv_buf + hw->wpos, p1, len1, &nominal_volume);
}
if (p2 && len2) {
hw->conv (hw->conv_buf, p2, len2);
hw->conv (hw->conv_buf, p2, len2, &nominal_volume);
}
fmod_unlock_sample (fmd->fmod_sample, p1, p2, blen1, blen2);

View File

@@ -33,8 +33,7 @@
#define ENDIAN_CONVERT(v) (v)
/* Signed 8 bit */
#define BSIZE 8
#define ITYPE int
#define IN_T int8_t
#define IN_MIN SCHAR_MIN
#define IN_MAX SCHAR_MAX
#define SIGNED
@@ -43,29 +42,25 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 8 bit */
#define BSIZE 8
#define ITYPE uint
#define IN_T uint8_t
#define IN_MIN 0
#define IN_MAX UCHAR_MAX
#define SHIFT 8
#include "mixeng_template.h"
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
#undef ENDIAN_CONVERT
#undef ENDIAN_CONVERSION
/* Signed 16 bit */
#define BSIZE 16
#define ITYPE int
#define IN_T int16_t
#define IN_MIN SHRT_MIN
#define IN_MAX SHRT_MAX
#define SIGNED
@@ -83,13 +78,11 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 16 bit */
#define BSIZE 16
#define ITYPE uint
#define IN_T uint16_t
#define IN_MIN 0
#define IN_MAX USHRT_MAX
#define SHIFT 16
@@ -105,13 +98,11 @@
#undef ENDIAN_CONVERSION
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Signed 32 bit */
#define BSIZE 32
#define ITYPE int
#define IN_T int32_t
#define IN_MIN INT32_MIN
#define IN_MAX INT32_MAX
#define SIGNED
@@ -129,13 +120,11 @@
#undef SIGNED
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
/* Unsigned 32 bit */
#define BSIZE 32
#define ITYPE uint
#define IN_T uint32_t
#define IN_MIN 0
#define IN_MAX UINT32_MAX
#define SHIFT 32
@@ -151,8 +140,7 @@
#undef ENDIAN_CONVERSION
#undef IN_MAX
#undef IN_MIN
#undef BSIZE
#undef ITYPE
#undef IN_T
#undef SHIFT
t_sample *mixeng_conv[2][2][2][3] = {
@@ -338,29 +326,10 @@ void *st_rate_start (int inrate, int outrate)
void st_rate_stop (void *opaque)
{
g_free (opaque);
qemu_free (opaque);
}
void mixeng_clear (struct st_sample *buf, int len)
{
memset (buf, 0, len * sizeof (struct st_sample));
}
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol)
{
if (vol->mute) {
mixeng_clear (buf, len);
return;
}
while (len--) {
#ifdef FLOAT_MIXENG
buf->l = buf->l * vol->l;
buf->r = buf->r * vol->r;
#else
buf->l = (buf->l * vol->l) >> 32;
buf->r = (buf->r * vol->r) >> 32;
#endif
buf += 1;
}
}

View File

@@ -33,7 +33,8 @@ struct mixeng_volume { int mute; int64_t r; int64_t l; };
struct st_sample { int64_t l; int64_t r; };
#endif
typedef void (t_sample) (struct st_sample *dst, const void *src, int samples);
typedef void (t_sample) (struct st_sample *dst, const void *src,
int samples, struct mixeng_volume *vol);
typedef void (f_sample) (void *dst, const struct st_sample *src, int samples);
extern t_sample *mixeng_conv[2][2][2][3];
@@ -46,6 +47,5 @@ void st_rate_flow_mix (void *opaque, struct st_sample *ibuf, struct st_sample *o
int *isamp, int *osamp);
void st_rate_stop (void *opaque);
void mixeng_clear (struct st_sample *buf, int len);
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol);
#endif /* mixeng.h */

View File

@@ -31,11 +31,20 @@
#define HALF (IN_MAX >> 1)
#endif
#define ET glue (ENDIAN_CONVERSION, glue (glue (glue (_, ITYPE), BSIZE), _t))
#define IN_T glue (glue (ITYPE, BSIZE), _t)
#ifdef CONFIG_MIXEMU
#ifdef FLOAT_MIXENG
#define VOL(a, b) ((a) * (b))
#else
#define VOL(a, b) ((a) * (b)) >> 32
#endif
#else
#define VOL(a, b) a
#endif
#define ET glue (ENDIAN_CONVERSION, glue (_, IN_T))
#ifdef FLOAT_MIXENG
static inline mixeng_real glue (conv_, ET) (IN_T v)
static mixeng_real inline glue (conv_, ET) (IN_T v)
{
IN_T nv = ENDIAN_CONVERT (v);
@@ -47,14 +56,14 @@ static inline mixeng_real glue (conv_, ET) (IN_T v)
#endif
#else /* !RECIPROCAL */
#ifdef SIGNED
return nv / (mixeng_real) ((mixeng_real) IN_MAX - IN_MIN);
return nv / (mixeng_real) (IN_MAX - IN_MIN);
#else
return (nv - HALF) / (mixeng_real) IN_MAX;
#endif
#endif
}
static inline IN_T glue (clip_, ET) (mixeng_real v)
static IN_T inline glue (clip_, ET) (mixeng_real v)
{
if (v >= 0.5) {
return IN_MAX;
@@ -64,7 +73,7 @@ static inline IN_T glue (clip_, ET) (mixeng_real v)
}
#ifdef SIGNED
return ENDIAN_CONVERT ((IN_T) (v * ((mixeng_real) IN_MAX - IN_MIN)));
return ENDIAN_CONVERT ((IN_T) (v * (IN_MAX - IN_MIN)));
#else
return ENDIAN_CONVERT ((IN_T) ((v * IN_MAX) + HALF));
#endif
@@ -100,26 +109,40 @@ static inline IN_T glue (clip_, ET) (int64_t v)
#endif
static void glue (glue (conv_, ET), _to_stereo)
(struct st_sample *dst, const void *src, int samples)
(struct st_sample *dst, const void *src, int samples, struct mixeng_volume *vol)
{
struct st_sample *out = dst;
IN_T *in = (IN_T *) src;
#ifdef CONFIG_MIXEMU
if (vol->mute) {
mixeng_clear (dst, samples);
return;
}
#else
(void) vol;
#endif
while (samples--) {
out->l = glue (conv_, ET) (*in++);
out->r = glue (conv_, ET) (*in++);
out->l = VOL (glue (conv_, ET) (*in++), vol->l);
out->r = VOL (glue (conv_, ET) (*in++), vol->r);
out += 1;
}
}
static void glue (glue (conv_, ET), _to_mono)
(struct st_sample *dst, const void *src, int samples)
(struct st_sample *dst, const void *src, int samples, struct mixeng_volume *vol)
{
struct st_sample *out = dst;
IN_T *in = (IN_T *) src;
#ifdef CONFIG_MIXEMU
if (vol->mute) {
mixeng_clear (dst, samples);
return;
}
#else
(void) vol;
#endif
while (samples--) {
out->l = glue (conv_, ET) (in[0]);
out->l = VOL (glue (conv_, ET) (in[0]), vol->l);
out->r = out->l;
out += 1;
in += 1;
@@ -151,4 +174,4 @@ static void glue (glue (clip_, ET), _from_mono)
#undef ET
#undef HALF
#undef IN_T
#undef VOL
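
A note on the integer VOL() path above: when FLOAT_MIXENG is not set, samples and volume factors are 64-bit fixed point, and the product is shifted right by 32 bits. A minimal standalone sketch of that scaling, with hypothetical names (not part of mixeng itself):

#include <stdint.h>
#include <stdio.h>

/* Scale a sample by a Q32 volume factor, as the non-float VOL(a, b)
 * macro above does: multiply, then drop 32 fractional bits. */
static int64_t scale_q32(int64_t sample, int64_t vol_q32)
{
    return (sample * vol_q32) >> 32;
}

int main(void)
{
    int64_t unity = 1LL << 32;            /* 1.0 in Q32 */
    int64_t half  = 1LL << 31;            /* 0.5 in Q32 */
    printf("%lld %lld\n",
           (long long) scale_q32(1000, unity),   /* prints 1000 */
           (long long) scale_q32(1000, half));   /* prints 500  */
    return 0;
}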

View File

@@ -23,7 +23,7 @@
*/
#include "qemu-common.h"
#include "audio.h"
#include "qemu/timer.h"
#include "qemu-timer.h"
#define AUDIO_CAP "noaudio"
#include "audio_int.h"
@@ -46,7 +46,7 @@ static int no_run_out (HWVoiceOut *hw, int live)
int64_t ticks;
int64_t bytes;
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
now = qemu_get_clock (vm_clock);
ticks = now - no->old_ticks;
bytes = muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
bytes = audio_MIN (bytes, INT_MAX);
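
The hunk above paces the null backend off the virtual clock: the byte budget is the elapsed time multiplied by bytes_per_second and divided by the clock rate. A hedged sketch of that arithmetic with a hypothetical helper (the tree itself uses muldiv64(), which keeps a wider intermediate):

#include <stdint.h>

/* How many bytes of audio correspond to `ns` nanoseconds of elapsed
 * time at `bytes_per_second`; the division is split to limit overflow. */
static int64_t elapsed_bytes(int64_t ns, int64_t bytes_per_second)
{
    const int64_t ns_per_sec = 1000000000LL;
    return (ns / ns_per_sec) * bytes_per_second
           + (ns % ns_per_sec) * bytes_per_second / ns_per_sec;
}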
@@ -102,7 +102,7 @@ static int no_run_in (HWVoiceIn *hw)
int samples = 0;
if (dead) {
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t now = qemu_get_clock (vm_clock);
int64_t ticks = now - no->old_ticks;
int64_t bytes =
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
@@ -117,14 +117,11 @@ static int no_run_in (HWVoiceIn *hw)
static int no_read (SWVoiceIn *sw, void *buf, int size)
{
/* use custom code here instead of audio_pcm_sw_read() to avoid
* useless resampling/mixing */
int samples = size >> sw->info.shift;
int total = sw->hw->total_samples_captured - sw->total_hw_samples_acquired;
int to_clear = audio_MIN (samples, total);
sw->total_hw_samples_acquired += total;
audio_pcm_info_clear_buf (&sw->info, buf, to_clear);
return to_clear << sw->info.shift;
return to_clear;
}
static int no_ctl_in (HWVoiceIn *hw, int cmd, ...)

View File

@@ -25,10 +25,14 @@
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#ifdef __OpenBSD__
#include <soundcard.h>
#else
#include <sys/soundcard.h>
#endif
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu/host-utils.h"
#include "host-utils.h"
#include "qemu-char.h"
#include "audio.h"
#define AUDIO_CAP "oss"
@@ -157,7 +161,7 @@ static int oss_write (SWVoiceOut *sw, void *buf, int len)
return audio_pcm_sw_write (sw, buf, len);
}
static int aud_to_ossfmt (audfmt_e fmt, int endianness)
static int aud_to_ossfmt (audfmt_e fmt)
{
switch (fmt) {
case AUD_FMT_S8:
@@ -167,20 +171,10 @@ static int aud_to_ossfmt (audfmt_e fmt, int endianness)
return AFMT_U8;
case AUD_FMT_S16:
if (endianness) {
return AFMT_S16_BE;
}
else {
return AFMT_S16_LE;
}
case AUD_FMT_U16:
if (endianness) {
return AFMT_U16_BE;
}
else {
return AFMT_U16_LE;
}
default:
dolog ("Internal logic error: Bad audio format %d\n", fmt);
@@ -504,7 +498,7 @@ static void oss_fini_out (HWVoiceOut *hw)
}
}
else {
g_free (oss->pcm_buf);
qemu_free (oss->pcm_buf);
}
oss->pcm_buf = NULL;
}
@@ -522,7 +516,7 @@ static int oss_init_out (HWVoiceOut *hw, struct audsettings *as)
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.fmt = aud_to_ossfmt (as->fmt);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf.fragsize;
@@ -688,7 +682,7 @@ static int oss_init_in (HWVoiceIn *hw, struct audsettings *as)
oss->fd = -1;
req.fmt = aud_to_ossfmt (as->fmt, as->endianness);
req.fmt = aud_to_ossfmt (as->fmt);
req.freq = as->freq;
req.nchannels = as->nchannels;
req.fragsize = conf.fragsize;
@@ -736,8 +730,10 @@ static void oss_fini_in (HWVoiceIn *hw)
oss_anal_close (&oss->fd);
g_free(oss->pcm_buf);
if (oss->pcm_buf) {
qemu_free (oss->pcm_buf);
oss->pcm_buf = NULL;
}
}
static int oss_run_in (HWVoiceIn *hw)
@@ -782,7 +778,8 @@ static int oss_run_in (HWVoiceIn *hw)
hw->info.align + 1);
}
read_samples += nread >> hwshift;
hw->conv (hw->conv_buf + bufs[i].add, p, nread >> hwshift);
hw->conv (hw->conv_buf + bufs[i].add, p, nread >> hwshift,
&nominal_volume);
}
if (bufs[i].len - nread) {
@@ -847,10 +844,6 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
static void *oss_audio_init (void)
{
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return &conf;
}

View File

@@ -2,7 +2,8 @@
#include "qemu-common.h"
#include "audio.h"
#include <pulse/pulseaudio.h>
#include <pulse/simple.h>
#include <pulse/error.h>
#define AUDIO_CAP "pulseaudio"
#include "audio_int.h"
@@ -14,7 +15,7 @@ typedef struct {
int live;
int decr;
int rpos;
pa_stream *stream;
pa_simple *s;
void *pcm_buf;
struct audio_pt pt;
} PAVoiceOut;
@@ -25,24 +26,20 @@ typedef struct {
int dead;
int incr;
int wpos;
pa_stream *stream;
pa_simple *s;
void *pcm_buf;
struct audio_pt pt;
const void *read_data;
size_t read_index, read_length;
} PAVoiceIn;
typedef struct {
static struct {
int samples;
int divisor;
char *server;
char *sink;
char *source;
pa_threaded_mainloop *mainloop;
pa_context *context;
} paaudio;
static paaudio glob_paaudio = {
.samples = 4096,
} conf = {
.samples = 1024,
.divisor = 2,
};
static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
@@ -56,150 +53,13 @@ static void GCC_FMT_ATTR (2, 3) qpa_logerr (int err, const char *fmt, ...)
AUD_log (AUDIO_CAP, "Reason: %s\n", pa_strerror (err));
}
#ifndef PA_CONTEXT_IS_GOOD
static inline int PA_CONTEXT_IS_GOOD(pa_context_state_t x)
{
return
x == PA_CONTEXT_CONNECTING ||
x == PA_CONTEXT_AUTHORIZING ||
x == PA_CONTEXT_SETTING_NAME ||
x == PA_CONTEXT_READY;
}
#endif
#ifndef PA_STREAM_IS_GOOD
static inline int PA_STREAM_IS_GOOD(pa_stream_state_t x)
{
return
x == PA_STREAM_CREATING ||
x == PA_STREAM_READY;
}
#endif
#define CHECK_SUCCESS_GOTO(c, rerror, expression, label) \
do { \
if (!(expression)) { \
if (rerror) { \
*(rerror) = pa_context_errno ((c)->context); \
} \
goto label; \
} \
} while (0);
#define CHECK_DEAD_GOTO(c, stream, rerror, label) \
do { \
if (!(c)->context || !PA_CONTEXT_IS_GOOD (pa_context_get_state((c)->context)) || \
!(stream) || !PA_STREAM_IS_GOOD (pa_stream_get_state ((stream)))) { \
if (((c)->context && pa_context_get_state ((c)->context) == PA_CONTEXT_FAILED) || \
((stream) && pa_stream_get_state ((stream)) == PA_STREAM_FAILED)) { \
if (rerror) { \
*(rerror) = pa_context_errno ((c)->context); \
} \
} else { \
if (rerror) { \
*(rerror) = PA_ERR_BADSTATE; \
} \
} \
goto label; \
} \
} while (0);
static int qpa_simple_read (PAVoiceIn *p, void *data, size_t length, int *rerror)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
while (length > 0) {
size_t l;
while (!p->read_data) {
int r;
r = pa_stream_peek (p->stream, &p->read_data, &p->read_length);
CHECK_SUCCESS_GOTO (g, rerror, r == 0, unlock_and_fail);
if (!p->read_data) {
pa_threaded_mainloop_wait (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
} else {
p->read_index = 0;
}
}
l = p->read_length < length ? p->read_length : length;
memcpy (data, (const uint8_t *) p->read_data+p->read_index, l);
data = (uint8_t *) data + l;
length -= l;
p->read_index += l;
p->read_length -= l;
if (!p->read_length) {
int r;
r = pa_stream_drop (p->stream);
p->read_data = NULL;
p->read_length = 0;
p->read_index = 0;
CHECK_SUCCESS_GOTO (g, rerror, r == 0, unlock_and_fail);
}
}
pa_threaded_mainloop_unlock (g->mainloop);
return 0;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
return -1;
}
static int qpa_simple_write (PAVoiceOut *p, const void *data, size_t length, int *rerror)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_lock (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
while (length > 0) {
size_t l;
int r;
while (!(l = pa_stream_writable_size (p->stream))) {
pa_threaded_mainloop_wait (g->mainloop);
CHECK_DEAD_GOTO (g, p->stream, rerror, unlock_and_fail);
}
CHECK_SUCCESS_GOTO (g, rerror, l != (size_t) -1, unlock_and_fail);
if (l > length) {
l = length;
}
r = pa_stream_write (p->stream, data, l, NULL, 0LL, PA_SEEK_RELATIVE);
CHECK_SUCCESS_GOTO (g, rerror, r >= 0, unlock_and_fail);
data = (const uint8_t *) data + l;
length -= l;
}
pa_threaded_mainloop_unlock (g->mainloop);
return 0;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
return -1;
}
static void *qpa_thread_out (void *arg)
{
PAVoiceOut *pa = arg;
HWVoiceOut *hw = &pa->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
@@ -213,7 +73,7 @@ static void *qpa_thread_out (void *arg)
goto exit;
}
if (pa->live > 0) {
if (pa->live > threshold) {
break;
}
@@ -222,8 +82,8 @@ static void *qpa_thread_out (void *arg)
}
}
decr = to_mix = audio_MIN (pa->live, glob_paaudio.samples >> 2);
rpos = pa->rpos;
decr = to_mix = pa->live;
rpos = hw->rpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
return NULL;
@@ -236,7 +96,7 @@ static void *qpa_thread_out (void *arg)
hw->clip (pa->pcm_buf, src, chunk);
if (qpa_simple_write (pa, pa->pcm_buf,
if (pa_simple_write (pa->s, pa->pcm_buf,
chunk << hw->info.shift, &error) < 0) {
qpa_logerr (error, "pa_simple_write failed\n");
return NULL;
@@ -292,6 +152,9 @@ static void *qpa_thread_in (void *arg)
{
PAVoiceIn *pa = arg;
HWVoiceIn *hw = &pa->hw;
int threshold;
threshold = conf.divisor ? hw->samples / conf.divisor : 0;
if (audio_pt_lock (&pa->pt, AUDIO_FUNC)) {
return NULL;
@@ -305,7 +168,7 @@ static void *qpa_thread_in (void *arg)
goto exit;
}
if (pa->dead > 0) {
if (pa->dead > threshold) {
break;
}
@@ -314,8 +177,8 @@ static void *qpa_thread_in (void *arg)
}
}
incr = to_grab = audio_MIN (pa->dead, glob_paaudio.samples >> 2);
wpos = pa->wpos;
incr = to_grab = pa->dead;
wpos = hw->wpos;
if (audio_pt_unlock (&pa->pt, AUDIO_FUNC)) {
return NULL;
@@ -326,13 +189,13 @@ static void *qpa_thread_in (void *arg)
int chunk = audio_MIN (to_grab, hw->samples - wpos);
void *buf = advance (pa->pcm_buf, wpos);
if (qpa_simple_read (pa, buf,
if (pa_simple_read (pa->s, buf,
chunk << hw->info.shift, &error) < 0) {
qpa_logerr (error, "pa_simple_read failed\n");
return NULL;
}
hw->conv (hw->conv_buf + wpos, buf, chunk);
hw->conv (hw->conv_buf + wpos, buf, chunk, &nominal_volume);
wpos = (wpos + chunk) % hw->samples;
to_grab -= chunk;
}
@@ -428,117 +291,10 @@ static audfmt_e pa_to_audfmt (pa_sample_format_t fmt, int *endianness)
}
}
static void context_state_cb (pa_context *c, void *userdata)
{
paaudio *g = &glob_paaudio;
switch (pa_context_get_state(c)) {
case PA_CONTEXT_READY:
case PA_CONTEXT_TERMINATED:
case PA_CONTEXT_FAILED:
pa_threaded_mainloop_signal (g->mainloop, 0);
break;
case PA_CONTEXT_UNCONNECTED:
case PA_CONTEXT_CONNECTING:
case PA_CONTEXT_AUTHORIZING:
case PA_CONTEXT_SETTING_NAME:
break;
}
}
static void stream_state_cb (pa_stream *s, void * userdata)
{
paaudio *g = &glob_paaudio;
switch (pa_stream_get_state (s)) {
case PA_STREAM_READY:
case PA_STREAM_FAILED:
case PA_STREAM_TERMINATED:
pa_threaded_mainloop_signal (g->mainloop, 0);
break;
case PA_STREAM_UNCONNECTED:
case PA_STREAM_CREATING:
break;
}
}
static void stream_request_cb (pa_stream *s, size_t length, void *userdata)
{
paaudio *g = &glob_paaudio;
pa_threaded_mainloop_signal (g->mainloop, 0);
}
static pa_stream *qpa_simple_new (
const char *server,
const char *name,
pa_stream_direction_t dir,
const char *dev,
const char *stream_name,
const pa_sample_spec *ss,
const pa_channel_map *map,
const pa_buffer_attr *attr,
int *rerror)
{
paaudio *g = &glob_paaudio;
int r;
pa_stream *stream;
pa_threaded_mainloop_lock (g->mainloop);
stream = pa_stream_new (g->context, name, ss, map);
if (!stream) {
goto fail;
}
pa_stream_set_state_callback (stream, stream_state_cb, g);
pa_stream_set_read_callback (stream, stream_request_cb, g);
pa_stream_set_write_callback (stream, stream_request_cb, g);
if (dir == PA_STREAM_PLAYBACK) {
r = pa_stream_connect_playback (stream, dev, attr,
PA_STREAM_INTERPOLATE_TIMING
#ifdef PA_STREAM_ADJUST_LATENCY
|PA_STREAM_ADJUST_LATENCY
#endif
|PA_STREAM_AUTO_TIMING_UPDATE, NULL, NULL);
} else {
r = pa_stream_connect_record (stream, dev, attr,
PA_STREAM_INTERPOLATE_TIMING
#ifdef PA_STREAM_ADJUST_LATENCY
|PA_STREAM_ADJUST_LATENCY
#endif
|PA_STREAM_AUTO_TIMING_UPDATE);
}
if (r < 0) {
goto fail;
}
pa_threaded_mainloop_unlock (g->mainloop);
return stream;
fail:
pa_threaded_mainloop_unlock (g->mainloop);
if (stream) {
pa_stream_unref (stream);
}
*rerror = pa_context_errno (g->context);
return NULL;
}
static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
{
int error;
static pa_sample_spec ss;
static pa_buffer_attr ba;
struct audsettings obt_as = *as;
PAVoiceOut *pa = (PAVoiceOut *) hw;
@@ -546,37 +302,27 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
ss.channels = as->nchannels;
ss.rate = as->freq;
/*
* qemu audio tick runs at 100 Hz (by default), so processing
* data chunks worth 10 ms of sound should be a good fit.
*/
ba.tlength = pa_usec_to_bytes (10 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (5 * 1000, &ss);
ba.maxlength = -1;
ba.prebuf = -1;
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
glob_paaudio.server,
pa->s = pa_simple_new (
conf.server,
"qemu",
PA_STREAM_PLAYBACK,
glob_paaudio.sink,
conf.sink,
"pcm.playback",
&ss,
NULL, /* channel map */
&ba, /* buffering attributes */
NULL, /* buffering attributes */
&error
);
if (!pa->stream) {
if (!pa->s) {
qpa_logerr (error, "pa_simple_new for playback failed\n");
goto fail1;
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = glob_paaudio.samples;
hw->samples = conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->rpos = hw->rpos;
if (!pa->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
@@ -590,13 +336,11 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
fail3:
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
fail2:
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
pa_simple_free (pa->s);
pa->s = NULL;
fail1:
return -1;
}
@@ -614,26 +358,25 @@ static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
obt_as.fmt = pa_to_audfmt (ss.format, &obt_as.endianness);
pa->stream = qpa_simple_new (
glob_paaudio.server,
pa->s = pa_simple_new (
conf.server,
"qemu",
PA_STREAM_RECORD,
glob_paaudio.source,
conf.source,
"pcm.capture",
&ss,
NULL, /* channel map */
NULL, /* buffering attributes */
&error
);
if (!pa->stream) {
if (!pa->s) {
qpa_logerr (error, "pa_simple_new for capture failed\n");
goto fail1;
}
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = glob_paaudio.samples;
hw->samples = conf.samples;
pa->pcm_buf = audio_calloc (AUDIO_FUNC, hw->samples, 1 << hw->info.shift);
pa->wpos = hw->wpos;
if (!pa->pcm_buf) {
dolog ("Could not allocate buffer (%d bytes)\n",
hw->samples << hw->info.shift);
@@ -647,13 +390,11 @@ static int qpa_init_in (HWVoiceIn *hw, struct audsettings *as)
return 0;
fail3:
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
fail2:
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
}
pa_simple_free (pa->s);
pa->s = NULL;
fail1:
return -1;
}
@@ -668,13 +409,13 @@ static void qpa_fini_out (HWVoiceOut *hw)
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
if (pa->s) {
pa_simple_free (pa->s);
pa->s = NULL;
}
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
@@ -688,225 +429,70 @@ static void qpa_fini_in (HWVoiceIn *hw)
audio_pt_unlock_and_signal (&pa->pt, AUDIO_FUNC);
audio_pt_join (&pa->pt, &ret, AUDIO_FUNC);
if (pa->stream) {
pa_stream_unref (pa->stream);
pa->stream = NULL;
if (pa->s) {
pa_simple_free (pa->s);
pa->s = NULL;
}
audio_pt_fini (&pa->pt, AUDIO_FUNC);
g_free (pa->pcm_buf);
qemu_free (pa->pcm_buf);
pa->pcm_buf = NULL;
}
static int qpa_ctl_out (HWVoiceOut *hw, int cmd, ...)
{
PAVoiceOut *pa = (PAVoiceOut *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION /* macro is present in 0.9.16+ */
pa_cvolume_init (&v); /* function is present in 0.9.13+ */
#endif
switch (cmd) {
case VOICE_VOLUME:
{
SWVoiceOut *sw;
va_list ap;
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceOut *);
va_end (ap);
v.channels = 2;
v.values[0] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.l) / UINT32_MAX;
v.values[1] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.r) / UINT32_MAX;
pa_threaded_mainloop_lock (g->mainloop);
op = pa_context_set_sink_input_volume (g->context,
pa_stream_get_index (pa->stream),
&v, NULL, NULL);
if (!op)
qpa_logerr (pa_context_errno (g->context),
"set_sink_input_volume() failed\n");
else
pa_operation_unref (op);
op = pa_context_set_sink_input_mute (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_sink_input_mute() failed\n");
} else {
pa_operation_unref (op);
}
pa_threaded_mainloop_unlock (g->mainloop);
}
}
(void) hw;
(void) cmd;
return 0;
}
static int qpa_ctl_in (HWVoiceIn *hw, int cmd, ...)
{
PAVoiceIn *pa = (PAVoiceIn *) hw;
pa_operation *op;
pa_cvolume v;
paaudio *g = &glob_paaudio;
#ifdef PA_CHECK_VERSION
pa_cvolume_init (&v);
#endif
switch (cmd) {
case VOICE_VOLUME:
{
SWVoiceIn *sw;
va_list ap;
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceIn *);
va_end (ap);
v.channels = 2;
v.values[0] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.l) / UINT32_MAX;
v.values[1] = ((PA_VOLUME_NORM - PA_VOLUME_MUTED) * sw->vol.r) / UINT32_MAX;
pa_threaded_mainloop_lock (g->mainloop);
/* FIXME: use the upcoming "set_source_output_{volume,mute}" */
op = pa_context_set_source_volume_by_index (g->context,
pa_stream_get_device_index (pa->stream),
&v, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_volume() failed\n");
} else {
pa_operation_unref(op);
}
op = pa_context_set_source_mute_by_index (g->context,
pa_stream_get_index (pa->stream),
sw->vol.mute, NULL, NULL);
if (!op) {
qpa_logerr (pa_context_errno (g->context),
"set_source_mute() failed\n");
} else {
pa_operation_unref (op);
}
pa_threaded_mainloop_unlock (g->mainloop);
}
}
(void) hw;
(void) cmd;
return 0;
}
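
The two ctl handlers above (in the removed per-stream volume code) rescale QEMU's 32-bit per-channel software volume onto PulseAudio's volume range before calling the set-volume operations. A small sketch of that linear rescale, using hypothetical parameter names rather than the PulseAudio constants:

#include <stdint.h>

/* Map a volume in [0, UINT32_MAX] linearly onto [muted, norm], the way
 * the pa_cvolume values are computed in the hunks above. */
static uint32_t rescale_volume(uint32_t vol, uint32_t muted, uint32_t norm)
{
    return muted + (uint32_t) (((uint64_t) (norm - muted) * vol) / UINT32_MAX);
}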
/* common */
static void *qpa_audio_init (void)
{
paaudio *g = &glob_paaudio;
g->mainloop = pa_threaded_mainloop_new ();
if (!g->mainloop) {
goto fail;
}
g->context = pa_context_new (pa_threaded_mainloop_get_api (g->mainloop), glob_paaudio.server);
if (!g->context) {
goto fail;
}
pa_context_set_state_callback (g->context, context_state_cb, g);
if (pa_context_connect (g->context, glob_paaudio.server, 0, NULL) < 0) {
qpa_logerr (pa_context_errno (g->context),
"pa_context_connect() failed\n");
goto fail;
}
pa_threaded_mainloop_lock (g->mainloop);
if (pa_threaded_mainloop_start (g->mainloop) < 0) {
goto unlock_and_fail;
}
for (;;) {
pa_context_state_t state;
state = pa_context_get_state (g->context);
if (state == PA_CONTEXT_READY) {
break;
}
if (!PA_CONTEXT_IS_GOOD (state)) {
qpa_logerr (pa_context_errno (g->context),
"Wrong context state\n");
goto unlock_and_fail;
}
/* Wait until the context is ready */
pa_threaded_mainloop_wait (g->mainloop);
}
pa_threaded_mainloop_unlock (g->mainloop);
return &glob_paaudio;
unlock_and_fail:
pa_threaded_mainloop_unlock (g->mainloop);
fail:
AUD_log (AUDIO_CAP, "Failed to initialize PA context");
return NULL;
return &conf;
}
static void qpa_audio_fini (void *opaque)
{
paaudio *g = opaque;
if (g->mainloop) {
pa_threaded_mainloop_stop (g->mainloop);
}
if (g->context) {
pa_context_disconnect (g->context);
pa_context_unref (g->context);
g->context = NULL;
}
if (g->mainloop) {
pa_threaded_mainloop_free (g->mainloop);
}
g->mainloop = NULL;
(void) opaque;
}
struct audio_option qpa_options[] = {
{
.name = "SAMPLES",
.tag = AUD_OPT_INT,
.valp = &glob_paaudio.samples,
.valp = &conf.samples,
.descr = "buffer size in samples"
},
{
.name = "DIVISOR",
.tag = AUD_OPT_INT,
.valp = &conf.divisor,
.descr = "threshold divisor"
},
{
.name = "SERVER",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.server,
.valp = &conf.server,
.descr = "server address"
},
{
.name = "SINK",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.sink,
.valp = &conf.sink,
.descr = "sink device name"
},
{
.name = "SOURCE",
.tag = AUD_OPT_STR,
.valp = &glob_paaudio.source,
.valp = &conf.source,
.descr = "source device name"
},
{ /* End of list */ }
@@ -937,6 +523,5 @@ struct audio_driver pa_audio_driver = {
.max_voices_out = INT_MAX,
.max_voices_in = INT_MAX,
.voice_size_out = sizeof (PAVoiceOut),
.voice_size_in = sizeof (PAVoiceIn),
.ctl_caps = VOICE_VOLUME_CAP
.voice_size_in = sizeof (PAVoiceIn)
};

View File

@@ -32,6 +32,7 @@
#elif defined(__OpenBSD__) || defined(__FreeBSD__) || defined(__DragonFly__)
#include <pthread.h>
#endif
#include <signal.h>
#endif
#define AUDIO_CAP "sdl"
@@ -138,36 +139,36 @@ static int aud_to_sdlfmt (audfmt_e fmt)
}
}
static int sdl_to_audfmt(int sdlfmt, audfmt_e *fmt, int *endianness)
static int sdl_to_audfmt (int sdlfmt, audfmt_e *fmt, int *endianess)
{
switch (sdlfmt) {
case AUDIO_S8:
*endianness = 0;
*endianess = 0;
*fmt = AUD_FMT_S8;
break;
case AUDIO_U8:
*endianness = 0;
*endianess = 0;
*fmt = AUD_FMT_U8;
break;
case AUDIO_S16LSB:
*endianness = 0;
*endianess = 0;
*fmt = AUD_FMT_S16;
break;
case AUDIO_U16LSB:
*endianness = 0;
*endianess = 0;
*fmt = AUD_FMT_U16;
break;
case AUDIO_S16MSB:
*endianness = 1;
*endianess = 1;
*fmt = AUD_FMT_S16;
break;
case AUDIO_U16MSB:
*endianness = 1;
*endianess = 1;
*fmt = AUD_FMT_U16;
break;
@@ -183,20 +184,11 @@ static int sdl_open (SDL_AudioSpec *req, SDL_AudioSpec *obt)
{
int status;
#ifndef _WIN32
int err;
sigset_t new, old;
/* Make sure potential threads created by SDL don't hog signals. */
err = sigfillset (&new);
if (err) {
dolog ("sdl_open: sigfillset failed: %s\n", strerror (errno));
return -1;
}
err = pthread_sigmask (SIG_BLOCK, &new, &old);
if (err) {
dolog ("sdl_open: pthread_sigmask failed: %s\n", strerror (err));
return -1;
}
sigfillset (&new);
pthread_sigmask (SIG_BLOCK, &new, &old);
#endif
status = SDL_OpenAudio (req, obt);
@@ -205,14 +197,7 @@ static int sdl_open (SDL_AudioSpec *req, SDL_AudioSpec *obt)
}
#ifndef _WIN32
err = pthread_sigmask (SIG_SETMASK, &old, NULL);
if (err) {
dolog ("sdl_open: pthread_sigmask (restore) failed: %s\n",
strerror (errno));
/* We have failed to restore original signal mask, all bets are off,
so exit the process */
exit (EXIT_FAILURE);
}
pthread_sigmask (SIG_SETMASK, &old, NULL);
#endif
return status;
}
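
sdl_open() above wraps SDL_OpenAudio() in a block-all-signals / restore pair so that any threads SDL spawns inherit a fully blocked signal mask. A hedged, generic sketch of that pattern (hypothetical helper, not the QEMU code):

#include <pthread.h>
#include <signal.h>

/* Run fn(arg) with every signal blocked in this thread, then restore
 * the previous mask; threads created inside fn() inherit the block. */
static int call_with_signals_blocked(int (*fn)(void *), void *arg)
{
    sigset_t all, old;
    int ret;

    sigfillset(&all);
    if (pthread_sigmask(SIG_BLOCK, &all, &old)) {
        return -1;
    }
    ret = fn(arg);
    pthread_sigmask(SIG_SETMASK, &old, NULL);
    return ret;
}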
@@ -337,7 +322,7 @@ static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
SDLVoiceOut *sdl = (SDLVoiceOut *) hw;
SDLAudioState *s = &glob_sdl;
SDL_AudioSpec req, obt;
int endianness;
int endianess;
int err;
audfmt_e effective_fmt;
struct audsettings obt_as;
@@ -353,7 +338,7 @@ static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
return -1;
}
err = sdl_to_audfmt(obt.format, &effective_fmt, &endianness);
err = sdl_to_audfmt (obt.format, &effective_fmt, &endianess);
if (err) {
sdl_close (s);
return -1;
@@ -362,7 +347,7 @@ static int sdl_init_out (HWVoiceOut *hw, struct audsettings *as)
obt_as.freq = obt.freq;
obt_as.nchannels = obt.channels;
obt_as.fmt = effective_fmt;
obt_as.endianness = endianness;
obt_as.endianness = endianess;
audio_pcm_init_info (&hw->info, &obt_as);
hw->samples = obt.samples;

View File

@@ -1,409 +0,0 @@
/*
* Copyright (C) 2010 Red Hat, Inc.
*
* maintained by Gerd Hoffmann <kraxel@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 or
* (at your option) version 3 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "hw/hw.h"
#include "qemu/timer.h"
#include "ui/qemu-spice.h"
#define AUDIO_CAP "spice"
#include "audio.h"
#include "audio_int.h"
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
#define LINE_OUT_SAMPLES (480 * 4)
#else
#define LINE_OUT_SAMPLES (256 * 4)
#endif
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
#define LINE_IN_SAMPLES (480 * 4)
#else
#define LINE_IN_SAMPLES (256 * 4)
#endif
typedef struct SpiceRateCtl {
int64_t start_ticks;
int64_t bytes_sent;
} SpiceRateCtl;
typedef struct SpiceVoiceOut {
HWVoiceOut hw;
SpicePlaybackInstance sin;
SpiceRateCtl rate;
int active;
uint32_t *frame;
uint32_t *fpos;
uint32_t fsize;
} SpiceVoiceOut;
typedef struct SpiceVoiceIn {
HWVoiceIn hw;
SpiceRecordInstance sin;
SpiceRateCtl rate;
int active;
uint32_t samples[LINE_IN_SAMPLES];
} SpiceVoiceIn;
static const SpicePlaybackInterface playback_sif = {
.base.type = SPICE_INTERFACE_PLAYBACK,
.base.description = "playback",
.base.major_version = SPICE_INTERFACE_PLAYBACK_MAJOR,
.base.minor_version = SPICE_INTERFACE_PLAYBACK_MINOR,
};
static const SpiceRecordInterface record_sif = {
.base.type = SPICE_INTERFACE_RECORD,
.base.description = "record",
.base.major_version = SPICE_INTERFACE_RECORD_MAJOR,
.base.minor_version = SPICE_INTERFACE_RECORD_MINOR,
};
static void *spice_audio_init (void)
{
if (!using_spice) {
return NULL;
}
return &spice_audio_init;
}
static void spice_audio_fini (void *opaque)
{
/* nothing */
}
static void rate_start (SpiceRateCtl *rate)
{
memset (rate, 0, sizeof (*rate));
rate->start_ticks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
}
static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
{
int64_t now;
int64_t ticks;
int64_t bytes;
int64_t samples;
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
ticks = now - rate->start_ticks;
bytes = muldiv64 (ticks, info->bytes_per_second, get_ticks_per_sec ());
samples = (bytes - rate->bytes_sent) >> info->shift;
if (samples < 0 || samples > 65536) {
error_report("Resetting rate control (%" PRId64 " samples)", samples);
rate_start (rate);
samples = 0;
}
rate->bytes_sent += samples << info->shift;
return samples;
}
/* playback */
static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
settings.freq = spice_server_get_best_playback_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_PLAYBACK_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_PLAYBACK_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &settings);
hw->samples = LINE_OUT_SAMPLES;
out->active = 0;
out->sin.base.sif = &playback_sif.base;
qemu_spice_add_interface (&out->sin.base);
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
spice_server_set_playback_rate(&out->sin, settings.freq);
#endif
return 0;
}
static void line_out_fini (HWVoiceOut *hw)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
spice_server_remove_interface (&out->sin.base);
}
static int line_out_run (HWVoiceOut *hw, int live)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
int rpos, decr;
int samples;
if (!live) {
return 0;
}
decr = rate_get_samples (&hw->info, &out->rate);
decr = audio_MIN (live, decr);
samples = decr;
rpos = hw->rpos;
while (samples) {
int left_till_end_samples = hw->samples - rpos;
int len = audio_MIN (samples, left_till_end_samples);
if (!out->frame) {
spice_server_playback_get_buffer (&out->sin, &out->frame, &out->fsize);
out->fpos = out->frame;
}
if (out->frame) {
len = audio_MIN (len, out->fsize);
hw->clip (out->fpos, hw->mix_buf + rpos, len);
out->fsize -= len;
out->fpos += len;
if (out->fsize == 0) {
spice_server_playback_put_samples (&out->sin, out->frame);
out->frame = out->fpos = NULL;
}
}
rpos = (rpos + len) % hw->samples;
samples -= len;
}
hw->rpos = rpos;
return decr;
}
static int line_out_write (SWVoiceOut *sw, void *buf, int len)
{
return audio_pcm_sw_write (sw, buf, len);
}
static int line_out_ctl (HWVoiceOut *hw, int cmd, ...)
{
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
switch (cmd) {
case VOICE_ENABLE:
if (out->active) {
break;
}
out->active = 1;
rate_start (&out->rate);
spice_server_playback_start (&out->sin);
break;
case VOICE_DISABLE:
if (!out->active) {
break;
}
out->active = 0;
if (out->frame) {
memset (out->fpos, 0, out->fsize << 2);
spice_server_playback_put_samples (&out->sin, out->frame);
out->frame = out->fpos = NULL;
}
spice_server_playback_stop (&out->sin);
break;
case VOICE_VOLUME:
{
#if ((SPICE_INTERFACE_PLAYBACK_MAJOR >= 1) && (SPICE_INTERFACE_PLAYBACK_MINOR >= 2))
SWVoiceOut *sw;
va_list ap;
uint16_t vol[2];
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceOut *);
va_end (ap);
vol[0] = sw->vol.l / ((1ULL << 16) + 1);
vol[1] = sw->vol.r / ((1ULL << 16) + 1);
spice_server_playback_set_volume (&out->sin, 2, vol);
spice_server_playback_set_mute (&out->sin, sw->vol.mute);
#endif
break;
}
}
return 0;
}
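
The VOICE_VOLUME case above narrows the 32-bit software volume to the 16-bit value Spice expects by dividing by (1 << 16) + 1; that divisor maps the full 32-bit range exactly onto 16 bits, since 0xFFFFFFFF / 0x10001 == 0xFFFF. A tiny check of that identity:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint16_t v = (uint16_t) (UINT32_MAX / ((1ULL << 16) + 1));
    assert(v == 0xFFFF);    /* (2^32 - 1) / (2^16 + 1) == 2^16 - 1 */
    return 0;
}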
/* record */
static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
settings.freq = spice_server_get_best_record_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_RECORD_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_RECORD_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
audio_pcm_init_info (&hw->info, &settings);
hw->samples = LINE_IN_SAMPLES;
in->active = 0;
in->sin.base.sif = &record_sif.base;
qemu_spice_add_interface (&in->sin.base);
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
spice_server_set_record_rate(&in->sin, settings.freq);
#endif
return 0;
}
static void line_in_fini (HWVoiceIn *hw)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
spice_server_remove_interface (&in->sin.base);
}
static int line_in_run (HWVoiceIn *hw)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
int num_samples;
int ready;
int len[2];
uint64_t delta_samp;
const uint32_t *samples;
if (!(num_samples = hw->samples - audio_pcm_hw_get_live_in (hw))) {
return 0;
}
delta_samp = rate_get_samples (&hw->info, &in->rate);
num_samples = audio_MIN (num_samples, delta_samp);
ready = spice_server_record_get_samples (&in->sin, in->samples, num_samples);
samples = in->samples;
if (ready == 0) {
static const uint32_t silence[LINE_IN_SAMPLES];
samples = silence;
ready = LINE_IN_SAMPLES;
}
num_samples = audio_MIN (ready, num_samples);
if (hw->wpos + num_samples > hw->samples) {
len[0] = hw->samples - hw->wpos;
len[1] = num_samples - len[0];
} else {
len[0] = num_samples;
len[1] = 0;
}
hw->conv (hw->conv_buf + hw->wpos, samples, len[0]);
if (len[1]) {
hw->conv (hw->conv_buf, samples + len[0], len[1]);
}
hw->wpos = (hw->wpos + num_samples) % hw->samples;
return num_samples;
}
static int line_in_read (SWVoiceIn *sw, void *buf, int size)
{
return audio_pcm_sw_read (sw, buf, size);
}
static int line_in_ctl (HWVoiceIn *hw, int cmd, ...)
{
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
switch (cmd) {
case VOICE_ENABLE:
if (in->active) {
break;
}
in->active = 1;
rate_start (&in->rate);
spice_server_record_start (&in->sin);
break;
case VOICE_DISABLE:
if (!in->active) {
break;
}
in->active = 0;
spice_server_record_stop (&in->sin);
break;
case VOICE_VOLUME:
{
#if ((SPICE_INTERFACE_RECORD_MAJOR >= 2) && (SPICE_INTERFACE_RECORD_MINOR >= 2))
SWVoiceIn *sw;
va_list ap;
uint16_t vol[2];
va_start (ap, cmd);
sw = va_arg (ap, SWVoiceIn *);
va_end (ap);
vol[0] = sw->vol.l / ((1ULL << 16) + 1);
vol[1] = sw->vol.r / ((1ULL << 16) + 1);
spice_server_record_set_volume (&in->sin, 2, vol);
spice_server_record_set_mute (&in->sin, sw->vol.mute);
#endif
break;
}
}
return 0;
}
static struct audio_option audio_options[] = {
{ /* end of list */ },
};
static struct audio_pcm_ops audio_callbacks = {
.init_out = line_out_init,
.fini_out = line_out_fini,
.run_out = line_out_run,
.write = line_out_write,
.ctl_out = line_out_ctl,
.init_in = line_in_init,
.fini_in = line_in_fini,
.run_in = line_in_run,
.read = line_in_read,
.ctl_in = line_in_ctl,
};
struct audio_driver spice_audio_driver = {
.name = "spice",
.descr = "spice audio driver",
.options = audio_options,
.init = spice_audio_init,
.fini = spice_audio_fini,
.pcm_ops = &audio_callbacks,
.max_voices_out = 1,
.max_voices_in = 1,
.voice_size_out = sizeof (SpiceVoiceOut),
.voice_size_in = sizeof (SpiceVoiceIn),
#if ((SPICE_INTERFACE_PLAYBACK_MAJOR >= 1) && (SPICE_INTERFACE_PLAYBACK_MINOR >= 2))
.ctl_caps = VOICE_VOLUME_CAP
#endif
};
void qemu_spice_audio_init (void)
{
spice_audio_driver.can_be_default = 1;
}

View File

@@ -22,7 +22,7 @@
* THE SOFTWARE.
*/
#include "hw/hw.h"
#include "qemu/timer.h"
#include "qemu-timer.h"
#include "audio.h"
#define AUDIO_CAP "wav"
@@ -30,7 +30,7 @@
typedef struct WAVVoiceOut {
HWVoiceOut hw;
FILE *f;
QEMUFile *f;
int64_t old_ticks;
void *pcm_buf;
int total_samples;
@@ -52,7 +52,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
int rpos, decr, samples;
uint8_t *dst;
struct st_sample *src;
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t now = qemu_get_clock (vm_clock);
int64_t ticks = now - wav->old_ticks;
int64_t bytes =
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
@@ -76,10 +76,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
dst = advance (wav->pcm_buf, rpos << hw->info.shift);
hw->clip (dst, src, convert_samples);
if (fwrite (dst, convert_samples << hw->info.shift, 1, wav->f) != 1) {
dolog ("wav_run_out: fwrite of %d bytes failed\nReaons: %s\n",
convert_samples << hw->info.shift, strerror (errno));
}
qemu_put_buffer (wav->f, dst, convert_samples << hw->info.shift);
rpos = (rpos + convert_samples) % hw->samples;
samples -= convert_samples;
@@ -155,20 +152,16 @@ static int wav_init_out (HWVoiceOut *hw, struct audsettings *as)
le_store (hdr + 28, hw->info.freq << (bits16 + stereo), 4);
le_store (hdr + 32, 1 << (bits16 + stereo), 2);
wav->f = fopen (conf.wav_path, "wb");
wav->f = qemu_fopen (conf.wav_path, "wb");
if (!wav->f) {
dolog ("Failed to open wave file `%s'\nReason: %s\n",
conf.wav_path, strerror (errno));
g_free (wav->pcm_buf);
qemu_free (wav->pcm_buf);
wav->pcm_buf = NULL;
return -1;
}
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
dolog ("wav_init_out: failed to write header\nReason: %s\n",
strerror(errno));
return -1;
}
qemu_put_buffer (wav->f, hdr, sizeof (hdr));
return 0;
}
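
The header stores above write multi-byte WAV fields least-significant byte first; the byte-rate field at offset 28 is freq << (bits16 + stereo), i.e. the sample rate times the block align (1, 2 or 4 bytes per frame). A small sketch of such a little-endian store (hypothetical helper mirroring le_store):

#include <stdint.h>

/* Write the low `len` bytes of `v` at `p`, least significant first,
 * like the le_store() calls that fill the WAV header above. */
static void store_le(uint8_t *p, uint32_t v, int len)
{
    for (int i = 0; i < len; i++) {
        p[i] = (uint8_t) (v >> (8 * i));
    }
}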
@@ -187,35 +180,16 @@ static void wav_fini_out (HWVoiceOut *hw)
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
dolog ("wav_fini_out: fseek to rlen failed\nReason: %s\n",
strerror(errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
dolog ("wav_fini_out: failed to write rlen\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
dolog ("wav_fini_out: fseek to dlen failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 4, 1, wav->f) != 1) {
dolog ("wav_fini_out: failed to write dlen\nReaons: %s\n",
strerror (errno));
goto doclose;
}
qemu_fseek (wav->f, 4, SEEK_SET);
qemu_put_buffer (wav->f, rlen, 4);
doclose:
if (fclose (wav->f)) {
dolog ("wav_fini_out: fclose %p failed\nReason: %s\n",
wav->f, strerror (errno));
}
qemu_fseek (wav->f, 32, SEEK_CUR);
qemu_put_buffer (wav->f, dlen, 4);
qemu_fclose (wav->f);
wav->f = NULL;
g_free (wav->pcm_buf);
qemu_free (wav->pcm_buf);
wav->pcm_buf = NULL;
}

View File

@@ -1,9 +1,9 @@
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "monitor.h"
#include "audio.h"
typedef struct {
FILE *f;
QEMUFile *f;
int bytes;
char *path;
int freq;
@@ -35,49 +35,27 @@ static void wav_destroy (void *opaque)
uint8_t dlen[4];
uint32_t datalen = wav->bytes;
uint32_t rifflen = datalen + 36;
Monitor *mon = cur_mon;
if (wav->f) {
le_store (rlen, rifflen, 4);
le_store (dlen, datalen, 4);
if (fseek (wav->f, 4, SEEK_SET)) {
monitor_printf (mon, "wav_destroy: rlen fseek failed\nReason: %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (rlen, 4, 1, wav->f) != 1) {
monitor_printf (mon, "wav_destroy: rlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fseek (wav->f, 32, SEEK_CUR)) {
monitor_printf (mon, "wav_destroy: dlen fseek failed\nReason %s\n",
strerror (errno));
goto doclose;
}
if (fwrite (dlen, 1, 4, wav->f) != 4) {
monitor_printf (mon, "wav_destroy: dlen fwrite failed\nReason %s\n",
strerror (errno));
goto doclose;
}
doclose:
if (fclose (wav->f)) {
error_report("wav_destroy: fclose failed: %s", strerror(errno));
}
qemu_fseek (wav->f, 4, SEEK_SET);
qemu_put_buffer (wav->f, rlen, 4);
qemu_fseek (wav->f, 32, SEEK_CUR);
qemu_put_buffer (wav->f, dlen, 4);
qemu_fclose (wav->f);
}
g_free (wav->path);
qemu_free (wav->path);
}
static void wav_capture (void *opaque, void *buf, int size)
{
WAVState *wav = opaque;
if (fwrite (buf, size, 1, wav->f) != 1) {
monitor_printf (cur_mon, "wav_capture: fwrite error\nReason: %s",
strerror (errno));
}
qemu_put_buffer (wav->f, buf, size);
wav->bytes += size;
}
@@ -93,7 +71,7 @@ static void wav_capture_info (void *opaque)
WAVState *wav = opaque;
char *path = wav->path;
monitor_printf (cur_mon, "Capturing audio(%d,%d,%d) to %s: %d bytes\n",
monitor_printf(cur_mon, "Capturing audio(%d,%d,%d) to %s: %d bytes\n",
wav->freq, wav->bits, wav->nchannels,
path ? path : "<not available>", wav->bytes);
}
@@ -120,12 +98,12 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
CaptureVoiceOut *cap;
if (bits != 8 && bits != 16) {
monitor_printf (mon, "incorrect bit count %d, must be 8 or 16\n", bits);
monitor_printf(mon, "incorrect bit count %d, must be 8 or 16\n", bits);
return -1;
}
if (nchannels != 1 && nchannels != 2) {
monitor_printf (mon, "incorrect channel count %d, must be 1 or 2\n",
monitor_printf(mon, "incorrect channel count %d, must be 1 or 2\n",
nchannels);
return -1;
}
@@ -142,7 +120,7 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
ops.capture = wav_capture;
ops.destroy = wav_destroy;
wav = g_malloc0 (sizeof (*wav));
wav = qemu_mallocz (sizeof (*wav));
shift = bits16 + stereo;
hdr[34] = bits16 ? 0x10 : 0x08;
@@ -152,42 +130,32 @@ int wav_start_capture (CaptureState *s, const char *path, int freq,
le_store (hdr + 28, freq << shift, 4);
le_store (hdr + 32, 1 << shift, 2);
wav->f = fopen (path, "wb");
wav->f = qemu_fopen (path, "wb");
if (!wav->f) {
monitor_printf (mon, "Failed to open wave file `%s'\nReason: %s\n",
monitor_printf(mon, "Failed to open wave file `%s'\nReason: %s\n",
path, strerror (errno));
g_free (wav);
qemu_free (wav);
return -1;
}
wav->path = g_strdup (path);
wav->path = qemu_strdup (path);
wav->bits = bits;
wav->nchannels = nchannels;
wav->freq = freq;
if (fwrite (hdr, sizeof (hdr), 1, wav->f) != 1) {
monitor_printf (mon, "Failed to write header\nReason: %s\n",
strerror (errno));
goto error_free;
}
qemu_put_buffer (wav->f, hdr, sizeof (hdr));
cap = AUD_add_capture (&as, &ops, wav);
if (!cap) {
monitor_printf (mon, "Failed to add audio capture\n");
goto error_free;
monitor_printf(mon, "Failed to add audio capture\n");
qemu_free (wav->path);
qemu_fclose (wav->f);
qemu_free (wav);
return -1;
}
wav->cap = cap;
s->opaque = wav;
s->ops = wav_capture_ops;
return 0;
error_free:
g_free (wav->path);
if (fclose (wav->f)) {
monitor_printf (mon, "Failed to close wave file\nReason: %s\n",
strerror (errno));
}
g_free (wav);
return -1;
}

View File

@@ -1,7 +1,7 @@
/* public domain */
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "sysemu.h"
#include "audio.h"
#define AUDIO_CAP "winwave"
@@ -72,7 +72,7 @@ static void winwave_log_mmresult (MMRESULT mr)
break;
case MMSYSERR_NOMEM:
str = "Unable to allocate or lock memory";
str = "Unable to allocate or locl memory";
break;
case WAVERR_SYNC:
@@ -222,9 +222,9 @@ static int winwave_init_out (HWVoiceOut *hw, struct audsettings *as)
return 0;
err4:
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
qemu_free (wave->hdrs);
err2:
winwave_anal_close_out (wave);
err1:
@@ -310,10 +310,10 @@ static void winwave_fini_out (HWVoiceOut *hw)
wave->event = NULL;
}
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
qemu_free (wave->hdrs);
wave->hdrs = NULL;
}
@@ -349,15 +349,21 @@ static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
else {
hw->poll_mode = 0;
}
if (wave->paused) {
mr = waveOutRestart (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutRestart");
}
wave->paused = 0;
}
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutReset (wave->hwo);
mr = waveOutPause (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutReset");
winwave_logerr (mr, "waveOutPause");
}
else {
wave->paused = 1;
@@ -505,9 +511,9 @@ static int winwave_init_in (HWVoiceIn *hw, struct audsettings *as)
return 0;
err4:
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
err3:
g_free (wave->hdrs);
qemu_free (wave->hdrs);
err2:
winwave_anal_close_in (wave);
err1:
@@ -544,10 +550,10 @@ static void winwave_fini_in (HWVoiceIn *hw)
wave->event = NULL;
}
g_free (wave->pcm_buf);
qemu_free (wave->pcm_buf);
wave->pcm_buf = NULL;
g_free (wave->hdrs);
qemu_free (wave->hdrs);
wave->hdrs = NULL;
}
@@ -575,7 +581,8 @@ static int winwave_run_in (HWVoiceIn *hw)
int conv = audio_MIN (left, decr);
hw->conv (hw->conv_buf + hw->wpos,
advance (wave->pcm_buf, wave->rpos << hw->info.shift),
conv);
conv,
&nominal_volume);
wave->rpos = (wave->rpos + conv) % hw->samples;
hw->wpos = (hw->wpos + conv) % hw->samples;

View File

@@ -1,11 +0,0 @@
common-obj-y += rng.o rng-egd.o
common-obj-$(CONFIG_POSIX) += rng-random.o
common-obj-y += msmouse.o testdev.o
common-obj-$(CONFIG_BRLAPI) += baum.o
baum.o-cflags := $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o
common-obj-y += hostmem.o hostmem-ram.o
common-obj-$(CONFIG_LINUX) += hostmem-file.o

View File

@@ -1,134 +0,0 @@
/*
* QEMU Host Memory Backend for hugetlbfs
*
* Copyright (C) 2013-2014 Red Hat Inc
*
* Authors:
* Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu-common.h"
#include "sysemu/hostmem.h"
#include "sysemu/sysemu.h"
#include "qom/object_interfaces.h"
/* hostmem-file.c */
/**
* @TYPE_MEMORY_BACKEND_FILE:
* name of backend that uses mmap on a file descriptor
*/
#define TYPE_MEMORY_BACKEND_FILE "memory-backend-file"
#define MEMORY_BACKEND_FILE(obj) \
OBJECT_CHECK(HostMemoryBackendFile, (obj), TYPE_MEMORY_BACKEND_FILE)
typedef struct HostMemoryBackendFile HostMemoryBackendFile;
struct HostMemoryBackendFile {
HostMemoryBackend parent_obj;
bool share;
char *mem_path;
};
static void
file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(backend);
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
if (!fb->mem_path) {
error_setg(errp, "mem_path property not set");
return;
}
#ifndef CONFIG_LINUX
error_setg(errp, "-mem-path not supported on this host");
#else
if (!memory_region_size(&backend->mr)) {
backend->force_prealloc = mem_prealloc;
memory_region_init_ram_from_file(&backend->mr, OBJECT(backend),
object_get_canonical_path(OBJECT(backend)),
backend->size, fb->share,
fb->mem_path, errp);
}
#endif
}
static void
file_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = file_backend_memory_alloc;
}
static char *get_mem_path(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
return g_strdup(fb->mem_path);
}
static void set_mem_path(Object *o, const char *str, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
if (fb->mem_path) {
g_free(fb->mem_path);
}
fb->mem_path = g_strdup(str);
}
static bool file_memory_backend_get_share(Object *o, Error **errp)
{
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
return fb->share;
}
static void file_memory_backend_set_share(Object *o, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(o);
HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(o);
if (memory_region_size(&backend->mr)) {
error_setg(errp, "cannot change property value");
return;
}
fb->share = value;
}
static void
file_backend_instance_init(Object *o)
{
object_property_add_bool(o, "share",
file_memory_backend_get_share,
file_memory_backend_set_share, NULL);
object_property_add_str(o, "mem-path", get_mem_path,
set_mem_path, NULL);
}
static const TypeInfo file_backend_info = {
.name = TYPE_MEMORY_BACKEND_FILE,
.parent = TYPE_MEMORY_BACKEND,
.class_init = file_backend_class_init,
.instance_init = file_backend_instance_init,
.instance_size = sizeof(HostMemoryBackendFile),
};
static void register_types(void)
{
type_register_static(&file_backend_info);
}
type_init(register_types);

View File

@@ -1,53 +0,0 @@
/*
* QEMU Host Memory Backend
*
* Copyright (C) 2013-2014 Red Hat Inc
*
* Authors:
* Igor Mammedov <imammedo@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "sysemu/hostmem.h"
#include "qom/object_interfaces.h"
#define TYPE_MEMORY_BACKEND_RAM "memory-backend-ram"
static void
ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
{
char *path;
if (!backend->size) {
error_setg(errp, "can't create backend with size 0");
return;
}
path = object_get_canonical_path_component(OBJECT(backend));
memory_region_init_ram(&backend->mr, OBJECT(backend), path,
backend->size, errp);
g_free(path);
}
static void
ram_backend_class_init(ObjectClass *oc, void *data)
{
HostMemoryBackendClass *bc = MEMORY_BACKEND_CLASS(oc);
bc->alloc = ram_backend_memory_alloc;
}
static const TypeInfo ram_backend_info = {
.name = TYPE_MEMORY_BACKEND_RAM,
.parent = TYPE_MEMORY_BACKEND,
.class_init = ram_backend_class_init,
};
static void register_types(void)
{
type_register_static(&ram_backend_info);
}
type_init(register_types);

View File

@@ -1,365 +0,0 @@
/*
* QEMU Host Memory Backend
*
* Copyright (C) 2013-2014 Red Hat Inc
*
* Authors:
* Igor Mammedov <imammedo@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "sysemu/hostmem.h"
#include "qapi/visitor.h"
#include "qapi-types.h"
#include "qapi-visit.h"
#include "qapi/qmp/qerror.h"
#include "qemu/config-file.h"
#include "qom/object_interfaces.h"
#ifdef CONFIG_NUMA
#include <numaif.h>
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_DEFAULT != MPOL_DEFAULT);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_PREFERRED != MPOL_PREFERRED);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_BIND != MPOL_BIND);
QEMU_BUILD_BUG_ON(HOST_MEM_POLICY_INTERLEAVE != MPOL_INTERLEAVE);
#endif
static void
host_memory_backend_get_size(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint64_t value = backend->size;
visit_type_size(v, &value, name, errp);
}
static void
host_memory_backend_set_size(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
Error *local_err = NULL;
uint64_t value;
if (memory_region_size(&backend->mr)) {
error_setg(&local_err, "cannot change property value");
goto out;
}
visit_type_size(v, &value, name, &local_err);
if (local_err) {
goto out;
}
if (!value) {
error_setg(&local_err, "Property '%s.%s' doesn't take value '%"
PRIu64 "'", object_get_typename(obj), name, value);
goto out;
}
backend->size = value;
out:
error_propagate(errp, local_err);
}
static void
host_memory_backend_get_host_nodes(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *host_nodes = NULL;
uint16List **node = &host_nodes;
unsigned long value;
value = find_first_bit(backend->host_nodes, MAX_NODES);
if (value == MAX_NODES) {
return;
}
*node = g_malloc0(sizeof(**node));
(*node)->value = value;
node = &(*node)->next;
do {
value = find_next_bit(backend->host_nodes, MAX_NODES, value + 1);
if (value == MAX_NODES) {
break;
}
*node = g_malloc0(sizeof(**node));
(*node)->value = value;
node = &(*node)->next;
} while (true);
visit_type_uint16List(v, &host_nodes, name, errp);
}
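
The getter above walks the host_nodes bitmap with find_first_bit()/find_next_bit() and turns each set bit into a uint16List element. A generic sketch of the same idea over a plain 64-bit mask (illustrative only, not QEMU's bitmap API):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t host_nodes = (1ULL << 0) | (1ULL << 2) | (1ULL << 5);

    for (int node = 0; node < 64; node++) {
        if (host_nodes & (1ULL << node)) {
            printf("node %d\n", node);   /* prints nodes 0, 2 and 5 */
        }
    }
    return 0;
}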
static void
host_memory_backend_set_host_nodes(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
#ifdef CONFIG_NUMA
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
uint16List *l = NULL;
visit_type_uint16List(v, &l, name, errp);
while (l) {
bitmap_set(backend->host_nodes, l->value, 1);
l = l->next;
}
#else
error_setg(errp, "NUMA node binding are not supported by this QEMU");
#endif
}
static void
host_memory_backend_get_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
int policy = backend->policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
}
static void
host_memory_backend_set_policy(Object *obj, Visitor *v, void *opaque,
const char *name, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
int policy;
visit_type_enum(v, &policy, HostMemPolicy_lookup, NULL, name, errp);
backend->policy = policy;
#ifndef CONFIG_NUMA
if (policy != HOST_MEM_POLICY_DEFAULT) {
error_setg(errp, "NUMA policies are not supported by this QEMU");
}
#endif
}
static bool host_memory_backend_get_merge(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->merge;
}
static void host_memory_backend_set_merge(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!memory_region_size(&backend->mr)) {
backend->merge = value;
return;
}
if (value != backend->merge) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz,
value ? QEMU_MADV_MERGEABLE : QEMU_MADV_UNMERGEABLE);
backend->merge = value;
}
}
static bool host_memory_backend_get_dump(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->dump;
}
static void host_memory_backend_set_dump(Object *obj, bool value, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (!memory_region_size(&backend->mr)) {
backend->dump = value;
return;
}
if (value != backend->dump) {
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
qemu_madvise(ptr, sz,
value ? QEMU_MADV_DODUMP : QEMU_MADV_DONTDUMP);
backend->dump = value;
}
}
static bool host_memory_backend_get_prealloc(Object *obj, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
return backend->prealloc || backend->force_prealloc;
}
static void host_memory_backend_set_prealloc(Object *obj, bool value,
Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
if (backend->force_prealloc) {
if (value) {
error_setg(errp,
"remove -mem-prealloc to use the prealloc property");
return;
}
}
if (!memory_region_size(&backend->mr)) {
backend->prealloc = value;
return;
}
if (value && !backend->prealloc) {
int fd = memory_region_get_fd(&backend->mr);
void *ptr = memory_region_get_ram_ptr(&backend->mr);
uint64_t sz = memory_region_size(&backend->mr);
os_mem_prealloc(fd, ptr, sz);
backend->prealloc = true;
}
}
static void host_memory_backend_init(Object *obj)
{
HostMemoryBackend *backend = MEMORY_BACKEND(obj);
backend->merge = qemu_opt_get_bool(qemu_get_machine_opts(),
"mem-merge", true);
backend->dump = qemu_opt_get_bool(qemu_get_machine_opts(),
"dump-guest-core", true);
backend->prealloc = mem_prealloc;
object_property_add_bool(obj, "merge",
host_memory_backend_get_merge,
host_memory_backend_set_merge, NULL);
object_property_add_bool(obj, "dump",
host_memory_backend_get_dump,
host_memory_backend_set_dump, NULL);
object_property_add_bool(obj, "prealloc",
host_memory_backend_get_prealloc,
host_memory_backend_set_prealloc, NULL);
object_property_add(obj, "size", "int",
host_memory_backend_get_size,
host_memory_backend_set_size, NULL, NULL, NULL);
object_property_add(obj, "host-nodes", "int",
host_memory_backend_get_host_nodes,
host_memory_backend_set_host_nodes, NULL, NULL, NULL);
object_property_add(obj, "policy", "str",
host_memory_backend_get_policy,
host_memory_backend_set_policy, NULL, NULL, NULL);
}
MemoryRegion *
host_memory_backend_get_memory(HostMemoryBackend *backend, Error **errp)
{
return memory_region_size(&backend->mr) ? &backend->mr : NULL;
}
static void
host_memory_backend_memory_complete(UserCreatable *uc, Error **errp)
{
HostMemoryBackend *backend = MEMORY_BACKEND(uc);
HostMemoryBackendClass *bc = MEMORY_BACKEND_GET_CLASS(uc);
Error *local_err = NULL;
void *ptr;
uint64_t sz;
if (bc->alloc) {
bc->alloc(backend, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
ptr = memory_region_get_ram_ptr(&backend->mr);
sz = memory_region_size(&backend->mr);
if (backend->merge) {
qemu_madvise(ptr, sz, QEMU_MADV_MERGEABLE);
}
if (!backend->dump) {
qemu_madvise(ptr, sz, QEMU_MADV_DONTDUMP);
}
#ifdef CONFIG_NUMA
unsigned long lastbit = find_last_bit(backend->host_nodes, MAX_NODES);
/* lastbit == MAX_NODES means maxnode = 0 */
unsigned long maxnode = (lastbit + 1) % (MAX_NODES + 1);
/* ensure policy won't be ignored in case memory is preallocated
* before mbind(). note: MPOL_MF_STRICT is ignored on hugepages so
* this doesn't catch hugepage case. */
unsigned flags = MPOL_MF_STRICT | MPOL_MF_MOVE;
/* check for invalid host-nodes and policies and give more verbose
* error messages than mbind(). */
if (maxnode && backend->policy == MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be empty for policy default,"
" or you should explicitly specify a policy other"
" than default");
return;
} else if (maxnode == 0 && backend->policy != MPOL_DEFAULT) {
error_setg(errp, "host-nodes must be set for policy %s",
HostMemPolicy_lookup[backend->policy]);
return;
}
/* We can have up to MAX_NODES nodes, but we need to pass maxnode+1
* as argument to mbind() due to an old Linux bug (feature?) which
* cuts off the last specified node. This means backend->host_nodes
* must have MAX_NODES+1 bits available.
*/
assert(sizeof(backend->host_nodes) >=
BITS_TO_LONGS(MAX_NODES + 1) * sizeof(unsigned long));
assert(maxnode <= MAX_NODES);
if (mbind(ptr, sz, backend->policy,
maxnode ? backend->host_nodes : NULL, maxnode + 1, flags)) {
error_setg_errno(errp, errno,
"cannot bind memory to host NUMA nodes");
return;
}
#endif
/* Preallocate memory after the NUMA policy has been instantiated.
* This is necessary to guarantee memory is allocated with
* specified NUMA policy in place.
*/
if (backend->prealloc) {
os_mem_prealloc(memory_region_get_fd(&backend->mr), ptr, sz);
}
}
}
static void
host_memory_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = host_memory_backend_memory_complete;
}
static const TypeInfo host_memory_backend_info = {
.name = TYPE_MEMORY_BACKEND,
.parent = TYPE_OBJECT,
.abstract = true,
.class_size = sizeof(HostMemoryBackendClass),
.class_init = host_memory_backend_class_init,
.instance_size = sizeof(HostMemoryBackend),
.instance_init = host_memory_backend_init,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void register_types(void)
{
type_register_static(&host_memory_backend_info);
}
type_init(register_types);
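
The class above is abstract (TYPE_MEMORY_BACKEND cannot be instantiated directly); concrete subtypes supply bc->alloc and are normally created through the -object machinery. The following is a minimal usage sketch only: the "memory-backend-example" type name is invented for illustration, and user_creatable_complete() is assumed to be the helper that runs the ucc->complete hook registered above; neither is taken from this tree.
#include "sysemu/hostmem.h"           /* assumed header for HostMemoryBackend */
#include "qom/object_interfaces.h"
/* Sketch only: builds a backend through the QOM properties defined above
 * and fetches its MemoryRegion once completion has run. */
static MemoryRegion *example_create_backend(Error **errp)
{
    /* "memory-backend-example" stands in for a concrete subtype */
    Object *obj = object_new("memory-backend-example");
    HostMemoryBackend *backend = MEMORY_BACKEND(obj);
    object_property_set_int(obj, 512 * 1024 * 1024, "size", errp);
    object_property_set_bool(obj, true, "merge", errp);
    object_property_set_bool(obj, true, "prealloc", errp);
    /* assumed helper that invokes ucc->complete, i.e.
     * host_memory_backend_memory_complete() above */
    user_creatable_complete(obj, errp);
    return host_memory_backend_get_memory(backend, errp);
}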


@@ -1,232 +0,0 @@
/*
* QEMU Random Number Generator Backend
*
* Copyright IBM, Corp. 2012
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "sysemu/rng.h"
#include "sysemu/char.h"
#include "qapi/qmp/qerror.h"
#include "hw/qdev.h" /* just for DEFINE_PROP_CHR */
#define TYPE_RNG_EGD "rng-egd"
#define RNG_EGD(obj) OBJECT_CHECK(RngEgd, (obj), TYPE_RNG_EGD)
typedef struct RngEgd
{
RngBackend parent;
CharDriverState *chr;
char *chr_name;
GSList *requests;
} RngEgd;
typedef struct RngRequest
{
EntropyReceiveFunc *receive_entropy;
uint8_t *data;
void *opaque;
size_t offset;
size_t size;
} RngRequest;
static void rng_egd_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngEgd *s = RNG_EGD(b);
RngRequest *req;
req = g_malloc(sizeof(*req));
req->offset = 0;
req->size = size;
req->receive_entropy = receive_entropy;
req->opaque = opaque;
req->data = g_malloc(req->size);
while (size > 0) {
uint8_t header[2];
uint8_t len = MIN(size, 255);
/* synchronous entropy request */
header[0] = 0x02;
header[1] = len;
qemu_chr_fe_write(s->chr, header, sizeof(header));
size -= len;
}
s->requests = g_slist_append(s->requests, req);
}
static void rng_egd_free_request(RngRequest *req)
{
g_free(req->data);
g_free(req);
}
static int rng_egd_chr_can_read(void *opaque)
{
RngEgd *s = RNG_EGD(opaque);
GSList *i;
int size = 0;
for (i = s->requests; i; i = i->next) {
RngRequest *req = i->data;
size += req->size - req->offset;
}
return size;
}
static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
{
RngEgd *s = RNG_EGD(opaque);
size_t buf_offset = 0;
while (size > 0 && s->requests) {
RngRequest *req = s->requests->data;
int len = MIN(size, req->size - req->offset);
memcpy(req->data + req->offset, buf + buf_offset, len);
buf_offset += len;
req->offset += len;
size -= len;
if (req->offset == req->size) {
s->requests = g_slist_remove_link(s->requests, s->requests);
req->receive_entropy(req->opaque, req->data, req->size);
rng_egd_free_request(req);
}
}
}
static void rng_egd_free_requests(RngEgd *s)
{
GSList *i;
for (i = s->requests; i; i = i->next) {
rng_egd_free_request(i->data);
}
g_slist_free(s->requests);
s->requests = NULL;
}
static void rng_egd_cancel_requests(RngBackend *b)
{
RngEgd *s = RNG_EGD(b);
/* We simply delete the list of pending requests. If there is data in the
* queue waiting to be read, this is okay, because there will always be
* more data than we requested originally
*/
rng_egd_free_requests(s);
}
static void rng_egd_opened(RngBackend *b, Error **errp)
{
RngEgd *s = RNG_EGD(b);
if (s->chr_name == NULL) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"chardev", "a valid character device");
return;
}
s->chr = qemu_chr_find(s->chr_name);
if (s->chr == NULL) {
error_set(errp, QERR_DEVICE_NOT_FOUND, s->chr_name);
return;
}
if (qemu_chr_fe_claim(s->chr) != 0) {
error_set(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
/* FIXME we should resubmit pending requests when the CDS reconnects. */
qemu_chr_add_handlers(s->chr, rng_egd_chr_can_read, rng_egd_chr_read,
NULL, s);
}
static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
{
RngBackend *b = RNG_BACKEND(obj);
RngEgd *s = RNG_EGD(b);
if (b->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
}
}
static char *rng_egd_get_chardev(Object *obj, Error **errp)
{
RngEgd *s = RNG_EGD(obj);
if (s->chr && s->chr->label) {
return g_strdup(s->chr->label);
}
return NULL;
}
static void rng_egd_init(Object *obj)
{
object_property_add_str(obj, "chardev",
rng_egd_get_chardev, rng_egd_set_chardev,
NULL);
}
static void rng_egd_finalize(Object *obj)
{
RngEgd *s = RNG_EGD(obj);
if (s->chr) {
qemu_chr_add_handlers(s->chr, NULL, NULL, NULL, NULL);
qemu_chr_fe_release(s->chr);
}
g_free(s->chr_name);
rng_egd_free_requests(s);
}
static void rng_egd_class_init(ObjectClass *klass, void *data)
{
RngBackendClass *rbc = RNG_BACKEND_CLASS(klass);
rbc->request_entropy = rng_egd_request_entropy;
rbc->cancel_requests = rng_egd_cancel_requests;
rbc->opened = rng_egd_opened;
}
static const TypeInfo rng_egd_info = {
.name = TYPE_RNG_EGD,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RngEgd),
.class_init = rng_egd_class_init,
.instance_init = rng_egd_init,
.instance_finalize = rng_egd_finalize,
};
static void register_types(void)
{
type_register_static(&rng_egd_info);
}
type_init(register_types);


@@ -1,156 +0,0 @@
/*
* QEMU Random Number Generator Backend
*
* Copyright IBM, Corp. 2012
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "sysemu/rng-random.h"
#include "sysemu/rng.h"
#include "qapi/qmp/qerror.h"
#include "qemu/main-loop.h"
struct RndRandom
{
RngBackend parent;
int fd;
char *filename;
EntropyReceiveFunc *receive_func;
void *opaque;
size_t size;
};
/**
* A simple and incomplete backend to request entropy from /dev/random.
*
* This backend exposes an additional "filename" property that can be used to
* set the filename to use to open the backend.
*/
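/* Illustrative wiring for this backend (command-line syntax assumed here,
 * not taken from this file):
 *   -object rng-random,id=rng0,filename=/dev/urandom \
 *   -device virtio-rng-pci,rng=rng0
 */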
static void entropy_available(void *opaque)
{
RndRandom *s = RNG_RANDOM(opaque);
uint8_t buffer[s->size];
ssize_t len;
len = read(s->fd, buffer, s->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
s->receive_func(s->opaque, buffer, len);
s->receive_func = NULL;
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
}
static void rng_random_request_entropy(RngBackend *b, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RndRandom *s = RNG_RANDOM(b);
if (s->receive_func) {
s->receive_func(s->opaque, NULL, 0);
}
s->receive_func = receive_entropy;
s->opaque = opaque;
s->size = size;
qemu_set_fd_handler(s->fd, entropy_available, NULL, s);
}
static void rng_random_opened(RngBackend *b, Error **errp)
{
RndRandom *s = RNG_RANDOM(b);
if (s->filename == NULL) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
error_setg_file_open(errp, errno, s->filename);
}
}
}
static char *rng_random_get_filename(Object *obj, Error **errp)
{
RndRandom *s = RNG_RANDOM(obj);
return g_strdup(s->filename);
}
static void rng_random_set_filename(Object *obj, const char *filename,
Error **errp)
{
RngBackend *b = RNG_BACKEND(obj);
RndRandom *s = RNG_RANDOM(obj);
if (b->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
g_free(s->filename);
s->filename = g_strdup(filename);
}
static void rng_random_init(Object *obj)
{
RndRandom *s = RNG_RANDOM(obj);
object_property_add_str(obj, "filename",
rng_random_get_filename,
rng_random_set_filename,
NULL);
s->filename = g_strdup("/dev/random");
s->fd = -1;
}
static void rng_random_finalize(Object *obj)
{
RndRandom *s = RNG_RANDOM(obj);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
qemu_close(s->fd);
}
g_free(s->filename);
}
static void rng_random_class_init(ObjectClass *klass, void *data)
{
RngBackendClass *rbc = RNG_BACKEND_CLASS(klass);
rbc->request_entropy = rng_random_request_entropy;
rbc->opened = rng_random_opened;
}
static const TypeInfo rng_random_info = {
.name = TYPE_RNG_RANDOM,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RndRandom),
.class_init = rng_random_class_init,
.instance_init = rng_random_init,
.instance_finalize = rng_random_finalize,
};
static void register_types(void)
{
type_register_static(&rng_random_info);
}
type_init(register_types);


@@ -1,109 +0,0 @@
/*
* QEMU Random Number Generator Backend
*
* Copyright IBM, Corp. 2012
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "sysemu/rng.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
void rng_backend_request_entropy(RngBackend *s, size_t size,
EntropyReceiveFunc *receive_entropy,
void *opaque)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
if (k->request_entropy) {
k->request_entropy(s, size, receive_entropy, opaque);
}
}
void rng_backend_cancel_requests(RngBackend *s)
{
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
if (k->cancel_requests) {
k->cancel_requests(s);
}
}
static bool rng_backend_prop_get_opened(Object *obj, Error **errp)
{
RngBackend *s = RNG_BACKEND(obj);
return s->opened;
}
static void rng_backend_complete(UserCreatable *uc, Error **errp)
{
object_property_set_bool(OBJECT(uc), true, "opened", errp);
}
static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
RngBackend *s = RNG_BACKEND(obj);
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
s->opened = true;
}
static void rng_backend_init(Object *obj)
{
object_property_add_bool(obj, "opened",
rng_backend_prop_get_opened,
rng_backend_prop_set_opened,
NULL);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = rng_backend_complete;
}
static const TypeInfo rng_backend_info = {
.name = TYPE_RNG_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void register_types(void)
{
type_register_static(&rng_backend_info);
}
type_init(register_types);
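
The two backends above show the full pattern; as a further illustration, here is a minimal sketch of another RngBackend subtype that only wires up request_entropy. The type name and the constant fill are invented for the example and are not part of this tree.
#include "sysemu/rng.h"
#define TYPE_RNG_EXAMPLE "rng-example"   /* hypothetical type name */
typedef struct RngExample {
    RngBackend parent;
} RngExample;
static void rng_example_request_entropy(RngBackend *b, size_t size,
                                        EntropyReceiveFunc *receive_entropy,
                                        void *opaque)
{
    uint8_t *buf = g_malloc(size);
    /* Not random at all; this only demonstrates the completion contract:
     * hand exactly `size` bytes back through the caller's callback. */
    memset(buf, 0x5a, size);
    receive_entropy(opaque, buf, size);
    g_free(buf);
}
static void rng_example_class_init(ObjectClass *klass, void *data)
{
    RngBackendClass *rbc = RNG_BACKEND_CLASS(klass);
    rbc->request_entropy = rng_example_request_entropy;
}
static const TypeInfo rng_example_info = {
    .name          = TYPE_RNG_EXAMPLE,
    .parent        = TYPE_RNG_BACKEND,
    .instance_size = sizeof(RngExample),
    .class_init    = rng_example_class_init,
};
Registration would follow the same register_types()/type_init() pattern used in the files above.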


@@ -1,131 +0,0 @@
/*
* QEMU Char Device for testsuite control
*
* Copyright (c) 2014 Red Hat, Inc.
*
* Author: Paolo Bonzini <pbonzini@redhat.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "sysemu/char.h"
#define BUF_SIZE 32
typedef struct {
CharDriverState *chr;
uint8_t in_buf[BUF_SIZE];
int in_buf_used;
} TestdevCharState;
/* Try to interpret a whole incoming packet */
static int testdev_eat_packet(TestdevCharState *testdev)
{
const uint8_t *cur = testdev->in_buf;
int len = testdev->in_buf_used;
uint8_t c;
int arg;
#define EAT(c) do { \
if (!len--) { \
return 0; \
} \
c = *cur++; \
} while (0)
EAT(c);
while (isspace(c)) {
EAT(c);
}
arg = 0;
while (isdigit(c)) {
arg = arg * 10 + c - '0';
EAT(c);
}
while (isspace(c)) {
EAT(c);
}
switch (c) {
case 'q':
exit((arg << 1) | 1);
break;
default:
break;
}
return cur - testdev->in_buf;
}
/* The other end is writing some data. Store it and try to interpret */
static int testdev_write(CharDriverState *chr, const uint8_t *buf, int len)
{
TestdevCharState *testdev = chr->opaque;
int tocopy, eaten, orig_len = len;
while (len) {
/* Complete our buffer as much as possible */
tocopy = MIN(len, BUF_SIZE - testdev->in_buf_used);
memcpy(testdev->in_buf + testdev->in_buf_used, buf, tocopy);
testdev->in_buf_used += tocopy;
buf += tocopy;
len -= tocopy;
/* Interpret it as much as possible */
while (testdev->in_buf_used > 0 &&
(eaten = testdev_eat_packet(testdev)) > 0) {
memmove(testdev->in_buf, testdev->in_buf + eaten,
testdev->in_buf_used - eaten);
testdev->in_buf_used -= eaten;
}
}
return orig_len;
}
static void testdev_close(struct CharDriverState *chr)
{
TestdevCharState *testdev = chr->opaque;
g_free(testdev);
}
CharDriverState *chr_testdev_init(void)
{
TestdevCharState *testdev;
CharDriverState *chr;
testdev = g_malloc0(sizeof(TestdevCharState));
testdev->chr = chr = g_malloc0(sizeof(CharDriverState));
chr->opaque = testdev;
chr->chr_write = testdev_write;
chr->chr_close = testdev_close;
return chr;
}
static void register_types(void)
{
register_char_driver("testdev", CHARDEV_BACKEND_KIND_TESTDEV, NULL);
}
type_init(register_types);
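
For context, the packet format parsed by testdev_eat_packet() is optional whitespace, a decimal argument, then a command byte; 'q' makes QEMU exit with status (arg << 1) | 1. A guest-side sketch follows; the /dev/ttyS0 path is an assumption about how the chardev is wired and is not specified in this file.
#include <stdio.h>
int main(void)
{
    /* assumes the testdev chardev backs the guest's first serial port */
    FILE *port = fopen("/dev/ttyS0", "w");
    if (!port) {
        return 1;
    }
    /* "0 q" asks QEMU to quit; with arg == 0 the host-side exit status is 1 */
    fprintf(port, "0 q");
    fclose(port);
    return 0;
}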


@@ -1,193 +0,0 @@
/*
* QEMU TPM Backend
*
* Copyright IBM, Corp. 2013
*
* Authors:
* Stefan Berger <stefanb@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
* Based on backends/rng.c by Anthony Liguori
*/
#include "sysemu/tpm_backend.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/tpm.h"
#include "qemu/thread.h"
#include "sysemu/tpm_backend_int.h"
enum TpmType tpm_backend_get_type(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->type;
}
const char *tpm_backend_get_desc(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->desc();
}
void tpm_backend_destroy(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->destroy(s);
}
int tpm_backend_init(TPMBackend *s, TPMState *state,
TPMRecvDataCB *datacb)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->init(s, state, datacb);
}
int tpm_backend_startup_tpm(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->startup_tpm(s);
}
bool tpm_backend_had_startup_error(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->had_startup_error(s);
}
size_t tpm_backend_realloc_buffer(TPMBackend *s, TPMSizedBuffer *sb)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->realloc_buffer(sb);
}
void tpm_backend_deliver_request(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->deliver_request(s);
}
void tpm_backend_reset(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->reset(s);
}
void tpm_backend_cancel_cmd(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->cancel_cmd(s);
}
bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->get_tpm_established_flag(s);
}
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
return s->opened;
}
void tpm_backend_open(TPMBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
s->opened = true;
}
static void tpm_backend_instance_init(Object *obj)
{
object_property_add_bool(obj, "opened",
tpm_backend_prop_get_opened,
tpm_backend_prop_set_opened,
NULL);
}
void tpm_backend_thread_deliver_request(TPMBackendThread *tbt)
{
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_PROCESS_CMD, NULL);
}
void tpm_backend_thread_create(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tbt->pool = g_thread_pool_new(func, user_data, 1, TRUE, NULL);
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_INIT, NULL);
}
}
void tpm_backend_thread_end(TPMBackendThread *tbt)
{
if (tbt->pool) {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_END, NULL);
g_thread_pool_free(tbt->pool, FALSE, TRUE);
tbt->pool = NULL;
}
}
void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tpm_backend_thread_create(tbt, func, user_data);
} else {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_TPM_RESET,
NULL);
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(TPMBackend),
.instance_init = tpm_backend_instance_init,
.class_size = sizeof(TPMBackendClass),
.abstract = true,
};
static void register_types(void)
{
type_register_static(&tpm_backend_info);
}
type_init(register_types);
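
The thread-pool helpers above only queue command tokens; the concrete backend supplies the GFunc that interprets them. A sketch of such a worker, with the per-command handling left as placeholders rather than a real TPM implementation:
static void example_tpm_worker(gpointer data, gpointer user_data)
{
    /* user_data would carry the backend state passed to
     * tpm_backend_thread_create(); unused in this sketch */
    switch ((long)data) {
    case TPM_BACKEND_CMD_INIT:
        /* open the device or socket backing this TPM */
        break;
    case TPM_BACKEND_CMD_PROCESS_CMD:
        /* pick up the queued request, talk to the TPM, deliver the response */
        break;
    case TPM_BACKEND_CMD_TPM_RESET:
        /* reinitialize TPM state */
        break;
    case TPM_BACKEND_CMD_END:
        /* release resources before the pool is freed */
        break;
    }
}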

balloon.c

@@ -1,9 +1,7 @@
/*
* Generic Balloon handlers and management
* QEMU System Emulator
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (C) 2011 Red Hat, Inc.
* Copyright (C) 2011 Amit Shah <amit.shah@redhat.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -24,96 +22,125 @@
* THE SOFTWARE.
*/
#include "monitor/monitor.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qjson.h"
#include "sysemu.h"
#include "monitor.h"
#include "qjson.h"
#include "qint.h"
#include "cpu-common.h"
#include "kvm.h"
#include "balloon.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;
static void *balloon_opaque;
int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
QEMUBalloonStatus *stat_func, void *opaque)
static QEMUBalloonEvent *qemu_balloon_event;
void *qemu_balloon_event_opaque;
void qemu_add_balloon_handler(QEMUBalloonEvent *func, void *opaque)
{
if (balloon_event_fn || balloon_stat_fn || balloon_opaque) {
/* We're already registered one balloon handler. How many can
* a guest really have?
qemu_balloon_event = func;
qemu_balloon_event_opaque = opaque;
}
int qemu_balloon(ram_addr_t target, MonitorCompletion cb, void *opaque)
{
if (qemu_balloon_event) {
qemu_balloon_event(qemu_balloon_event_opaque, target, cb, opaque);
return 1;
} else {
return 0;
}
}
int qemu_balloon_status(MonitorCompletion cb, void *opaque)
{
if (qemu_balloon_event) {
qemu_balloon_event(qemu_balloon_event_opaque, 0, cb, opaque);
return 1;
} else {
return 0;
}
}
static void print_balloon_stat(const char *key, QObject *obj, void *opaque)
{
Monitor *mon = opaque;
if (strcmp(key, "actual"))
monitor_printf(mon, ",%s=%" PRId64, key,
qint_get_int(qobject_to_qint(obj)));
}
void monitor_print_balloon(Monitor *mon, const QObject *data)
{
QDict *qdict;
qdict = qobject_to_qdict(data);
if (!qdict_haskey(qdict, "actual"))
return;
monitor_printf(mon, "balloon: actual=%" PRId64,
qdict_get_int(qdict, "actual") >> 20);
qdict_iter(qdict, print_balloon_stat, mon);
monitor_printf(mon, "\n");
}
/**
* do_info_balloon(): Balloon information
*
* Make an asynchronous request for balloon info. When the request completes
* a QDict will be returned according to the following specification:
*
* - "actual": current balloon value in bytes
* The following fields may or may not be present:
* - "mem_swapped_in": Amount of memory swapped in (bytes)
* - "mem_swapped_out": Amount of memory swapped out (bytes)
* - "major_page_faults": Number of major faults
* - "minor_page_faults": Number of minor faults
* - "free_mem": Total amount of free and unused memory (bytes)
* - "total_mem": Total amount of available memory (bytes)
*
* Example:
*
* { "actual": 1073741824, "mem_swapped_in": 0, "mem_swapped_out": 0,
* "major_page_faults": 142, "minor_page_faults": 239245,
* "free_mem": 1014185984, "total_mem": 1044668416 }
*/
error_report("Another balloon device already registered");
int do_info_balloon(Monitor *mon, MonitorCompletion cb, void *opaque)
{
int ret;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
qerror_report(QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return -1;
}
balloon_event_fn = event_func;
balloon_stat_fn = stat_func;
balloon_opaque = opaque;
ret = qemu_balloon_status(cb, opaque);
if (!ret) {
qerror_report(QERR_DEVICE_NOT_ACTIVE, "balloon");
return -1;
}
return 0;
}
void qemu_remove_balloon_handler(void *opaque)
/**
* do_balloon(): Request VM to change its memory allocation
*/
int do_balloon(Monitor *mon, const QDict *params,
MonitorCompletion cb, void *opaque)
{
if (balloon_opaque != opaque) {
return;
}
balloon_event_fn = NULL;
balloon_stat_fn = NULL;
balloon_opaque = NULL;
}
static int qemu_balloon(ram_addr_t target)
{
if (!balloon_event_fn) {
return 0;
}
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
return 1;
}
static int qemu_balloon_status(BalloonInfo *info)
{
if (!balloon_stat_fn) {
return 0;
}
balloon_stat_fn(balloon_opaque, info);
return 1;
}
BalloonInfo *qmp_query_balloon(Error **errp)
{
BalloonInfo *info;
int ret;
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return NULL;
qerror_report(QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return -1;
}
info = g_malloc0(sizeof(*info));
if (qemu_balloon_status(info) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
qapi_free_BalloonInfo(info);
return NULL;
ret = qemu_balloon(qdict_get_int(params, "value"), cb, opaque);
if (ret == 0) {
qerror_report(QERR_DEVICE_NOT_ACTIVE, "balloon");
return -1;
}
return info;
}
void qmp_balloon(int64_t value, Error **errp)
{
if (kvm_enabled() && !kvm_has_sync_mmu()) {
error_set(errp, QERR_KVM_MISSING_CAP, "synchronous MMU", "balloon");
return;
}
if (value <= 0) {
error_set(errp, QERR_INVALID_PARAMETER_VALUE, "target", "a size");
return;
}
if (qemu_balloon(value) == 0) {
error_set(errp, QERR_DEVICE_NOT_ACTIVE, "balloon");
}
cb(opaque, NULL);
return 0;
}

balloon.h Normal file

@@ -0,0 +1,33 @@
/*
* Balloon
*
* Copyright IBM, Corp. 2008
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
*/
#ifndef _QEMU_BALLOON_H
#define _QEMU_BALLOON_H
#include "monitor.h"
typedef void (QEMUBalloonEvent)(void *opaque, ram_addr_t target,
MonitorCompletion cb, void *cb_data);
void qemu_add_balloon_handler(QEMUBalloonEvent *func, void *opaque);
int qemu_balloon(ram_addr_t target, MonitorCompletion cb, void *opaque);
int qemu_balloon_status(MonitorCompletion cb, void *opaque);
void monitor_print_balloon(Monitor *mon, const QObject *data);
int do_info_balloon(Monitor *mon, MonitorCompletion cb, void *opaque);
int do_balloon(Monitor *mon, const QDict *params,
MonitorCompletion cb, void *opaque);
#endif
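
A device that wants to service balloon requests registers a single QEMUBalloonEvent handler through this header; a target of 0 acts as a pure status query. A minimal sketch follows (the device state and the completion payload are placeholders; a real device would hand back a QDict carrying at least an "actual" key):
#include "balloon.h"
static void example_balloon_event(void *opaque, ram_addr_t target,
                                  MonitorCompletion cb, void *cb_data)
{
    if (target) {
        /* ask the guest to inflate/deflate towards `target` bytes */
    }
    /* both resize requests and status queries finish through the monitor
     * completion callback; NULL stands in for the stats payload here */
    cb(cb_data, NULL);
}
static void example_device_init(void *dev_state)
{
    qemu_add_balloon_handler(example_balloon_event, dev_state);
}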

block-migration.c Normal file

@@ -0,0 +1,647 @@
/*
* QEMU live block migration
*
* Copyright IBM, Corp. 2009
*
* Authors:
* Liran Schour <lirans@il.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
*/
#include "qemu-common.h"
#include "block_int.h"
#include "hw/hw.h"
#include "qemu-queue.h"
#include "qemu-timer.h"
#include "monitor.h"
#include "block-migration.h"
#include "migration.h"
#include <assert.h>
#define BLOCK_SIZE (BDRV_SECTORS_PER_DIRTY_CHUNK << BDRV_SECTOR_BITS)
#define BLK_MIG_FLAG_DEVICE_BLOCK 0x01
#define BLK_MIG_FLAG_EOS 0x02
#define BLK_MIG_FLAG_PROGRESS 0x04
#define MAX_IS_ALLOCATED_SEARCH 65536
//#define DEBUG_BLK_MIGRATION
#ifdef DEBUG_BLK_MIGRATION
#define DPRINTF(fmt, ...) \
do { printf("blk_migration: " fmt, ## __VA_ARGS__); } while (0)
#else
#define DPRINTF(fmt, ...) \
do { } while (0)
#endif
typedef struct BlkMigDevState {
BlockDriverState *bs;
int bulk_completed;
int shared_base;
int64_t cur_sector;
int64_t cur_dirty;
int64_t completed_sectors;
int64_t total_sectors;
int64_t dirty;
QSIMPLEQ_ENTRY(BlkMigDevState) entry;
} BlkMigDevState;
typedef struct BlkMigBlock {
uint8_t *buf;
BlkMigDevState *bmds;
int64_t sector;
struct iovec iov;
QEMUIOVector qiov;
BlockDriverAIOCB *aiocb;
int ret;
int64_t time;
QSIMPLEQ_ENTRY(BlkMigBlock) entry;
} BlkMigBlock;
typedef struct BlkMigState {
int blk_enable;
int shared_base;
QSIMPLEQ_HEAD(bmds_list, BlkMigDevState) bmds_list;
QSIMPLEQ_HEAD(blk_list, BlkMigBlock) blk_list;
int submitted;
int read_done;
int transferred;
int64_t total_sector_sum;
int prev_progress;
int bulk_completed;
long double total_time;
int reads;
} BlkMigState;
static BlkMigState block_mig_state;
static void blk_send(QEMUFile *f, BlkMigBlock * blk)
{
int len;
/* sector number and flags */
qemu_put_be64(f, (blk->sector << BDRV_SECTOR_BITS)
| BLK_MIG_FLAG_DEVICE_BLOCK);
/* device name */
len = strlen(blk->bmds->bs->device_name);
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *)blk->bmds->bs->device_name, len);
qemu_put_buffer(f, blk->buf, BLOCK_SIZE);
}
int blk_mig_active(void)
{
return !QSIMPLEQ_EMPTY(&block_mig_state.bmds_list);
}
uint64_t blk_mig_bytes_transferred(void)
{
BlkMigDevState *bmds;
uint64_t sum = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
sum += bmds->completed_sectors;
}
return sum << BDRV_SECTOR_BITS;
}
uint64_t blk_mig_bytes_remaining(void)
{
return blk_mig_bytes_total() - blk_mig_bytes_transferred();
}
uint64_t blk_mig_bytes_total(void)
{
BlkMigDevState *bmds;
uint64_t sum = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
sum += bmds->total_sectors;
}
return sum << BDRV_SECTOR_BITS;
}
static inline void add_avg_read_time(int64_t time)
{
block_mig_state.reads++;
block_mig_state.total_time += time;
}
static inline long double compute_read_bwidth(void)
{
assert(block_mig_state.total_time != 0);
return (block_mig_state.reads * BLOCK_SIZE)/ block_mig_state.total_time;
}
static void blk_mig_read_cb(void *opaque, int ret)
{
BlkMigBlock *blk = opaque;
blk->ret = ret;
blk->time = qemu_get_clock_ns(rt_clock) - blk->time;
add_avg_read_time(blk->time);
QSIMPLEQ_INSERT_TAIL(&block_mig_state.blk_list, blk, entry);
block_mig_state.submitted--;
block_mig_state.read_done++;
assert(block_mig_state.submitted >= 0);
}
static int mig_save_device_bulk(Monitor *mon, QEMUFile *f,
BlkMigDevState *bmds)
{
int64_t total_sectors = bmds->total_sectors;
int64_t cur_sector = bmds->cur_sector;
BlockDriverState *bs = bmds->bs;
BlkMigBlock *blk;
int nr_sectors;
if (bmds->shared_base) {
while (cur_sector < total_sectors &&
!bdrv_is_allocated(bs, cur_sector, MAX_IS_ALLOCATED_SEARCH,
&nr_sectors)) {
cur_sector += nr_sectors;
}
}
if (cur_sector >= total_sectors) {
bmds->cur_sector = bmds->completed_sectors = total_sectors;
return 1;
}
bmds->completed_sectors = cur_sector;
cur_sector &= ~((int64_t)BDRV_SECTORS_PER_DIRTY_CHUNK - 1);
/* we are going to transfer a full block even if it is not allocated */
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
if (total_sectors - cur_sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - cur_sector;
}
blk = qemu_malloc(sizeof(BlkMigBlock));
blk->buf = qemu_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = cur_sector;
blk->iov.iov_base = blk->buf;
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
blk->time = qemu_get_clock_ns(rt_clock);
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
if (!blk->aiocb) {
goto error;
}
block_mig_state.submitted++;
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
bmds->cur_sector = cur_sector + nr_sectors;
return (bmds->cur_sector >= total_sectors);
error:
monitor_printf(mon, "Error reading sector %" PRId64 "\n", cur_sector);
qemu_file_set_error(f);
qemu_free(blk->buf);
qemu_free(blk);
return 0;
}
static void set_dirty_tracking(int enable)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bdrv_set_dirty_tracking(bmds->bs, enable);
}
}
static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
{
Monitor *mon = opaque;
BlkMigDevState *bmds;
int64_t sectors;
if (!bdrv_is_read_only(bs)) {
sectors = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
if (sectors <= 0) {
return;
}
bmds = qemu_mallocz(sizeof(BlkMigDevState));
bmds->bs = bs;
bmds->bulk_completed = 0;
bmds->total_sectors = sectors;
bmds->completed_sectors = 0;
bmds->shared_base = block_mig_state.shared_base;
block_mig_state.total_sector_sum += sectors;
if (bmds->shared_base) {
monitor_printf(mon, "Start migration for %s with shared base "
"image\n",
bs->device_name);
} else {
monitor_printf(mon, "Start full migration for %s\n",
bs->device_name);
}
QSIMPLEQ_INSERT_TAIL(&block_mig_state.bmds_list, bmds, entry);
}
}
static void init_blk_migration(Monitor *mon, QEMUFile *f)
{
block_mig_state.submitted = 0;
block_mig_state.read_done = 0;
block_mig_state.transferred = 0;
block_mig_state.total_sector_sum = 0;
block_mig_state.prev_progress = -1;
block_mig_state.bulk_completed = 0;
block_mig_state.total_time = 0;
block_mig_state.reads = 0;
bdrv_iterate(init_blk_migration_it, mon);
}
static int blk_mig_save_bulked_block(Monitor *mon, QEMUFile *f)
{
int64_t completed_sector_sum = 0;
BlkMigDevState *bmds;
int progress;
int ret = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->bulk_completed == 0) {
if (mig_save_device_bulk(mon, f, bmds) == 1) {
/* completed bulk section for this device */
bmds->bulk_completed = 1;
}
completed_sector_sum += bmds->completed_sectors;
ret = 1;
break;
} else {
completed_sector_sum += bmds->completed_sectors;
}
}
progress = completed_sector_sum * 100 / block_mig_state.total_sector_sum;
if (progress != block_mig_state.prev_progress) {
block_mig_state.prev_progress = progress;
qemu_put_be64(f, (progress << BDRV_SECTOR_BITS)
| BLK_MIG_FLAG_PROGRESS);
monitor_printf(mon, "Completed %d %%\r", progress);
monitor_flush(mon);
}
return ret;
}
static void blk_mig_reset_dirty_cursor(void)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bmds->cur_dirty = 0;
}
}
static int mig_save_device_dirty(Monitor *mon, QEMUFile *f,
BlkMigDevState *bmds, int is_async)
{
BlkMigBlock *blk;
int64_t total_sectors = bmds->total_sectors;
int64_t sector;
int nr_sectors;
for (sector = bmds->cur_dirty; sector < bmds->total_sectors;) {
if (bdrv_get_dirty(bmds->bs, sector)) {
if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - sector;
} else {
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
blk = qemu_malloc(sizeof(BlkMigBlock));
blk->buf = qemu_malloc(BLOCK_SIZE);
blk->bmds = bmds;
blk->sector = sector;
if (is_async) {
blk->iov.iov_base = blk->buf;
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
blk->time = qemu_get_clock_ns(rt_clock);
blk->aiocb = bdrv_aio_readv(bmds->bs, sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
if (!blk->aiocb) {
goto error;
}
block_mig_state.submitted++;
} else {
if (bdrv_read(bmds->bs, sector, blk->buf,
nr_sectors) < 0) {
goto error;
}
blk_send(f, blk);
qemu_free(blk->buf);
qemu_free(blk);
}
bdrv_reset_dirty(bmds->bs, sector, nr_sectors);
break;
}
sector += BDRV_SECTORS_PER_DIRTY_CHUNK;
bmds->cur_dirty = sector;
}
return (bmds->cur_dirty >= bmds->total_sectors);
error:
monitor_printf(mon, "Error reading sector %" PRId64 "\n", sector);
qemu_file_set_error(f);
qemu_free(blk->buf);
qemu_free(blk);
return 0;
}
static int blk_mig_save_dirty_block(Monitor *mon, QEMUFile *f, int is_async)
{
BlkMigDevState *bmds;
int ret = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (mig_save_device_dirty(mon, f, bmds, is_async) == 0) {
ret = 1;
break;
}
}
return ret;
}
static void flush_blks(QEMUFile* f)
{
BlkMigBlock *blk;
DPRINTF("%s Enter submitted %d read_done %d transferred %d\n",
__FUNCTION__, block_mig_state.submitted, block_mig_state.read_done,
block_mig_state.transferred);
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
if (qemu_file_rate_limit(f)) {
break;
}
if (blk->ret < 0) {
qemu_file_set_error(f);
break;
}
blk_send(f, blk);
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
qemu_free(blk->buf);
qemu_free(blk);
block_mig_state.read_done--;
block_mig_state.transferred++;
assert(block_mig_state.read_done >= 0);
}
DPRINTF("%s Exit submitted %d read_done %d transferred %d\n", __FUNCTION__,
block_mig_state.submitted, block_mig_state.read_done,
block_mig_state.transferred);
}
static int64_t get_remaining_dirty(void)
{
BlkMigDevState *bmds;
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
dirty += bdrv_get_dirty_count(bmds->bs);
}
return dirty * BLOCK_SIZE;
}
static int is_stage2_completed(void)
{
int64_t remaining_dirty;
long double bwidth;
if (block_mig_state.bulk_completed == 1) {
remaining_dirty = get_remaining_dirty();
if (remaining_dirty == 0) {
return 1;
}
bwidth = compute_read_bwidth();
if ((remaining_dirty / bwidth) <=
migrate_max_downtime()) {
/* finish stage2 because we think that we can finish remaining work
below max_downtime */
return 1;
}
}
return 0;
}
static void blk_mig_cleanup(Monitor *mon)
{
BlkMigDevState *bmds;
BlkMigBlock *blk;
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
qemu_free(bmds);
}
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
qemu_free(blk->buf);
qemu_free(blk);
}
set_dirty_tracking(0);
monitor_printf(mon, "\n");
}
static int block_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
{
DPRINTF("Enter save live stage %d submitted %d transferred %d\n",
stage, block_mig_state.submitted, block_mig_state.transferred);
if (stage < 0) {
blk_mig_cleanup(mon);
return 0;
}
if (block_mig_state.blk_enable != 1) {
/* no need to migrate storage */
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return 1;
}
if (stage == 1) {
init_blk_migration(mon, f);
/* start tracking dirty blocks */
set_dirty_tracking(1);
}
flush_blks(f);
if (qemu_file_has_error(f)) {
blk_mig_cleanup(mon);
return 0;
}
blk_mig_reset_dirty_cursor();
if (stage == 2) {
/* control the rate of transfer */
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(mon, f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
} else {
if (blk_mig_save_dirty_block(mon, f, 1) == 0) {
/* no more dirty blocks */
break;
}
}
}
flush_blks(f);
if (qemu_file_has_error(f)) {
blk_mig_cleanup(mon);
return 0;
}
}
if (stage == 3) {
/* we know for sure that the bulk save is completed and
all async reads have completed */
assert(block_mig_state.submitted == 0);
while (blk_mig_save_dirty_block(mon, f, 0) != 0);
blk_mig_cleanup(mon);
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
if (qemu_file_has_error(f)) {
return 0;
}
monitor_printf(mon, "Block migration completed\n");
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return ((stage == 2) && is_stage2_completed());
}
static int block_load(QEMUFile *f, void *opaque, int version_id)
{
static int banner_printed;
int len, flags;
char device_name[256];
int64_t addr;
BlockDriverState *bs;
uint8_t *buf;
do {
addr = qemu_get_be64(f);
flags = addr & ~BDRV_SECTOR_MASK;
addr >>= BDRV_SECTOR_BITS;
if (flags & BLK_MIG_FLAG_DEVICE_BLOCK) {
int ret;
/* get device name */
len = qemu_get_byte(f);
qemu_get_buffer(f, (uint8_t *)device_name, len);
device_name[len] = '\0';
bs = bdrv_find(device_name);
if (!bs) {
fprintf(stderr, "Error unknown block device %s\n",
device_name);
return -EINVAL;
}
buf = qemu_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, BDRV_SECTORS_PER_DIRTY_CHUNK);
qemu_free(buf);
if (ret < 0) {
return ret;
}
} else if (flags & BLK_MIG_FLAG_PROGRESS) {
if (!banner_printed) {
printf("Receiving block device images\n");
banner_printed = 1;
}
printf("Completed %d %%%c", (int)addr,
(addr == 100) ? '\n' : '\r');
fflush(stdout);
} else if (!(flags & BLK_MIG_FLAG_EOS)) {
fprintf(stderr, "Unknown flags\n");
return -EINVAL;
}
if (qemu_file_has_error(f)) {
return -EIO;
}
} while (!(flags & BLK_MIG_FLAG_EOS));
return 0;
}
static void block_set_params(int blk_enable, int shared_base, void *opaque)
{
block_mig_state.blk_enable = blk_enable;
block_mig_state.shared_base = shared_base;
/* shared base means that blk_enable = 1 */
block_mig_state.blk_enable |= shared_base;
}
void blk_mig_init(void)
{
QSIMPLEQ_INIT(&block_mig_state.bmds_list);
QSIMPLEQ_INIT(&block_mig_state.blk_list);
register_savevm_live(NULL, "block", 0, 1, block_set_params,
block_save_live, NULL, block_load, &block_mig_state);
}

block.c
File diff suppressed because it is too large.

block.h Normal file

@@ -0,0 +1,288 @@
#ifndef BLOCK_H
#define BLOCK_H
#include "qemu-aio.h"
#include "qemu-common.h"
#include "qemu-option.h"
#include "qobject.h"
/* block.c */
typedef struct BlockDriver BlockDriver;
typedef struct BlockDriverInfo {
/* in bytes, 0 if irrelevant */
int cluster_size;
/* offset at which the VM state can be saved (0 if not possible) */
int64_t vm_state_offset;
} BlockDriverInfo;
typedef struct QEMUSnapshotInfo {
char id_str[128]; /* unique snapshot id */
/* the following fields are informative. They are not needed for
the consistency of the snapshot */
char name[256]; /* user chosen name */
uint32_t vm_state_size; /* VM state info size */
uint32_t date_sec; /* UTC date of the snapshot */
uint32_t date_nsec;
uint64_t vm_clock_nsec; /* VM clock relative to boot */
} QEMUSnapshotInfo;
#define BDRV_O_RDWR 0x0002
#define BDRV_O_SNAPSHOT 0x0008 /* open the file read only and save writes in a snapshot */
#define BDRV_O_NOCACHE 0x0020 /* do not use the host page cache */
#define BDRV_O_CACHE_WB 0x0040 /* use write-back caching */
#define BDRV_O_NATIVE_AIO 0x0080 /* use native AIO instead of the thread pool */
#define BDRV_O_NO_BACKING 0x0100 /* don't open the backing file */
#define BDRV_O_NO_FLUSH 0x0200 /* disable flushing on this disk */
#define BDRV_O_CACHE_MASK (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)
#define BDRV_SECTOR_BITS 9
#define BDRV_SECTOR_SIZE (1ULL << BDRV_SECTOR_BITS)
#define BDRV_SECTOR_MASK ~(BDRV_SECTOR_SIZE - 1)
typedef enum {
BLOCK_ERR_REPORT, BLOCK_ERR_IGNORE, BLOCK_ERR_STOP_ENOSPC,
BLOCK_ERR_STOP_ANY
} BlockErrorAction;
typedef enum {
BDRV_ACTION_REPORT, BDRV_ACTION_IGNORE, BDRV_ACTION_STOP
} BlockMonEventAction;
void bdrv_mon_event(const BlockDriverState *bdrv,
BlockMonEventAction action, int is_read);
void bdrv_info_print(Monitor *mon, const QObject *data);
void bdrv_info(Monitor *mon, QObject **ret_data);
void bdrv_stats_print(Monitor *mon, const QObject *data);
void bdrv_info_stats(Monitor *mon, QObject **ret_data);
void bdrv_init(void);
void bdrv_init_with_whitelist(void);
BlockDriver *bdrv_find_protocol(const char *filename);
BlockDriver *bdrv_find_format(const char *format_name);
BlockDriver *bdrv_find_whitelisted_format(const char *format_name);
int bdrv_create(BlockDriver *drv, const char* filename,
QEMUOptionParameter *options);
int bdrv_create_file(const char* filename, QEMUOptionParameter *options);
BlockDriverState *bdrv_new(const char *device_name);
void bdrv_delete(BlockDriverState *bs);
int bdrv_file_open(BlockDriverState **pbs, const char *filename, int flags);
int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
BlockDriver *drv);
void bdrv_close(BlockDriverState *bs);
int bdrv_attach(BlockDriverState *bs, DeviceState *qdev);
void bdrv_detach(BlockDriverState *bs, DeviceState *qdev);
DeviceState *bdrv_get_attached(BlockDriverState *bs);
int bdrv_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_pread(BlockDriverState *bs, int64_t offset,
void *buf, int count);
int bdrv_pwrite(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int bdrv_pwrite_sync(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int bdrv_write_sync(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_truncate(BlockDriverState *bs, int64_t offset);
int64_t bdrv_getlength(BlockDriverState *bs);
void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
void bdrv_guess_geometry(BlockDriverState *bs, int *pcyls, int *pheads, int *psecs);
int bdrv_commit(BlockDriverState *bs);
void bdrv_commit_all(void);
int bdrv_change_backing_file(BlockDriverState *bs,
const char *backing_file, const char *backing_fmt);
void bdrv_register(BlockDriver *bdrv);
typedef struct BdrvCheckResult {
int corruptions;
int leaks;
int check_errors;
} BdrvCheckResult;
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res);
/* async block I/O */
typedef struct BlockDriverAIOCB BlockDriverAIOCB;
typedef void BlockDriverCompletionFunc(void *opaque, int ret);
typedef void BlockDriverDirtyHandler(BlockDriverState *bs, int64_t sector,
int sector_num);
BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *iov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
QEMUIOVector *iov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque);
BlockDriverAIOCB *bdrv_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
void bdrv_aio_cancel(BlockDriverAIOCB *acb);
typedef struct BlockRequest {
/* Fields to be filled by multiwrite caller */
int64_t sector;
int nb_sectors;
QEMUIOVector *qiov;
BlockDriverCompletionFunc *cb;
void *opaque;
/* Filled by multiwrite implementation */
int error;
} BlockRequest;
int bdrv_aio_multiwrite(BlockDriverState *bs, BlockRequest *reqs,
int num_reqs);
/* sg packet commands */
int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf);
BlockDriverAIOCB *bdrv_aio_ioctl(BlockDriverState *bs,
unsigned long int req, void *buf,
BlockDriverCompletionFunc *cb, void *opaque);
/* Ensure contents are flushed to disk. */
void bdrv_flush(BlockDriverState *bs);
void bdrv_flush_all(void);
void bdrv_close_all(void);
int bdrv_has_zero_init(BlockDriverState *bs);
int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
int *pnum);
#define BDRV_TYPE_HD 0
#define BDRV_TYPE_CDROM 1
#define BDRV_TYPE_FLOPPY 2
#define BIOS_ATA_TRANSLATION_AUTO 0
#define BIOS_ATA_TRANSLATION_NONE 1
#define BIOS_ATA_TRANSLATION_LBA 2
#define BIOS_ATA_TRANSLATION_LARGE 3
#define BIOS_ATA_TRANSLATION_RECHS 4
void bdrv_set_geometry_hint(BlockDriverState *bs,
int cyls, int heads, int secs);
void bdrv_set_type_hint(BlockDriverState *bs, int type);
void bdrv_set_translation_hint(BlockDriverState *bs, int translation);
void bdrv_get_geometry_hint(BlockDriverState *bs,
int *pcyls, int *pheads, int *psecs);
int bdrv_get_type_hint(BlockDriverState *bs);
int bdrv_get_translation_hint(BlockDriverState *bs);
void bdrv_set_on_error(BlockDriverState *bs, BlockErrorAction on_read_error,
BlockErrorAction on_write_error);
BlockErrorAction bdrv_get_on_error(BlockDriverState *bs, int is_read);
void bdrv_set_removable(BlockDriverState *bs, int removable);
int bdrv_is_removable(BlockDriverState *bs);
int bdrv_is_read_only(BlockDriverState *bs);
int bdrv_is_sg(BlockDriverState *bs);
int bdrv_enable_write_cache(BlockDriverState *bs);
int bdrv_is_inserted(BlockDriverState *bs);
int bdrv_media_changed(BlockDriverState *bs);
int bdrv_is_locked(BlockDriverState *bs);
void bdrv_set_locked(BlockDriverState *bs, int locked);
int bdrv_eject(BlockDriverState *bs, int eject_flag);
void bdrv_set_change_cb(BlockDriverState *bs,
void (*change_cb)(void *opaque), void *opaque);
void bdrv_get_format(BlockDriverState *bs, char *buf, int buf_size);
BlockDriverState *bdrv_find(const char *name);
BlockDriverState *bdrv_next(BlockDriverState *bs);
void bdrv_iterate(void (*it)(void *opaque, BlockDriverState *bs),
void *opaque);
int bdrv_is_encrypted(BlockDriverState *bs);
int bdrv_key_required(BlockDriverState *bs);
int bdrv_set_key(BlockDriverState *bs, const char *key);
int bdrv_query_missing_keys(void);
void bdrv_iterate_format(void (*it)(void *opaque, const char *name),
void *opaque);
const char *bdrv_get_device_name(BlockDriverState *bs);
int bdrv_write_compressed(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
const char *bdrv_get_encrypted_filename(BlockDriverState *bs);
void bdrv_get_backing_filename(BlockDriverState *bs,
char *filename, int filename_size);
int bdrv_can_snapshot(BlockDriverState *bs);
int bdrv_is_snapshot(BlockDriverState *bs);
BlockDriverState *bdrv_snapshots(void);
int bdrv_snapshot_create(BlockDriverState *bs,
QEMUSnapshotInfo *sn_info);
int bdrv_snapshot_goto(BlockDriverState *bs,
const char *snapshot_id);
int bdrv_snapshot_delete(BlockDriverState *bs, const char *snapshot_id);
int bdrv_snapshot_list(BlockDriverState *bs,
QEMUSnapshotInfo **psn_info);
char *bdrv_snapshot_dump(char *buf, int buf_size, QEMUSnapshotInfo *sn);
char *get_human_readable_size(char *buf, int buf_size, int64_t size);
int path_is_absolute(const char *path);
void path_combine(char *dest, int dest_size,
const char *base_path,
const char *filename);
int bdrv_save_vmstate(BlockDriverState *bs, const uint8_t *buf,
int64_t pos, int size);
int bdrv_load_vmstate(BlockDriverState *bs, uint8_t *buf,
int64_t pos, int size);
#define BDRV_SECTORS_PER_DIRTY_CHUNK 2048
void bdrv_set_dirty_tracking(BlockDriverState *bs, int enable);
int bdrv_get_dirty(BlockDriverState *bs, int64_t sector);
void bdrv_reset_dirty(BlockDriverState *bs, int64_t cur_sector,
int nr_sectors);
int64_t bdrv_get_dirty_count(BlockDriverState *bs);
typedef enum {
BLKDBG_L1_UPDATE,
BLKDBG_L1_GROW_ALLOC_TABLE,
BLKDBG_L1_GROW_WRITE_TABLE,
BLKDBG_L1_GROW_ACTIVATE_TABLE,
BLKDBG_L2_LOAD,
BLKDBG_L2_UPDATE,
BLKDBG_L2_UPDATE_COMPRESSED,
BLKDBG_L2_ALLOC_COW_READ,
BLKDBG_L2_ALLOC_WRITE,
BLKDBG_READ,
BLKDBG_READ_AIO,
BLKDBG_READ_BACKING,
BLKDBG_READ_BACKING_AIO,
BLKDBG_READ_COMPRESSED,
BLKDBG_WRITE_AIO,
BLKDBG_WRITE_COMPRESSED,
BLKDBG_VMSTATE_LOAD,
BLKDBG_VMSTATE_SAVE,
BLKDBG_COW_READ,
BLKDBG_COW_WRITE,
BLKDBG_REFTABLE_LOAD,
BLKDBG_REFTABLE_GROW,
BLKDBG_REFBLOCK_LOAD,
BLKDBG_REFBLOCK_UPDATE,
BLKDBG_REFBLOCK_UPDATE_PART,
BLKDBG_REFBLOCK_ALLOC,
BLKDBG_REFBLOCK_ALLOC_HOOKUP,
BLKDBG_REFBLOCK_ALLOC_WRITE,
BLKDBG_REFBLOCK_ALLOC_WRITE_BLOCKS,
BLKDBG_REFBLOCK_ALLOC_WRITE_TABLE,
BLKDBG_REFBLOCK_ALLOC_SWITCH_TABLE,
BLKDBG_CLUSTER_ALLOC,
BLKDBG_CLUSTER_ALLOC_BYTES,
BLKDBG_CLUSTER_FREE,
BLKDBG_EVENT_MAX,
} BlkDebugEvent;
#define BLKDBG_EVENT(bs, evt) bdrv_debug_event(bs, evt)
void bdrv_debug_event(BlockDriverState *bs, BlkDebugEvent event);
#endif
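
The asynchronous I/O entry points declared here follow a submit-plus-completion-callback pattern; block-migration.c above is one user. A self-contained sketch of a single-sector read, with the error handling left as a placeholder:
#include "qemu-common.h"
#include "block.h"
typedef struct ExampleRead {
    uint8_t *buf;
    struct iovec iov;
    QEMUIOVector qiov;
} ExampleRead;
static void example_read_cb(void *opaque, int ret)
{
    ExampleRead *er = opaque;
    if (ret < 0) {
        /* report or retry; placeholder */
    }
    qemu_free(er->buf);
    qemu_free(er);
}
static void example_submit_read(BlockDriverState *bs, int64_t sector)
{
    ExampleRead *er = qemu_malloc(sizeof(*er));
    er->buf = qemu_malloc(BDRV_SECTOR_SIZE);
    er->iov.iov_base = er->buf;
    er->iov.iov_len  = BDRV_SECTOR_SIZE;
    qemu_iovec_init_external(&er->qiov, &er->iov, 1);
    bdrv_aio_readv(bs, sector, &er->qiov, 1, example_read_cb, er);
}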


@@ -1,40 +0,0 @@
block-obj-y += raw_bsd.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += block-backend.o snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
block-obj-y += null.o mirror.o
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
block-obj-$(CONFIG_ARCHIPELAGO) += archipelago.o
block-obj-$(CONFIG_LIBSSH2) += ssh.o
block-obj-y += accounting.o
common-obj-y += stream.o
common-obj-y += commit.o
common-obj-y += backup.o
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
curl.o-libs := $(CURL_LIBS)
rbd.o-cflags := $(RBD_CFLAGS)
rbd.o-libs := $(RBD_LIBS)
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
archipelago.o-libs := $(ARCHIPELAGO_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio


@@ -1,56 +0,0 @@
/*
* QEMU System Emulator block accounting
*
* Copyright (c) 2011 Christoph Hellwig
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "block/accounting.h"
#include "block/block_int.h"
#include "qemu/timer.h"
void block_acct_start(BlockAcctStats *stats, BlockAcctCookie *cookie,
int64_t bytes, enum BlockAcctType type)
{
assert(type < BLOCK_MAX_IOTYPE);
cookie->bytes = bytes;
cookie->start_time_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
cookie->type = type;
}
void block_acct_done(BlockAcctStats *stats, BlockAcctCookie *cookie)
{
assert(cookie->type < BLOCK_MAX_IOTYPE);
stats->nr_bytes[cookie->type] += cookie->bytes;
stats->nr_ops[cookie->type]++;
stats->total_time_ns[cookie->type] +=
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - cookie->start_time_ns;
}
void block_acct_highest_sector(BlockAcctStats *stats, int64_t sector_num,
unsigned int nb_sectors)
{
if (stats->wr_highest_sector < sector_num + nb_sectors - 1) {
stats->wr_highest_sector = sector_num + nb_sectors - 1;
}
}

File diff suppressed because it is too large.


@@ -1,437 +0,0 @@
/*
* QEMU backup
*
* Copyright (C) 2013 Proxmox Server Solutions
*
* Authors:
* Dietmar Maurer (dietmar@proxmox.com)
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include "trace.h"
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qemu/ratelimit.h"
#define BACKUP_CLUSTER_BITS 16
#define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)
#define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CowRequest {
int64_t start;
int64_t end;
QLIST_ENTRY(CowRequest) list;
CoQueue wait_queue; /* coroutines blocked on this request */
} CowRequest;
typedef struct BackupBlockJob {
BlockJob common;
BlockDriverState *target;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
BlockdevOnError on_target_error;
CoRwlock flush_rwlock;
uint64_t sectors_read;
HBitmap *bitmap;
QLIST_HEAD(, CowRequest) inflight_reqs;
} BackupBlockJob;
/* See if in-flight requests overlap and wait for them to complete */
static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
int64_t start,
int64_t end)
{
CowRequest *req;
bool retry;
do {
retry = false;
QLIST_FOREACH(req, &job->inflight_reqs, list) {
if (end > req->start && start < req->end) {
qemu_co_queue_wait(&req->wait_queue);
retry = true;
break;
}
}
} while (retry);
}
/* Keep track of an in-flight request */
static void cow_request_begin(CowRequest *req, BackupBlockJob *job,
int64_t start, int64_t end)
{
req->start = start;
req->end = end;
qemu_co_queue_init(&req->wait_queue);
QLIST_INSERT_HEAD(&job->inflight_reqs, req, list);
}
/* Forget about a completed request */
static void cow_request_end(CowRequest *req)
{
QLIST_REMOVE(req, list);
qemu_co_queue_restart_all(&req->wait_queue);
}
static int coroutine_fn backup_do_cow(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
bool *error_is_read)
{
BackupBlockJob *job = (BackupBlockJob *)bs->job;
CowRequest cow_request;
struct iovec iov;
QEMUIOVector bounce_qiov;
void *bounce_buffer = NULL;
int ret = 0;
int64_t start, end;
int n;
qemu_co_rwlock_rdlock(&job->flush_rwlock);
start = sector_num / BACKUP_SECTORS_PER_CLUSTER;
end = DIV_ROUND_UP(sector_num + nb_sectors, BACKUP_SECTORS_PER_CLUSTER);
trace_backup_do_cow_enter(job, start, sector_num, nb_sectors);
wait_for_overlapping_requests(job, start, end);
cow_request_begin(&cow_request, job, start, end);
for (; start < end; start++) {
if (hbitmap_get(job->bitmap, start)) {
trace_backup_do_cow_skip(job, start);
continue; /* already copied */
}
trace_backup_do_cow_process(job, start);
n = MIN(BACKUP_SECTORS_PER_CLUSTER,
job->common.len / BDRV_SECTOR_SIZE -
start * BACKUP_SECTORS_PER_CLUSTER);
if (!bounce_buffer) {
bounce_buffer = qemu_blockalign(bs, BACKUP_CLUSTER_SIZE);
}
iov.iov_base = bounce_buffer;
iov.iov_len = n * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&bounce_qiov, &iov, 1);
ret = bdrv_co_readv(bs, start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
if (ret < 0) {
trace_backup_do_cow_read_fail(job, start, ret);
if (error_is_read) {
*error_is_read = true;
}
goto out;
}
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
} else {
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
}
if (ret < 0) {
trace_backup_do_cow_write_fail(job, start, ret);
if (error_is_read) {
*error_is_read = false;
}
goto out;
}
hbitmap_set(job->bitmap, start, 1);
/* Publish progress, guest I/O counts as progress too. Note that the
* offset field is an opaque progress value, it is not a disk offset.
*/
job->sectors_read += n;
job->common.offset += n * BDRV_SECTOR_SIZE;
}
out:
if (bounce_buffer) {
qemu_vfree(bounce_buffer);
}
cow_request_end(&cow_request);
trace_backup_do_cow_return(job, sector_num, nb_sectors, ret);
qemu_co_rwlock_unlock(&job->flush_rwlock);
return ret;
}
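
With BACKUP_CLUSTER_BITS set to 16 the copy granularity is 64 KiB, i.e. 128 sectors of 512 bytes, so backup_do_cow() widens a guest request to whole clusters before copying. A standalone sketch of that rounding (the EX_* names and the sample request are made up for illustration):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define EX_CLUSTER_SIZE (1 << 16)                      /* 64 KiB */
#define EX_SECTORS_PER_CLUSTER (EX_CLUSTER_SIZE / 512) /* 128 */

int main(void)
{
    int64_t sector_num = 380;   /* guest write starting in cluster 2 */
    int nb_sectors = 9;         /* ...and ending in cluster 3 */
    int64_t start = sector_num / EX_SECTORS_PER_CLUSTER;
    int64_t end = (sector_num + nb_sectors + EX_SECTORS_PER_CLUSTER - 1) /
                  EX_SECTORS_PER_CLUSTER;   /* DIV_ROUND_UP */
    /* Prints "copy clusters 2..3": sectors 256..511 get copied out. */
    printf("copy clusters %" PRId64 "..%" PRId64 "\n", start, end - 1);
    return 0;
}
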
static int coroutine_fn backup_before_write_notify(
NotifierWithReturn *notifier,
void *opaque)
{
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void backup_iostatus_reset(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
bdrv_iostatus_reset(s->target);
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.iostatus_reset = backup_iostatus_reset,
};
static BlockErrorAction backup_error_action(BackupBlockJob *job,
bool read, int error)
{
if (read) {
return block_job_error_action(&job->common, job->common.bs,
job->on_source_error, true, error);
} else {
return block_job_error_action(&job->common, job->target,
job->on_target_error, false, error);
}
}
typedef struct {
int ret;
} BackupCompleteData;
static void backup_complete(BlockJob *job, void *opaque)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
BackupCompleteData *data = opaque;
bdrv_unref(s->target);
block_job_completed(job, data->ret);
g_free(data);
}
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
BackupCompleteData *data;
BlockDriverState *bs = job->common.bs;
BlockDriverState *target = job->target;
BlockdevOnError on_target_error = job->on_target_error;
NotifierWithReturn before_write = {
.notify = backup_before_write_notify,
};
int64_t start, end;
int ret = 0;
QLIST_INIT(&job->inflight_reqs);
qemu_co_rwlock_init(&job->flush_rwlock);
start = 0;
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->bitmap = hbitmap_alloc(end, 0);
bdrv_set_enable_write_cache(target, true);
bdrv_set_on_error(target, on_target_error, on_target_error);
bdrv_iostatus_enable(target);
bdrv_add_before_write_notifier(bs, &before_write);
if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
while (!block_job_is_cancelled(&job->common)) {
/* Yield until the job is cancelled. We just let our before_write
* notify callback service CoW requests. */
job->common.busy = false;
qemu_coroutine_yield();
job->common.busy = true;
}
} else {
/* Both FULL and TOP sync modes require copying. */
for (; start < end; start++) {
bool error_is_read;
if (block_job_is_cancelled(&job->common)) {
break;
}
/* We need to yield so that qemu_aio_flush() returns;
* without it, the VM does not reboot.
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
int i, n;
int alloced = 0;
/* Check to see if these blocks are already in the
* backing file. */
for (i = 0; i < BACKUP_SECTORS_PER_CLUSTER;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are all in the same state.
* For that reason we must verify each sector in the
* backup cluster length. We end up copying more than
* needed, but at some point that is always the case. */
alloced =
bdrv_is_allocated(bs,
start * BACKUP_SECTORS_PER_CLUSTER + i,
BACKUP_SECTORS_PER_CLUSTER - i, &n);
i += n;
if (alloced == 1 || n == 0) {
break;
}
}
/* If the above loop never found any sectors that are in
* the topmost image, skip this backup. */
if (alloced == 0) {
continue;
}
}
/* In FULL sync mode we copy the whole drive. */
ret = backup_do_cow(bs, start * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
backup_error_action(job, error_is_read, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT) {
break;
} else {
start--;
continue;
}
}
}
}
notifier_with_return_remove(&before_write);
/* wait until pending backup_do_cow() calls have completed */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
hbitmap_free(job->bitmap);
bdrv_iostatus_disable(target);
bdrv_op_unblock_all(target, job->common.blocker);
data = g_malloc(sizeof(*data));
data->ret = ret;
block_job_defer_to_main_loop(&job->common, backup_complete, data);
}
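
The throttling above follows a common pattern: accumulate the sectors read since the last pause, ask the limiter how long to sleep so the configured speed is not exceeded, then reset the counter. A simplified, self-contained model of that idea (toy_ratelimit_delay is invented here; QEMU's RateLimit in qemu/ratelimit.h differs in detail):

#include <stdint.h>
#include <stdio.h>

/* Toy limiter: given a rate in units per second and the units consumed
 * since the last call, return how many nanoseconds to pause. */
static uint64_t toy_ratelimit_delay(uint64_t units_per_sec, uint64_t units_done)
{
    if (units_per_sec == 0) {
        return 0;               /* unlimited */
    }
    return units_done * 1000000000ULL / units_per_sec;
}

int main(void)
{
    /* 10 MiB/s expressed in 512-byte sectors, 2 MiB read in this slice. */
    uint64_t sectors_per_sec = 10 * 1024 * 1024 / 512;
    uint64_t sectors_read = 2 * 1024 * 1024 / 512;
    /* Prints a 200000000 ns (0.2 s) pause. */
    printf("sleep %llu ns\n",
           (unsigned long long)toy_ratelimit_delay(sectors_per_sec, sectors_read));
    return 0;
}
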
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockCompletionFunc *cb, void *opaque,
Error **errp)
{
int64_t len;
assert(bs);
assert(target);
assert(cb);
if (bs == target) {
error_setg(errp, "Source and target cannot be the same");
return;
}
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
if (!bdrv_is_inserted(bs)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(bs));
return;
}
if (!bdrv_is_inserted(target)) {
error_setg(errp, "Device is not inserted: %s",
bdrv_get_device_name(target));
return;
}
if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
return;
}
if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
return;
}
len = bdrv_getlength(bs);
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
return;
}
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
return;
}
bdrv_op_block_all(target, job->common.blocker);
job->on_source_error = on_source_error;
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->common.len = len;
job->common.co = qemu_coroutine_create(backup_run);
qemu_coroutine_enter(job->common.co, job);
}


@@ -23,43 +23,45 @@
*/
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qapi/qmp/qbool.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
#include "block_int.h"
#include "module.h"
typedef struct BlkdebugVars {
int state;
/* If inject_errno != 0, an error is injected for requests */
int inject_errno;
/* Decides whether all future requests fail (false) or only the next one,
* after which inject_errno is reset to 0 (true) */
bool inject_once;
/* Decides whether aio_readv/writev fails right away (true) or reports the
* error only in the completion callback (false) */
bool inject_immediately;
} BlkdebugVars;
typedef struct BDRVBlkdebugState {
int state;
int new_state;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
QLIST_HEAD(, BlkdebugSuspendedReq) suspended_reqs;
BlkdebugVars vars;
QLIST_HEAD(list, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
} BDRVBlkdebugState;
typedef struct BlkdebugAIOCB {
BlockAIOCB common;
BlockDriverAIOCB common;
QEMUBH *bh;
int ret;
} BlkdebugAIOCB;
typedef struct BlkdebugSuspendedReq {
Coroutine *co;
char *tag;
QLIST_ENTRY(BlkdebugSuspendedReq) next;
} BlkdebugSuspendedReq;
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb);
static const AIOCBInfo blkdebug_aiocb_info = {
static AIOPool blkdebug_aio_pool = {
.aiocb_size = sizeof(BlkdebugAIOCB),
.cancel = blkdebug_aio_cancel,
};
enum {
ACTION_INJECT_ERROR,
ACTION_SET_STATE,
ACTION_SUSPEND,
};
typedef struct BlkdebugRule {
@@ -71,17 +73,12 @@ typedef struct BlkdebugRule {
int error;
int immediately;
int once;
int64_t sector;
} inject;
struct {
int new_state;
} set_state;
struct {
char *tag;
} suspend;
} options;
QLIST_ENTRY(BlkdebugRule) next;
QSIMPLEQ_ENTRY(BlkdebugRule) active_next;
} BlkdebugRule;
static QemuOptsList inject_error_opts = {
@@ -100,10 +97,6 @@ static QemuOptsList inject_error_opts = {
.name = "errno",
.type = QEMU_OPT_NUMBER,
},
{
.name = "sector",
.type = QEMU_OPT_NUMBER,
},
{
.name = "once",
.type = QEMU_OPT_BOOL,
@@ -154,7 +147,9 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_L2_ALLOC_COW_READ] = "l2_alloc.cow_read",
[BLKDBG_L2_ALLOC_WRITE] = "l2_alloc.write",
[BLKDBG_READ] = "read",
[BLKDBG_READ_AIO] = "read_aio",
[BLKDBG_READ_BACKING] = "read_backing",
[BLKDBG_READ_BACKING_AIO] = "read_backing_aio",
[BLKDBG_READ_COMPRESSED] = "read_compressed",
@@ -169,7 +164,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_REFTABLE_LOAD] = "reftable_load",
[BLKDBG_REFTABLE_GROW] = "reftable_grow",
[BLKDBG_REFTABLE_UPDATE] = "reftable_update",
[BLKDBG_REFBLOCK_LOAD] = "refblock_load",
[BLKDBG_REFBLOCK_UPDATE] = "refblock_update",
@@ -184,19 +178,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_CLUSTER_ALLOC] = "cluster_alloc",
[BLKDBG_CLUSTER_ALLOC_BYTES] = "cluster_alloc_bytes",
[BLKDBG_CLUSTER_FREE] = "cluster_free",
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
[BLKDBG_EMPTY_IMAGE_PREPARE] = "empty_image_prepare",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
@@ -216,7 +197,6 @@ static int get_event_by_name(const char *name, BlkDebugEvent *event)
struct add_rule_data {
BDRVBlkdebugState *s;
int action;
Error **errp;
};
static int add_rule(QemuOpts *opts, void *opaque)
@@ -229,16 +209,12 @@ static int add_rule(QemuOpts *opts, void *opaque)
/* Find the right event for the rule */
event_name = qemu_opt_get(opts, "event");
if (!event_name) {
error_setg(d->errp, "Missing event name for rule");
return -1;
} else if (get_event_by_name(event_name, &event) < 0) {
error_setg(d->errp, "Invalid event name \"%s\"", event_name);
if (!event_name || get_event_by_name(event_name, &event) < 0) {
return -1;
}
/* Set attributes common for all actions */
rule = g_malloc0(sizeof(*rule));
rule = qemu_mallocz(sizeof(*rule));
*rule = (struct BlkdebugRule) {
.event = event,
.action = d->action,
@@ -252,18 +228,12 @@ static int add_rule(QemuOpts *opts, void *opaque)
rule->options.inject.once = qemu_opt_get_bool(opts, "once", 0);
rule->options.inject.immediately =
qemu_opt_get_bool(opts, "immediately", 0);
rule->options.inject.sector = qemu_opt_get_number(opts, "sector", -1);
break;
case ACTION_SET_STATE:
rule->options.set_state.new_state =
qemu_opt_get_number(opts, "new_state", 0);
break;
case ACTION_SUSPEND:
rule->options.suspend.tag =
g_strdup(qemu_opt_get(opts, "tag"));
break;
};
/* Add the rule */
@@ -272,189 +242,75 @@ static int add_rule(QemuOpts *opts, void *opaque)
return 0;
}
static void remove_rule(BlkdebugRule *rule)
static int read_config(BDRVBlkdebugState *s, const char *filename)
{
switch (rule->action) {
case ACTION_INJECT_ERROR:
case ACTION_SET_STATE:
break;
case ACTION_SUSPEND:
g_free(rule->options.suspend.tag);
break;
}
QLIST_REMOVE(rule, next);
g_free(rule);
}
static int read_config(BDRVBlkdebugState *s, const char *filename,
QDict *options, Error **errp)
{
FILE *f = NULL;
FILE *f;
int ret;
struct add_rule_data d;
Error *local_err = NULL;
if (filename) {
f = fopen(filename, "r");
if (f == NULL) {
error_setg_errno(errp, errno, "Could not read blkdebug config file");
return -errno;
}
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
error_setg(errp, "Could not parse blkdebug config file");
ret = -EINVAL;
goto fail;
}
}
qemu_config_parse_qdict(options, config_groups, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
d.s = s;
d.action = ACTION_INJECT_ERROR;
d.errp = &local_err;
qemu_opts_foreach(&inject_error_opts, add_rule, &d, 1);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
qemu_opts_foreach(&inject_error_opts, add_rule, &d, 0);
d.action = ACTION_SET_STATE;
qemu_opts_foreach(&set_state_opts, add_rule, &d, 1);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
qemu_opts_foreach(&set_state_opts, add_rule, &d, 0);
ret = 0;
fail:
qemu_opts_reset(&inject_error_opts);
qemu_opts_reset(&set_state_opts);
if (f) {
fclose(f);
}
return ret;
}
/* Valid blkdebug filenames look like blkdebug:path/to/config:path/to/image */
static void blkdebug_parse_filename(const char *filename, QDict *options,
Error **errp)
{
const char *c;
/* Parse the blkdebug: prefix */
if (!strstart(filename, "blkdebug:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
return;
}
/* Parse config file path */
c = strchr(filename, ':');
if (c == NULL) {
error_setg(errp, "blkdebug requires both config file and image path");
return;
}
if (c != filename) {
QString *config_path;
config_path = qstring_from_substr(filename, 0, c - filename - 1);
qdict_put(options, "config", config_path);
}
/* TODO Allow multi-level nesting and set file.filename here */
filename = c + 1;
qdict_put(options, "x-image", qstring_from_str(filename));
}
static QemuOptsList runtime_opts = {
.name = "blkdebug",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "config",
.type = QEMU_OPT_STRING,
.help = "Path to the configuration file",
},
{
.name = "x-image",
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{
.name = "align",
.type = QEMU_OPT_SIZE,
.help = "Required alignment in bytes",
},
{ /* end of list */ }
},
};
static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int blkdebug_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVBlkdebugState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *config;
uint64_t align;
int ret;
char *config, *c;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
/* Parse the blkdebug: prefix */
if (strncmp(filename, "blkdebug:", strlen("blkdebug:"))) {
return -EINVAL;
}
filename += strlen("blkdebug:");
/* Read rules from config file */
c = strchr(filename, ':');
if (c == NULL) {
return -EINVAL;
}
/* Read rules from config file or command line options */
config = qemu_opt_get(opts, "config");
ret = read_config(s, config, options, errp);
if (ret) {
goto out;
config = strdup(filename);
config[c - filename] = '\0';
ret = read_config(s, config);
free(config);
if (ret < 0) {
return ret;
}
filename = c + 1;
/* Set initial state */
s->state = 1;
s->vars.state = 1;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
ret = bdrv_file_open(&bs->file, filename, flags);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
}
/* Set request alignment */
align = qemu_opt_get_size(opts, "align", bs->request_alignment);
if (align > 0 && align < INT_MAX && !(align & (align - 1))) {
bs->request_alignment = align;
} else {
error_setg(errp, "Invalid alignment");
ret = -EINVAL;
goto fail_unref;
}
ret = 0;
goto out;
fail_unref:
bdrv_unref(bs->file);
out:
qemu_opts_del(opts);
return ret;
}
return 0;
}
static void error_callback_bh(void *opaque)
@@ -462,99 +318,71 @@ static void error_callback_bh(void *opaque)
struct BlkdebugAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
acb->common.cb(acb->common.opaque, acb->ret);
qemu_aio_unref(acb);
qemu_aio_release(acb);
}
static BlockAIOCB *inject_error(BlockDriverState *bs,
BlockCompletionFunc *cb, void *opaque, BlkdebugRule *rule)
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb)
{
BlkdebugAIOCB *acb = container_of(blockacb, BlkdebugAIOCB, common);
qemu_aio_release(acb);
}
static BlockDriverAIOCB *inject_error(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
int error = rule->options.inject.error;
int error = s->vars.inject_errno;
struct BlkdebugAIOCB *acb;
QEMUBH *bh;
if (rule->options.inject.once) {
QSIMPLEQ_INIT(&s->active_rules);
if (s->vars.inject_once) {
s->vars.inject_errno = 0;
}
if (rule->options.inject.immediately) {
if (s->vars.inject_immediately) {
return NULL;
}
acb = qemu_aio_get(&blkdebug_aiocb_info, bs, cb, opaque);
acb = qemu_aio_get(&blkdebug_aio_pool, bs, cb, opaque);
acb->ret = -error;
bh = aio_bh_new(bdrv_get_aio_context(bs), error_callback_bh, acb);
bh = qemu_bh_new(error_callback_bh, acb);
acb->bh = bh;
qemu_bh_schedule(bh);
return &acb->common;
}
static BlockAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
static BlockDriverAIOCB *blkdebug_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
if (s->vars.inject_errno) {
return inject_error(bs, cb, opaque);
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
BlockDriverAIOCB *acb =
bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
return acb;
}
static BlockAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1 ||
(rule->options.inject.sector >= sector_num &&
rule->options.inject.sector < sector_num + nb_sectors)) {
break;
}
if (s->vars.inject_errno) {
return inject_error(bs, cb, opaque);
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
BlockDriverAIOCB *acb =
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
return acb;
}
static BlockAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
BlockCompletionFunc *cb, void *opaque)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugRule *rule = NULL;
QSIMPLEQ_FOREACH(rule, &s->active_rules, active_next) {
if (rule->options.inject.sector == -1) {
break;
}
}
if (rule && rule->options.inject.error) {
return inject_error(bs, cb, opaque, rule);
}
return bdrv_aio_flush(bs->file, cb, opaque);
}
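
In the newer variant of the read/write paths shown in this hunk, a queued inject-error rule fires either for any request (sector set to -1) or only when its configured sector falls inside the request. A standalone sketch of that matching (rule_matches is invented for illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Does an inject-error rule configured for 'rule_sector' apply to a request
 * covering [sector_num, sector_num + nb_sectors)?  -1 matches everything. */
static bool rule_matches(int64_t rule_sector, int64_t sector_num, int nb_sectors)
{
    return rule_sector == -1 ||
           (rule_sector >= sector_num && rule_sector < sector_num + nb_sectors);
}

int main(void)
{
    printf("%d %d %d\n",
           rule_matches(-1, 0, 8),     /* wildcard: 1 */
           rule_matches(10, 8, 4),     /* 10 in [8, 12): 1 */
           rule_matches(12, 8, 4));    /* 12 not in [8, 12): 0 */
    return 0;
}
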
static void blkdebug_close(BlockDriverState *bs)
{
BDRVBlkdebugState *s = bs->opaque;
@@ -563,232 +391,78 @@ static void blkdebug_close(BlockDriverState *bs)
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
remove_rule(rule);
QLIST_REMOVE(rule, next);
qemu_free(rule);
}
}
}
static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
static void blkdebug_flush(BlockDriverState *bs)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq r;
r = (BlkdebugSuspendedReq) {
.co = qemu_coroutine_self(),
.tag = g_strdup(rule->options.suspend.tag),
};
remove_rule(rule);
QLIST_INSERT_HEAD(&s->suspended_reqs, &r, next);
printf("blkdebug: Suspended request '%s'\n", r.tag);
qemu_coroutine_yield();
printf("blkdebug: Resuming request '%s'\n", r.tag);
QLIST_REMOVE(&r, next);
g_free(r.tag);
bdrv_flush(bs->file);
}
static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
bool injected)
static BlockDriverAIOCB *blkdebug_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(bs->file, cb, opaque);
}
static void process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
BlkdebugVars *old_vars)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugVars *vars = &s->vars;
/* Only process rules for the current state */
if (rule->state && rule->state != s->state) {
return injected;
if (rule->state && rule->state != old_vars->state) {
return;
}
/* Take the action */
switch (rule->action) {
case ACTION_INJECT_ERROR:
if (!injected) {
QSIMPLEQ_INIT(&s->active_rules);
injected = true;
}
QSIMPLEQ_INSERT_HEAD(&s->active_rules, rule, active_next);
vars->inject_errno = rule->options.inject.error;
vars->inject_once = rule->options.inject.once;
vars->inject_immediately = rule->options.inject.immediately;
break;
case ACTION_SET_STATE:
s->new_state = rule->options.set_state.new_state;
break;
case ACTION_SUSPEND:
suspend_request(bs, rule);
vars->state = rule->options.set_state.new_state;
break;
}
return injected;
}
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule, *next;
bool injected;
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
s->new_state = s->state;
QLIST_FOREACH_SAFE(rule, &s->rules[event], next, next) {
injected = process_rule(bs, rule, injected);
}
s->state = s->new_state;
}
static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
BlkDebugEvent blkdebug_event;
BlkdebugVars old_vars = s->vars;
if (get_event_by_name(event, &blkdebug_event) < 0) {
return -ENOENT;
}
rule = g_malloc(sizeof(*rule));
*rule = (struct BlkdebugRule) {
.event = blkdebug_event,
.action = ACTION_SUSPEND,
.state = 0,
.options.suspend.tag = g_strdup(tag),
};
QLIST_INSERT_HEAD(&s->rules[blkdebug_event], rule, next);
return 0;
}
static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *next;
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
return 0;
}
}
return -ENOENT;
}
static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *r_next;
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
remove_rule(rule);
ret = 0;
}
}
}
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, r_next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
ret = 0;
}
}
return ret;
}
static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r;
QLIST_FOREACH(r, &s->suspended_reqs, next) {
if (!strcmp(r->tag, tag)) {
return true;
}
}
return false;
}
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
}
static void blkdebug_refresh_filename(BlockDriverState *bs)
{
QDict *opts;
const QDictEntry *e;
bool force_json = false;
for (e = qdict_first(bs->options); e; e = qdict_next(bs->options, e)) {
if (strcmp(qdict_entry_key(e), "config") &&
strcmp(qdict_entry_key(e), "x-image") &&
strcmp(qdict_entry_key(e), "image") &&
strncmp(qdict_entry_key(e), "image.", strlen("image.")))
{
force_json = true;
break;
}
}
if (force_json && !bs->file->full_open_options) {
/* The config file cannot be recreated, so creating a plain filename
* is impossible */
if (event < 0 || event >= BLKDBG_EVENT_MAX) {
return;
}
if (!force_json && bs->file->exact_filename[0]) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkdebug:%s:%s",
qdict_get_try_str(bs->options, "config") ?: "",
bs->file->exact_filename);
QLIST_FOREACH(rule, &s->rules[event], next) {
process_rule(bs, rule, &old_vars);
}
opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("blkdebug")));
QINCREF(bs->file->full_open_options);
qdict_put_obj(opts, "image", QOBJECT(bs->file->full_open_options));
for (e = qdict_first(bs->options); e; e = qdict_next(bs->options, e)) {
if (strcmp(qdict_entry_key(e), "x-image") &&
strcmp(qdict_entry_key(e), "image") &&
strncmp(qdict_entry_key(e), "image.", strlen("image.")))
{
qobject_incref(qdict_entry_value(e));
qdict_put_obj(opts, qdict_entry_key(e), qdict_entry_value(e));
}
}
bs->full_open_options = opts;
}
static BlockDriver bdrv_blkdebug = {
.format_name = "blkdebug",
.protocol_name = "blkdebug",
.instance_size = sizeof(BDRVBlkdebugState),
.bdrv_parse_filename = blkdebug_parse_filename,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_getlength = blkdebug_getlength,
.bdrv_refresh_filename = blkdebug_refresh_filename,
.bdrv_flush = blkdebug_flush,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_aio_flush = blkdebug_aio_flush,
.bdrv_debug_event = blkdebug_debug_event,
.bdrv_debug_breakpoint = blkdebug_debug_breakpoint,
.bdrv_debug_remove_breakpoint
= blkdebug_debug_remove_breakpoint,
.bdrv_debug_resume = blkdebug_debug_resume,
.bdrv_debug_is_suspended = blkdebug_debug_is_suspended,
};
static void bdrv_blkdebug_init(void)


@@ -1,360 +0,0 @@
/*
* Block protocol for block driver correctness testing
*
* Copyright (C) 2010 IBM, Corp.
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include <stdarg.h>
#include "qemu/sockets.h" /* for EINPROGRESS on Windows */
#include "block/block_int.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qstring.h"
typedef struct {
BlockDriverState *test_file;
} BDRVBlkverifyState;
typedef struct BlkverifyAIOCB BlkverifyAIOCB;
struct BlkverifyAIOCB {
BlockAIOCB common;
QEMUBH *bh;
/* Request metadata */
bool is_write;
int64_t sector_num;
int nb_sectors;
int ret; /* first completed request's result */
unsigned int done; /* completion counter */
QEMUIOVector *qiov; /* user I/O vector */
QEMUIOVector raw_qiov; /* cloned I/O vector for raw file */
void *buf; /* buffer for raw file I/O */
void (*verify)(BlkverifyAIOCB *acb);
};
static const AIOCBInfo blkverify_aiocb_info = {
.aiocb_size = sizeof(BlkverifyAIOCB),
};
static void GCC_FMT_ATTR(2, 3) blkverify_err(BlkverifyAIOCB *acb,
const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
fprintf(stderr, "blkverify: %s sector_num=%" PRId64 " nb_sectors=%d ",
acb->is_write ? "write" : "read", acb->sector_num,
acb->nb_sectors);
vfprintf(stderr, fmt, ap);
fprintf(stderr, "\n");
va_end(ap);
exit(1);
}
/* Valid blkverify filenames look like blkverify:path/to/raw_image:path/to/image */
static void blkverify_parse_filename(const char *filename, QDict *options,
Error **errp)
{
const char *c;
QString *raw_path;
/* Parse the blkverify: prefix */
if (!strstart(filename, "blkverify:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
return;
}
/* Parse the raw image filename */
c = strchr(filename, ':');
if (c == NULL) {
error_setg(errp, "blkverify requires raw copy and original image path");
return;
}
/* TODO Implement option pass-through and set raw.filename here */
raw_path = qstring_from_substr(filename, 0, c - filename - 1);
qdict_put(options, "x-raw", raw_path);
/* TODO Allow multi-level nesting and set file.filename here */
filename = c + 1;
qdict_put(options, "x-image", qstring_from_str(filename));
}
static QemuOptsList runtime_opts = {
.name = "blkverify",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "x-raw",
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{
.name = "x-image",
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{ /* end of list */ }
},
};
static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVBlkverifyState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
/* Open the raw file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
}
/* Open the test file */
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", flags, false, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
s->test_file = NULL;
goto fail;
}
ret = 0;
fail:
qemu_opts_del(opts);
return ret;
}
static void blkverify_close(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_unref(s->test_file);
s->test_file = NULL;
}
static int64_t blkverify_getlength(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
return bdrv_getlength(s->test_file);
}
static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
BlockCompletionFunc *cb,
void *opaque)
{
BlkverifyAIOCB *acb = qemu_aio_get(&blkverify_aiocb_info, bs, cb, opaque);
acb->bh = NULL;
acb->is_write = is_write;
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->ret = -EINPROGRESS;
acb->done = 0;
acb->qiov = qiov;
acb->buf = NULL;
acb->verify = NULL;
return acb;
}
static void blkverify_aio_bh(void *opaque)
{
BlkverifyAIOCB *acb = opaque;
qemu_bh_delete(acb->bh);
if (acb->buf) {
qemu_iovec_destroy(&acb->raw_qiov);
qemu_vfree(acb->buf);
}
acb->common.cb(acb->common.opaque, acb->ret);
qemu_aio_unref(acb);
}
static void blkverify_aio_cb(void *opaque, int ret)
{
BlkverifyAIOCB *acb = opaque;
switch (++acb->done) {
case 1:
acb->ret = ret;
break;
case 2:
if (acb->ret != ret) {
blkverify_err(acb, "return value mismatch %d != %d", acb->ret, ret);
}
if (acb->verify) {
acb->verify(acb);
}
acb->bh = aio_bh_new(bdrv_get_aio_context(acb->common.bs),
blkverify_aio_bh, acb);
qemu_bh_schedule(acb->bh);
break;
}
}
static void blkverify_verify_readv(BlkverifyAIOCB *acb)
{
ssize_t offset = qemu_iovec_compare(acb->qiov, &acb->raw_qiov);
if (offset != -1) {
blkverify_err(acb, "contents mismatch in sector %" PRId64,
acb->sector_num + (int64_t)(offset / BDRV_SECTOR_SIZE));
}
}
static BlockAIOCB *blkverify_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
{
BDRVBlkverifyState *s = bs->opaque;
BlkverifyAIOCB *acb = blkverify_aio_get(bs, false, sector_num, qiov,
nb_sectors, cb, opaque);
acb->verify = blkverify_verify_readv;
acb->buf = qemu_blockalign(bs->file, qiov->size);
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_readv(bs->file, sector_num, &acb->raw_qiov, nb_sectors,
blkverify_aio_cb, acb);
return &acb->common;
}
static BlockAIOCB *blkverify_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
{
BDRVBlkverifyState *s = bs->opaque;
BlkverifyAIOCB *acb = blkverify_aio_get(bs, true, sector_num, qiov,
nb_sectors, cb, opaque);
bdrv_aio_writev(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
return &acb->common;
}
static BlockAIOCB *blkverify_aio_flush(BlockDriverState *bs,
BlockCompletionFunc *cb,
void *opaque)
{
BDRVBlkverifyState *s = bs->opaque;
/* Only flush test file, the raw file is not important */
return bdrv_aio_flush(s->test_file, cb, opaque);
}
static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate)
{
BDRVBlkverifyState *s = bs->opaque;
bool perm = bdrv_recurse_is_first_non_filter(bs->file, candidate);
if (perm) {
return true;
}
return bdrv_recurse_is_first_non_filter(s->test_file, candidate);
}
/* Propagate AioContext changes to ->test_file */
static void blkverify_detach_aio_context(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_detach_aio_context(s->test_file);
}
static void blkverify_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_attach_aio_context(s->test_file, new_context);
}
static void blkverify_refresh_filename(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
/* bs->file has already been refreshed */
bdrv_refresh_filename(s->test_file);
if (bs->file->full_open_options && s->test_file->full_open_options) {
QDict *opts = qdict_new();
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("blkverify")));
QINCREF(bs->file->full_open_options);
qdict_put_obj(opts, "raw", QOBJECT(bs->file->full_open_options));
QINCREF(s->test_file->full_open_options);
qdict_put_obj(opts, "test", QOBJECT(s->test_file->full_open_options));
bs->full_open_options = opts;
}
if (bs->file->exact_filename[0] && s->test_file->exact_filename[0]) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"blkverify:%s:%s",
bs->file->exact_filename, s->test_file->exact_filename);
}
}
static BlockDriver bdrv_blkverify = {
.format_name = "blkverify",
.protocol_name = "blkverify",
.instance_size = sizeof(BDRVBlkverifyState),
.bdrv_parse_filename = blkverify_parse_filename,
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_getlength = blkverify_getlength,
.bdrv_refresh_filename = blkverify_refresh_filename,
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.bdrv_attach_aio_context = blkverify_attach_aio_context,
.bdrv_detach_aio_context = blkverify_detach_aio_context,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = blkverify_recurse_is_first_non_filter,
};
static void bdrv_blkverify_init(void)
{
bdrv_register(&bdrv_blkverify);
}
block_init(bdrv_blkverify_init);
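
blkverify's read path clones the I/O vector, issues the same read against the raw copy and the test image, and reports the first sector where the two results differ. A self-contained sketch of that comparison on plain buffers (the driver itself works on QEMUIOVectors via qemu_iovec_compare; the names below are invented for illustration):

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define EX_SECTOR_SIZE 512

/* Return the index of the first differing sector, or -1 if the buffers match. */
static int64_t first_mismatch_sector(const uint8_t *raw, const uint8_t *test,
                                     size_t len, int64_t sector_num)
{
    for (size_t off = 0; off < len; off++) {
        if (raw[off] != test[off]) {
            return sector_num + (int64_t)(off / EX_SECTOR_SIZE);
        }
    }
    return -1;
}

int main(void)
{
    uint8_t raw[2 * EX_SECTOR_SIZE] = {0};
    uint8_t test[2 * EX_SECTOR_SIZE] = {0};
    test[EX_SECTOR_SIZE + 3] = 0xff;    /* corrupt the second sector */
    /* Prints "mismatch in sector 101" for a read starting at sector 100. */
    printf("mismatch in sector %" PRId64 "\n",
           first_mismatch_sector(raw, test, sizeof(raw), 100));
    return 0;
}
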


@@ -1,665 +0,0 @@
/*
* QEMU Block backends
*
* Copyright (C) 2014 Red Hat, Inc.
*
* Authors:
* Markus Armbruster <armbru@redhat.com>,
*
* This work is licensed under the terms of the GNU LGPL, version 2.1
* or later. See the COPYING.LIB file in the top-level directory.
*/
#include "sysemu/block-backend.h"
#include "block/block_int.h"
#include "sysemu/blockdev.h"
#include "qapi-event.h"
/* Number of coroutines to reserve per attached device model */
#define COROUTINE_POOL_RESERVATION 64
struct BlockBackend {
char *name;
int refcnt;
BlockDriverState *bs;
DriveInfo *legacy_dinfo; /* null unless created by drive_new() */
QTAILQ_ENTRY(BlockBackend) link; /* for blk_backends */
void *dev; /* attached device model, if any */
/* TODO change to DeviceState when all users are qdevified */
const BlockDevOps *dev_ops;
void *dev_opaque;
};
static void drive_info_del(DriveInfo *dinfo);
/* All the BlockBackends (except for hidden ones) */
static QTAILQ_HEAD(, BlockBackend) blk_backends =
QTAILQ_HEAD_INITIALIZER(blk_backends);
/*
* Create a new BlockBackend with @name, with a reference count of one.
* @name must not be null or empty.
* Fail if a BlockBackend with this name already exists.
* Store an error through @errp on failure, unless it's null.
* Return the new BlockBackend on success, null on failure.
*/
BlockBackend *blk_new(const char *name, Error **errp)
{
BlockBackend *blk;
assert(name && name[0]);
if (!id_wellformed(name)) {
error_setg(errp, "Invalid device name");
return NULL;
}
if (blk_by_name(name)) {
error_setg(errp, "Device with id '%s' already exists", name);
return NULL;
}
if (bdrv_find_node(name)) {
error_setg(errp,
"Device name '%s' conflicts with an existing node name",
name);
return NULL;
}
blk = g_new0(BlockBackend, 1);
blk->name = g_strdup(name);
blk->refcnt = 1;
QTAILQ_INSERT_TAIL(&blk_backends, blk, link);
return blk;
}
/*
* Create a new BlockBackend with a new BlockDriverState attached.
* Otherwise just like blk_new(), which see.
*/
BlockBackend *blk_new_with_bs(const char *name, Error **errp)
{
BlockBackend *blk;
BlockDriverState *bs;
blk = blk_new(name, errp);
if (!blk) {
return NULL;
}
bs = bdrv_new_root();
blk->bs = bs;
bs->blk = blk;
return blk;
}
static void blk_delete(BlockBackend *blk)
{
assert(!blk->refcnt);
assert(!blk->dev);
if (blk->bs) {
assert(blk->bs->blk == blk);
blk->bs->blk = NULL;
bdrv_unref(blk->bs);
blk->bs = NULL;
}
/* Avoid double-remove after blk_hide_on_behalf_of_do_drive_del() */
if (blk->name[0]) {
QTAILQ_REMOVE(&blk_backends, blk, link);
}
g_free(blk->name);
drive_info_del(blk->legacy_dinfo);
g_free(blk);
}
static void drive_info_del(DriveInfo *dinfo)
{
if (!dinfo) {
return;
}
qemu_opts_del(dinfo->opts);
g_free(dinfo->serial);
g_free(dinfo);
}
/*
* Increment @blk's reference count.
* @blk must not be null.
*/
void blk_ref(BlockBackend *blk)
{
blk->refcnt++;
}
/*
* Decrement @blk's reference count.
* If this drops it to zero, destroy @blk.
* For convenience, do nothing if @blk is null.
*/
void blk_unref(BlockBackend *blk)
{
if (blk) {
assert(blk->refcnt > 0);
if (!--blk->refcnt) {
blk_delete(blk);
}
}
}
/*
* Return the BlockBackend after @blk.
* If @blk is null, return the first one.
* Else, return @blk's next sibling, which may be null.
*
* To iterate over all BlockBackends, do
* for (blk = blk_next(NULL); blk; blk = blk_next(blk)) {
* ...
* }
*/
BlockBackend *blk_next(BlockBackend *blk)
{
return blk ? QTAILQ_NEXT(blk, link) : QTAILQ_FIRST(&blk_backends);
}
/*
* Return @blk's name, a non-null string.
* Wart: the name is empty iff @blk has been hidden with
* blk_hide_on_behalf_of_do_drive_del().
*/
const char *blk_name(BlockBackend *blk)
{
return blk->name;
}
/*
* Return the BlockBackend with name @name if it exists, else null.
* @name must not be null.
*/
BlockBackend *blk_by_name(const char *name)
{
BlockBackend *blk;
assert(name);
QTAILQ_FOREACH(blk, &blk_backends, link) {
if (!strcmp(name, blk->name)) {
return blk;
}
}
return NULL;
}
/*
* Return the BlockDriverState attached to @blk if any, else null.
*/
BlockDriverState *blk_bs(BlockBackend *blk)
{
return blk->bs;
}
/*
* Return @blk's DriveInfo if any, else null.
*/
DriveInfo *blk_legacy_dinfo(BlockBackend *blk)
{
return blk->legacy_dinfo;
}
/*
* Set @blk's DriveInfo to @dinfo, and return it.
* @blk must not have a DriveInfo set already.
* No other BlockBackend may have the same DriveInfo set.
*/
DriveInfo *blk_set_legacy_dinfo(BlockBackend *blk, DriveInfo *dinfo)
{
assert(!blk->legacy_dinfo);
return blk->legacy_dinfo = dinfo;
}
/*
* Return the BlockBackend with DriveInfo @dinfo.
* It must exist.
*/
BlockBackend *blk_by_legacy_dinfo(DriveInfo *dinfo)
{
BlockBackend *blk;
QTAILQ_FOREACH(blk, &blk_backends, link) {
if (blk->legacy_dinfo == dinfo) {
return blk;
}
}
abort();
}
/*
* Hide @blk.
* @blk must not have been hidden already.
* Make attached BlockDriverState, if any, anonymous.
* Once hidden, @blk is invisible to all functions that don't receive
* it as argument. For example, blk_by_name() won't return it.
* Strictly for use by do_drive_del().
* TODO get rid of it!
*/
void blk_hide_on_behalf_of_do_drive_del(BlockBackend *blk)
{
QTAILQ_REMOVE(&blk_backends, blk, link);
blk->name[0] = 0;
if (blk->bs) {
bdrv_make_anon(blk->bs);
}
}
/*
* Attach device model @dev to @blk.
* Return 0 on success, -EBUSY when a device model is attached already.
*/
int blk_attach_dev(BlockBackend *blk, void *dev)
/* TODO change to DeviceState *dev when all users are qdevified */
{
if (blk->dev) {
return -EBUSY;
}
blk_ref(blk);
blk->dev = dev;
bdrv_iostatus_reset(blk->bs);
return 0;
}
/*
* Attach device model @dev to @blk.
* @blk must not have a device model attached already.
* TODO qdevified devices don't use this, remove when devices are qdevified
*/
void blk_attach_dev_nofail(BlockBackend *blk, void *dev)
{
if (blk_attach_dev(blk, dev) < 0) {
abort();
}
}
/*
* Detach device model @dev from @blk.
* @dev must be currently attached to @blk.
*/
void blk_detach_dev(BlockBackend *blk, void *dev)
/* TODO change to DeviceState *dev when all users are qdevified */
{
assert(blk->dev == dev);
blk->dev = NULL;
blk->dev_ops = NULL;
blk->dev_opaque = NULL;
bdrv_set_guest_block_size(blk->bs, 512);
blk_unref(blk);
}
/*
* Return the device model attached to @blk if any, else null.
*/
void *blk_get_attached_dev(BlockBackend *blk)
/* TODO change to return DeviceState * when all users are qdevified */
{
return blk->dev;
}
/*
* Set @blk's device model callbacks to @ops.
* @opaque is the opaque argument to pass to the callbacks.
* This is for use by device models.
*/
void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops,
void *opaque)
{
blk->dev_ops = ops;
blk->dev_opaque = opaque;
}
/*
* Notify @blk's attached device model of media change.
* If @load is true, notify of media load.
* Else, notify of media eject.
* Also send DEVICE_TRAY_MOVED events as appropriate.
*/
void blk_dev_change_media_cb(BlockBackend *blk, bool load)
{
if (blk->dev_ops && blk->dev_ops->change_media_cb) {
bool tray_was_closed = !blk_dev_is_tray_open(blk);
blk->dev_ops->change_media_cb(blk->dev_opaque, load);
if (tray_was_closed) {
/* tray open */
qapi_event_send_device_tray_moved(blk_name(blk),
true, &error_abort);
}
if (load) {
/* tray close */
qapi_event_send_device_tray_moved(blk_name(blk),
false, &error_abort);
}
}
}
/*
* Does @blk's attached device model have removable media?
* %true if no device model is attached.
*/
bool blk_dev_has_removable_media(BlockBackend *blk)
{
return !blk->dev || (blk->dev_ops && blk->dev_ops->change_media_cb);
}
/*
* Notify @blk's attached device model of a media eject request.
* If @force is true, the medium is about to be yanked out forcefully.
*/
void blk_dev_eject_request(BlockBackend *blk, bool force)
{
if (blk->dev_ops && blk->dev_ops->eject_request_cb) {
blk->dev_ops->eject_request_cb(blk->dev_opaque, force);
}
}
/*
* Does @blk's attached device model have a tray, and is it open?
*/
bool blk_dev_is_tray_open(BlockBackend *blk)
{
if (blk->dev_ops && blk->dev_ops->is_tray_open) {
return blk->dev_ops->is_tray_open(blk->dev_opaque);
}
return false;
}
/*
* Does @blk's attached device model have the medium locked?
* %false if the device model has no such lock.
*/
bool blk_dev_is_medium_locked(BlockBackend *blk)
{
if (blk->dev_ops && blk->dev_ops->is_medium_locked) {
return blk->dev_ops->is_medium_locked(blk->dev_opaque);
}
return false;
}
/*
* Notify @blk's attached device model of a backend size change.
*/
void blk_dev_resize_cb(BlockBackend *blk)
{
if (blk->dev_ops && blk->dev_ops->resize_cb) {
blk->dev_ops->resize_cb(blk->dev_opaque);
}
}
void blk_iostatus_enable(BlockBackend *blk)
{
bdrv_iostatus_enable(blk->bs);
}
int blk_read(BlockBackend *blk, int64_t sector_num, uint8_t *buf,
int nb_sectors)
{
return bdrv_read(blk->bs, sector_num, buf, nb_sectors);
}
int blk_read_unthrottled(BlockBackend *blk, int64_t sector_num, uint8_t *buf,
int nb_sectors)
{
return bdrv_read_unthrottled(blk->bs, sector_num, buf, nb_sectors);
}
int blk_write(BlockBackend *blk, int64_t sector_num, const uint8_t *buf,
int nb_sectors)
{
return bdrv_write(blk->bs, sector_num, buf, nb_sectors);
}
BlockAIOCB *blk_aio_write_zeroes(BlockBackend *blk, int64_t sector_num,
int nb_sectors, BdrvRequestFlags flags,
BlockCompletionFunc *cb, void *opaque)
{
return bdrv_aio_write_zeroes(blk->bs, sector_num, nb_sectors, flags,
cb, opaque);
}
int blk_pread(BlockBackend *blk, int64_t offset, void *buf, int count)
{
return bdrv_pread(blk->bs, offset, buf, count);
}
int blk_pwrite(BlockBackend *blk, int64_t offset, const void *buf, int count)
{
return bdrv_pwrite(blk->bs, offset, buf, count);
}
int64_t blk_getlength(BlockBackend *blk)
{
return bdrv_getlength(blk->bs);
}
void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr)
{
bdrv_get_geometry(blk->bs, nb_sectors_ptr);
}
BlockAIOCB *blk_aio_readv(BlockBackend *blk, int64_t sector_num,
QEMUIOVector *iov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
{
return bdrv_aio_readv(blk->bs, sector_num, iov, nb_sectors, cb, opaque);
}
BlockAIOCB *blk_aio_writev(BlockBackend *blk, int64_t sector_num,
QEMUIOVector *iov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
{
return bdrv_aio_writev(blk->bs, sector_num, iov, nb_sectors, cb, opaque);
}
BlockAIOCB *blk_aio_flush(BlockBackend *blk,
BlockCompletionFunc *cb, void *opaque)
{
return bdrv_aio_flush(blk->bs, cb, opaque);
}
BlockAIOCB *blk_aio_discard(BlockBackend *blk,
int64_t sector_num, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
{
return bdrv_aio_discard(blk->bs, sector_num, nb_sectors, cb, opaque);
}
void blk_aio_cancel(BlockAIOCB *acb)
{
bdrv_aio_cancel(acb);
}
void blk_aio_cancel_async(BlockAIOCB *acb)
{
bdrv_aio_cancel_async(acb);
}
int blk_aio_multiwrite(BlockBackend *blk, BlockRequest *reqs, int num_reqs)
{
return bdrv_aio_multiwrite(blk->bs, reqs, num_reqs);
}
int blk_ioctl(BlockBackend *blk, unsigned long int req, void *buf)
{
return bdrv_ioctl(blk->bs, req, buf);
}
BlockAIOCB *blk_aio_ioctl(BlockBackend *blk, unsigned long int req, void *buf,
BlockCompletionFunc *cb, void *opaque)
{
return bdrv_aio_ioctl(blk->bs, req, buf, cb, opaque);
}
int blk_co_discard(BlockBackend *blk, int64_t sector_num, int nb_sectors)
{
return bdrv_co_discard(blk->bs, sector_num, nb_sectors);
}
int blk_co_flush(BlockBackend *blk)
{
return bdrv_co_flush(blk->bs);
}
int blk_flush(BlockBackend *blk)
{
return bdrv_flush(blk->bs);
}
int blk_flush_all(void)
{
return bdrv_flush_all();
}
void blk_drain_all(void)
{
bdrv_drain_all();
}
BlockdevOnError blk_get_on_error(BlockBackend *blk, bool is_read)
{
return bdrv_get_on_error(blk->bs, is_read);
}
BlockErrorAction blk_get_error_action(BlockBackend *blk, bool is_read,
int error)
{
return bdrv_get_error_action(blk->bs, is_read, error);
}
void blk_error_action(BlockBackend *blk, BlockErrorAction action,
bool is_read, int error)
{
bdrv_error_action(blk->bs, action, is_read, error);
}
int blk_is_read_only(BlockBackend *blk)
{
return bdrv_is_read_only(blk->bs);
}
int blk_is_sg(BlockBackend *blk)
{
return bdrv_is_sg(blk->bs);
}
int blk_enable_write_cache(BlockBackend *blk)
{
return bdrv_enable_write_cache(blk->bs);
}
void blk_set_enable_write_cache(BlockBackend *blk, bool wce)
{
bdrv_set_enable_write_cache(blk->bs, wce);
}
void blk_invalidate_cache(BlockBackend *blk, Error **errp)
{
bdrv_invalidate_cache(blk->bs, errp);
}
int blk_is_inserted(BlockBackend *blk)
{
return bdrv_is_inserted(blk->bs);
}
void blk_lock_medium(BlockBackend *blk, bool locked)
{
bdrv_lock_medium(blk->bs, locked);
}
void blk_eject(BlockBackend *blk, bool eject_flag)
{
bdrv_eject(blk->bs, eject_flag);
}
int blk_get_flags(BlockBackend *blk)
{
return bdrv_get_flags(blk->bs);
}
void blk_set_guest_block_size(BlockBackend *blk, int align)
{
bdrv_set_guest_block_size(blk->bs, align);
}
void *blk_blockalign(BlockBackend *blk, size_t size)
{
return qemu_blockalign(blk ? blk->bs : NULL, size);
}
bool blk_op_is_blocked(BlockBackend *blk, BlockOpType op, Error **errp)
{
return bdrv_op_is_blocked(blk->bs, op, errp);
}
void blk_op_unblock(BlockBackend *blk, BlockOpType op, Error *reason)
{
bdrv_op_unblock(blk->bs, op, reason);
}
void blk_op_block_all(BlockBackend *blk, Error *reason)
{
bdrv_op_block_all(blk->bs, reason);
}
void blk_op_unblock_all(BlockBackend *blk, Error *reason)
{
bdrv_op_unblock_all(blk->bs, reason);
}
AioContext *blk_get_aio_context(BlockBackend *blk)
{
return bdrv_get_aio_context(blk->bs);
}
void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
{
bdrv_set_aio_context(blk->bs, new_context);
}
void blk_add_aio_context_notifier(BlockBackend *blk,
void (*attached_aio_context)(AioContext *new_context, void *opaque),
void (*detach_aio_context)(void *opaque), void *opaque)
{
bdrv_add_aio_context_notifier(blk->bs, attached_aio_context,
detach_aio_context, opaque);
}
void blk_remove_aio_context_notifier(BlockBackend *blk,
void (*attached_aio_context)(AioContext *,
void *),
void (*detach_aio_context)(void *),
void *opaque)
{
bdrv_remove_aio_context_notifier(blk->bs, attached_aio_context,
detach_aio_context, opaque);
}
void blk_add_close_notifier(BlockBackend *blk, Notifier *notify)
{
bdrv_add_close_notifier(blk->bs, notify);
}
void blk_io_plug(BlockBackend *blk)
{
bdrv_io_plug(blk->bs);
}
void blk_io_unplug(BlockBackend *blk)
{
bdrv_io_unplug(blk->bs);
}
BlockAcctStats *blk_get_stats(BlockBackend *blk)
{
return bdrv_get_stats(blk->bs);
}
void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
BlockCompletionFunc *cb, void *opaque)
{
return qemu_aio_get(aiocb_info, blk_bs(blk), cb, opaque);
}


@@ -23,8 +23,8 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
/**************************************************************/
@@ -39,41 +39,55 @@
// not allocated: 0xffffffff
// always little-endian
struct bochs_header {
char magic[32]; /* "Bochs Virtual HD Image" */
char type[16]; /* "Redolog" */
char subtype[16]; /* "Undoable" / "Volatile" / "Growing" */
struct bochs_header_v1 {
char magic[32]; // "Bochs Virtual HD Image"
char type[16]; // "Redolog"
char subtype[16]; // "Undoable" / "Volatile" / "Growing"
uint32_t version;
uint32_t header; /* size of header */
uint32_t catalog; /* num of entries */
uint32_t bitmap; /* bitmap size */
uint32_t extent; /* extent size */
uint32_t header; // size of header
union {
struct {
uint32_t reserved; /* for ??? */
uint64_t disk; /* disk size */
char padding[HEADER_SIZE - 64 - 20 - 12];
} QEMU_PACKED redolog;
struct {
uint64_t disk; /* disk size */
char padding[HEADER_SIZE - 64 - 20 - 8];
} QEMU_PACKED redolog_v1;
char padding[HEADER_SIZE - 64 - 20];
uint32_t catalog; // num of entries
uint32_t bitmap; // bitmap size
uint32_t extent; // extent size
uint64_t disk; // disk size
char padding[HEADER_SIZE - 64 - 8 - 20];
} redolog;
char padding[HEADER_SIZE - 64 - 8];
} extra;
} QEMU_PACKED;
};
// always little-endian
struct bochs_header {
char magic[32]; // "Bochs Virtual HD Image"
char type[16]; // "Redolog"
char subtype[16]; // "Undoable" / "Volatile" / "Growing"
uint32_t version;
uint32_t header; // size of header
union {
struct {
uint32_t catalog; // num of entries
uint32_t bitmap; // bitmap size
uint32_t extent; // extent size
uint32_t reserved; // for ???
uint64_t disk; // disk size
char padding[HEADER_SIZE - 64 - 8 - 24];
} redolog;
char padding[HEADER_SIZE - 64 - 8];
} extra;
};
typedef struct BDRVBochsState {
CoMutex lock;
uint32_t *catalog_bitmap;
uint32_t catalog_size;
int catalog_size;
uint32_t data_offset;
int data_offset;
uint32_t bitmap_blocks;
uint32_t extent_blocks;
uint32_t extent_size;
int bitmap_blocks;
int extent_blocks;
int extent_size;
} BDRVBochsState;
static int bochs_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -93,19 +107,17 @@ static int bochs_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int bochs_open(BlockDriverState *bs, int flags)
{
BDRVBochsState *s = bs->opaque;
uint32_t i;
int i;
struct bochs_header bochs;
int ret;
struct bochs_header_v1 header_v1;
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &bochs, sizeof(bochs));
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, 0, &bochs, sizeof(bochs)) != sizeof(bochs)) {
goto fail;
}
if (strcmp(bochs.magic, HEADER_MAGIC) ||
@@ -113,107 +125,62 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
strcmp(bochs.subtype, GROWING_TYPE) ||
((le32_to_cpu(bochs.version) != HEADER_VERSION) &&
(le32_to_cpu(bochs.version) != HEADER_V1))) {
error_setg(errp, "Image not in Bochs format");
return -EINVAL;
goto fail;
}
if (le32_to_cpu(bochs.version) == HEADER_V1) {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog_v1.disk) / 512;
memcpy(&header_v1, &bochs, sizeof(bochs));
bs->total_sectors = le64_to_cpu(header_v1.extra.redolog.disk) / 512;
} else {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog.disk) / 512;
}
/* Limit to 1M entries to avoid unbounded allocation. This is what is
* needed for the largest image that bximage can create (~8 TB). */
s->catalog_size = le32_to_cpu(bochs.catalog);
if (s->catalog_size > 0x100000) {
error_setg(errp, "Catalog size is too large");
return -EFBIG;
}
s->catalog_bitmap = g_try_new(uint32_t, s->catalog_size);
if (s->catalog_size && s->catalog_bitmap == NULL) {
error_setg(errp, "Could not allocate memory for catalog");
return -ENOMEM;
}
ret = bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4);
if (ret < 0) {
s->catalog_size = le32_to_cpu(bochs.extra.redolog.catalog);
s->catalog_bitmap = qemu_malloc(s->catalog_size * 4);
if (bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4) != s->catalog_size * 4)
goto fail;
}
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
s->data_offset = le32_to_cpu(bochs.header) + (s->catalog_size * 4);
s->bitmap_blocks = 1 + (le32_to_cpu(bochs.bitmap) - 1) / 512;
s->extent_blocks = 1 + (le32_to_cpu(bochs.extent) - 1) / 512;
s->bitmap_blocks = 1 + (le32_to_cpu(bochs.extra.redolog.bitmap) - 1) / 512;
s->extent_blocks = 1 + (le32_to_cpu(bochs.extra.redolog.extent) - 1) / 512;
s->extent_size = le32_to_cpu(bochs.extent);
if (s->extent_size < BDRV_SECTOR_SIZE) {
/* bximage actually never creates extents smaller than 4k */
error_setg(errp, "Extent size must be at least 512");
ret = -EINVAL;
goto fail;
} else if (!is_power_of_2(s->extent_size)) {
error_setg(errp, "Extent size %" PRIu32 " is not a power of two",
s->extent_size);
ret = -EINVAL;
goto fail;
} else if (s->extent_size > 0x800000) {
error_setg(errp, "Extent size %" PRIu32 " is too large",
s->extent_size);
ret = -EINVAL;
goto fail;
}
s->extent_size = le32_to_cpu(bochs.extra.redolog.extent);
if (s->catalog_size < DIV_ROUND_UP(bs->total_sectors,
s->extent_size / BDRV_SECTOR_SIZE))
{
error_setg(errp, "Catalog size is too small for this disk size");
ret = -EINVAL;
goto fail;
}
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->catalog_bitmap);
return ret;
fail:
return -1;
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVBochsState *s = bs->opaque;
uint64_t offset = sector_num * 512;
uint64_t extent_index, extent_offset, bitmap_offset;
int64_t offset = sector_num * 512;
int64_t extent_index, extent_offset, bitmap_offset;
char bitmap_entry;
int ret;
// seek to sector
extent_index = offset / s->extent_size;
extent_offset = (offset % s->extent_size) / 512;
if (s->catalog_bitmap[extent_index] == 0xffffffff) {
return 0; /* not allocated */
return -1; /* not allocated */
}
bitmap_offset = s->data_offset +
(512 * (uint64_t) s->catalog_bitmap[extent_index] *
bitmap_offset = s->data_offset + (512 * s->catalog_bitmap[extent_index] *
(s->extent_blocks + s->bitmap_blocks));
/* read in bitmap for current extent */
ret = bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1) != 1) {
return -1;
}
if (!((bitmap_entry >> (extent_offset % 8)) & 1)) {
return 0; /* not allocated */
return -1; /* not allocated */
}
return bitmap_offset + (512 * (s->bitmap_blocks + extent_offset));
@@ -226,16 +193,13 @@ static int bochs_read(BlockDriverState *bs, int64_t sector_num,
while (nb_sectors > 0) {
int64_t block_offset = seek_to_sector(bs, sector_num);
if (block_offset < 0) {
return block_offset;
} else if (block_offset > 0) {
if (block_offset >= 0) {
ret = bdrv_pread(bs->file, block_offset, buf, 512);
if (ret < 0) {
return ret;
if (ret != 512) {
return -1;
}
} else {
} else
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
buf += 512;
@@ -243,21 +207,10 @@ static int bochs_read(BlockDriverState *bs, int64_t sector_num,
return 0;
}
static coroutine_fn int bochs_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVBochsState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = bochs_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void bochs_close(BlockDriverState *bs)
{
BDRVBochsState *s = bs->opaque;
g_free(s->catalog_bitmap);
qemu_free(s->catalog_bitmap);
}
static BlockDriver bdrv_bochs = {
@@ -265,7 +218,7 @@ static BlockDriver bdrv_bochs = {
.instance_size = sizeof(BDRVBochsState),
.bdrv_probe = bochs_probe,
.bdrv_open = bochs_open,
.bdrv_read = bochs_co_read,
.bdrv_read = bochs_read,
.bdrv_close = bochs_close,
};
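As a rough aside, the offset arithmetic in seek_to_sector() above can be checked with a small standalone program; the extent size, data offset and catalog entry below are made-up example values, not taken from a real image:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
/* Standalone illustration of the Bochs redolog layout used by
 * seek_to_sector(): each extent is preceded by its allocation bitmap,
 * and the catalog entry says which slot of the file the extent occupies. */
int main(void)
{
    uint64_t extent_size   = 8 * 1024 * 1024;        /* bytes, example value */
    uint64_t bitmap_bytes  = extent_size / 512 / 8;  /* 1 bit per sector     */
    uint64_t bitmap_blocks = 1 + (bitmap_bytes - 1) / 512;
    uint64_t extent_blocks = 1 + (extent_size - 1) / 512;
    uint64_t data_offset   = 4096;   /* header + catalog, example value      */
    uint64_t catalog_entry = 3;      /* what catalog_bitmap[extent_index] held */
    uint64_t sector_num    = 40000;
    uint64_t offset        = sector_num * 512;
    uint64_t extent_index  = offset / extent_size;
    uint64_t extent_offset = (offset % extent_size) / 512;
    uint64_t bitmap_offset = data_offset +
        512 * catalog_entry * (extent_blocks + bitmap_blocks);
    uint64_t sector_offset = bitmap_offset +
        512 * (bitmap_blocks + extent_offset);
    printf("extent %" PRIu64 ", sector %" PRIu64 " within it\n",
           extent_index, extent_offset);
    printf("bitmap at byte %" PRIu64 ", sector data at byte %" PRIu64 "\n",
           bitmap_offset, sector_offset);
    return 0;
}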


@@ -22,18 +22,14 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
#include <zlib.h>
/* Maximum compressed block size */
#define MAX_BLOCK_SIZE (64 * 1024 * 1024)
typedef struct BDRVCloopState {
CoMutex lock;
uint32_t block_size;
uint32_t n_blocks;
uint64_t *offsets;
uint64_t* offsets;
uint32_t sectors_per_block;
uint32_t current_block;
uint8_t *compressed_block;
@@ -43,184 +39,89 @@ typedef struct BDRVCloopState {
static int cloop_probe(const uint8_t *buf, int buf_size, const char *filename)
{
const char *magic_version_2_0 = "#!/bin/sh\n"
const char* magic_version_2_0="#!/bin/sh\n"
"#V2.0 Format\n"
"modprobe cloop file=$0 && mount -r -t iso9660 /dev/cloop $1\n";
int length = strlen(magic_version_2_0);
if (length > buf_size) {
length = buf_size;
}
if (!memcmp(magic_version_2_0, buf, length)) {
int length=strlen(magic_version_2_0);
if(length>buf_size)
length=buf_size;
if(!memcmp(magic_version_2_0,buf,length))
return 2;
}
return 0;
}
static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int cloop_open(BlockDriverState *bs, int flags)
{
BDRVCloopState *s = bs->opaque;
uint32_t offsets_size, max_compressed_block_size = 1, i;
int ret;
uint32_t offsets_size,max_compressed_block_size=1,i;
bs->read_only = 1;
/* read header */
ret = bdrv_pread(bs->file, 128, &s->block_size, 4);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, 128, &s->block_size, 4) < 4) {
goto cloop_close;
}
s->block_size = be32_to_cpu(s->block_size);
if (s->block_size % 512) {
error_setg(errp, "block_size %" PRIu32 " must be a multiple of 512",
s->block_size);
return -EINVAL;
}
if (s->block_size == 0) {
error_setg(errp, "block_size cannot be zero");
return -EINVAL;
}
/* cloop's create_compressed_fs.c warns about block sizes beyond 256 KB but
* we can accept more. Prevent ridiculous values like 4 GB - 1 since we
* need a buffer this big.
*/
if (s->block_size > MAX_BLOCK_SIZE) {
error_setg(errp, "block_size %" PRIu32 " must be %u MB or less",
s->block_size,
MAX_BLOCK_SIZE / (1024 * 1024));
return -EINVAL;
}
ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4) < 4) {
goto cloop_close;
}
s->n_blocks = be32_to_cpu(s->n_blocks);
/* read offsets */
if (s->n_blocks > (UINT32_MAX - 1) / sizeof(uint64_t)) {
/* Prevent integer overflow */
error_setg(errp, "n_blocks %" PRIu32 " must be %zu or less",
s->n_blocks,
(UINT32_MAX - 1) / sizeof(uint64_t));
return -EINVAL;
offsets_size = s->n_blocks * sizeof(uint64_t);
s->offsets = qemu_malloc(offsets_size);
if (bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size) <
offsets_size) {
goto cloop_close;
}
offsets_size = (s->n_blocks + 1) * sizeof(uint64_t);
if (offsets_size > 512 * 1024 * 1024) {
/* Prevent ridiculous offsets_size which causes memory allocation to
* fail or overflows bdrv_pread() size. In practice the 512 MB
* offsets[] limit supports 16 TB images at 256 KB block size.
*/
error_setg(errp, "image requires too many offsets, "
"try increasing block size");
return -EINVAL;
}
s->offsets = g_try_malloc(offsets_size);
if (s->offsets == NULL) {
error_setg(errp, "Could not allocate offsets table");
return -ENOMEM;
}
ret = bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size);
if (ret < 0) {
goto fail;
}
for (i = 0; i < s->n_blocks + 1; i++) {
uint64_t size;
s->offsets[i] = be64_to_cpu(s->offsets[i]);
if (i == 0) {
continue;
}
if (s->offsets[i] < s->offsets[i - 1]) {
error_setg(errp, "offsets not monotonically increasing at "
"index %" PRIu32 ", image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
size = s->offsets[i] - s->offsets[i - 1];
/* Compressed blocks should be smaller than the uncompressed block size
* but compression may have performed poorly, so the compressed block is
* actually bigger. Clamp down on unrealistic values to prevent
* ridiculous s->compressed_block allocation.
*/
if (size > 2 * MAX_BLOCK_SIZE) {
error_setg(errp, "invalid compressed block size at index %" PRIu32
", image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
for(i=0;i<s->n_blocks;i++) {
s->offsets[i]=be64_to_cpu(s->offsets[i]);
if(i>0) {
uint32_t size=s->offsets[i]-s->offsets[i-1];
if(size>max_compressed_block_size)
max_compressed_block_size=size;
}
}
/* initialize zlib engine */
s->compressed_block = g_try_malloc(max_compressed_block_size + 1);
if (s->compressed_block == NULL) {
error_setg(errp, "Could not allocate compressed_block");
ret = -ENOMEM;
goto fail;
}
s->uncompressed_block = g_try_malloc(s->block_size);
if (s->uncompressed_block == NULL) {
error_setg(errp, "Could not allocate uncompressed_block");
ret = -ENOMEM;
goto fail;
}
if (inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
}
s->current_block = s->n_blocks;
s->compressed_block = qemu_malloc(max_compressed_block_size+1);
s->uncompressed_block = qemu_malloc(s->block_size);
if(inflateInit(&s->zstream) != Z_OK)
goto cloop_close;
s->current_block=s->n_blocks;
s->sectors_per_block = s->block_size/512;
bs->total_sectors = s->n_blocks * s->sectors_per_block;
qemu_co_mutex_init(&s->lock);
bs->total_sectors = s->n_blocks*s->sectors_per_block;
return 0;
fail:
g_free(s->offsets);
g_free(s->compressed_block);
g_free(s->uncompressed_block);
return ret;
cloop_close:
return -1;
}
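As a quick sanity check of the "16 TB images at 256 KB block size" figure in the comments above, this standalone arithmetic sketch reproduces the numbers (only the 512 MB cap and the entry size come from the code):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
/* Each offsets[] entry is a uint64_t; n_blocks data blocks need
 * n_blocks + 1 entries, and each data block covers block_size bytes. */
int main(void)
{
    uint64_t offsets_cap = 512ULL * 1024 * 1024;          /* bytes */
    uint64_t entries     = offsets_cap / sizeof(uint64_t);
    uint64_t block_size  = 256 * 1024;                    /* bytes */
    uint64_t image_bytes = (entries - 1) * block_size;    /* one sentinel entry */
    printf("%" PRIu64 " entries cover %" PRIu64 " GiB (just under 16 TiB)\n",
           entries, image_bytes >> 30);
    return 0;
}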
static inline int cloop_read_block(BlockDriverState *bs, int block_num)
{
BDRVCloopState *s = bs->opaque;
if (s->current_block != block_num) {
if(s->current_block != block_num) {
int ret;
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
uint32_t bytes = s->offsets[block_num+1]-s->offsets[block_num];
ret = bdrv_pread(bs->file, s->offsets[block_num], s->compressed_block,
bytes);
if (ret != bytes) {
if (ret != bytes)
return -1;
}
s->zstream.next_in = s->compressed_block;
s->zstream.avail_in = bytes;
s->zstream.next_out = s->uncompressed_block;
s->zstream.avail_out = s->block_size;
ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
if(ret != Z_OK)
return -1;
}
ret = inflate(&s->zstream, Z_FINISH);
if (ret != Z_STREAM_END || s->zstream.total_out != s->block_size) {
if(ret != Z_STREAM_END || s->zstream.total_out != s->block_size)
return -1;
}
s->current_block = block_num;
}
@@ -233,36 +134,23 @@ static int cloop_read(BlockDriverState *bs, int64_t sector_num,
BDRVCloopState *s = bs->opaque;
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_block =
((sector_num + i) % s->sectors_per_block),
block_num = (sector_num + i) / s->sectors_per_block;
if (cloop_read_block(bs, block_num) != 0) {
for(i=0;i<nb_sectors;i++) {
uint32_t sector_offset_in_block=((sector_num+i)%s->sectors_per_block),
block_num=(sector_num+i)/s->sectors_per_block;
if(cloop_read_block(bs, block_num) != 0)
return -1;
}
memcpy(buf + i * 512,
s->uncompressed_block + sector_offset_in_block * 512, 512);
memcpy(buf+i*512,s->uncompressed_block+sector_offset_in_block*512,512);
}
return 0;
}
static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVCloopState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = cloop_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void cloop_close(BlockDriverState *bs)
{
BDRVCloopState *s = bs->opaque;
g_free(s->offsets);
g_free(s->compressed_block);
g_free(s->uncompressed_block);
if(s->n_blocks>0)
free(s->offsets);
free(s->compressed_block);
free(s->uncompressed_block);
inflateEnd(&s->zstream);
}
@@ -271,7 +159,7 @@ static BlockDriver bdrv_cloop = {
.instance_size = sizeof(BDRVCloopState),
.bdrv_probe = cloop_probe,
.bdrv_open = cloop_open,
.bdrv_read = cloop_co_read,
.bdrv_read = cloop_read,
.bdrv_close = cloop_close,
};


@@ -1,273 +0,0 @@
/*
* Live block commit
*
* Copyright Red Hat, Inc. 2012
*
* Authors:
* Jeff Cody <jcody@redhat.com>
* Based on stream.c by Stefan Hajnoczi
*
* This work is licensed under the terms of the GNU LGPL, version 2 or later.
* See the COPYING.LIB file in the top-level directory.
*
*/
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qemu/ratelimit.h"
enum {
/*
* Size of data buffer for populating the image file. This should be large
* enough to process multiple clusters in a single call, so that populating
* contiguous regions of the image is efficient.
*/
COMMIT_BUFFER_SIZE = 512 * 1024, /* in bytes */
};
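For a feel of what that buffer size means per iteration of the commit loop below, here is a small standalone calculation (the 10 GiB image size is only an example):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
/* At most COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE sectors are requested per
 * iteration of commit_run(); a larger top image simply needs more passes. */
int main(void)
{
    const uint64_t buffer_bytes = 512 * 1024;   /* COMMIT_BUFFER_SIZE */
    const uint64_t sector_size  = 512;          /* BDRV_SECTOR_SIZE   */
    const uint64_t top_bytes    = 10ULL << 30;  /* example: 10 GiB    */
    printf("%" PRIu64 " sectors per pass; a 10 GiB top image needs %" PRIu64
           " full-buffer passes\n",
           buffer_bytes / sector_size,
           (top_bytes + buffer_bytes - 1) / buffer_bytes);
    return 0;
}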
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CommitBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *active;
BlockDriverState *top;
BlockDriverState *base;
BlockdevOnError on_error;
int base_flags;
int orig_overlay_flags;
char *backing_file_str;
} CommitBlockJob;
static int coroutine_fn commit_populate(BlockDriverState *bs,
BlockDriverState *base,
int64_t sector_num, int nb_sectors,
void *buf)
{
int ret = 0;
ret = bdrv_read(bs, sector_num, buf, nb_sectors);
if (ret) {
return ret;
}
ret = bdrv_write(base, sector_num, buf, nb_sectors);
if (ret) {
return ret;
}
return 0;
}
typedef struct {
int ret;
} CommitCompleteData;
static void commit_complete(BlockJob *job, void *opaque)
{
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
CommitCompleteData *data = opaque;
BlockDriverState *active = s->active;
BlockDriverState *top = s->top;
BlockDriverState *base = s->base;
BlockDriverState *overlay_bs;
int ret = data->ret;
if (!block_job_is_cancelled(&s->common) && ret == 0) {
/* success */
ret = bdrv_drop_intermediate(active, top, base, s->backing_file_str);
}
/* restore base open flags here if appropriate (e.g., change the base back
* to r/o). These reopens do not need to be atomic, since we won't abort
* even on failure here */
if (s->base_flags != bdrv_get_flags(base)) {
bdrv_reopen(base, s->base_flags, NULL);
}
overlay_bs = bdrv_find_overlay(active, top);
if (overlay_bs && s->orig_overlay_flags != bdrv_get_flags(overlay_bs)) {
bdrv_reopen(overlay_bs, s->orig_overlay_flags, NULL);
}
g_free(s->backing_file_str);
block_job_completed(&s->common, ret);
g_free(data);
}
static void coroutine_fn commit_run(void *opaque)
{
CommitBlockJob *s = opaque;
CommitCompleteData *data;
BlockDriverState *top = s->top;
BlockDriverState *base = s->base;
int64_t sector_num, end;
int ret = 0;
int n = 0;
void *buf = NULL;
int bytes_written = 0;
int64_t base_len;
ret = s->common.len = bdrv_getlength(top);
if (s->common.len < 0) {
goto out;
}
ret = base_len = bdrv_getlength(base);
if (base_len < 0) {
goto out;
}
if (base_len < s->common.len) {
ret = bdrv_truncate(base, s->common.len);
if (ret) {
goto out;
}
}
end = s->common.len >> BDRV_SECTOR_BITS;
buf = qemu_blockalign(top, COMMIT_BUFFER_SIZE);
for (sector_num = 0; sector_num < end; sector_num += n) {
uint64_t delay_ns = 0;
bool copy;
wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that bdrv_drain_all() returns.
*/
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
/* Copy if allocated above the base */
ret = bdrv_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
copy = (ret == 1);
trace_commit_one_iteration(s, sector_num, n, ret);
if (copy) {
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, n);
if (delay_ns > 0) {
goto wait;
}
}
ret = commit_populate(top, base, sector_num, n, buf);
bytes_written += n * BDRV_SECTOR_SIZE;
}
if (ret < 0) {
if (s->on_error == BLOCKDEV_ON_ERROR_STOP ||
s->on_error == BLOCKDEV_ON_ERROR_REPORT||
(s->on_error == BLOCKDEV_ON_ERROR_ENOSPC && ret == -ENOSPC)) {
goto out;
} else {
n = 0;
continue;
}
}
/* Publish progress */
s->common.offset += n * BDRV_SECTOR_SIZE;
}
ret = 0;
out:
qemu_vfree(buf);
data = g_malloc(sizeof(*data));
data->ret = ret;
block_job_defer_to_main_loop(&s->common, commit_complete, data);
}
static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
CommitBlockJob *s = container_of(job, CommitBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static const BlockJobDriver commit_job_driver = {
.instance_size = sizeof(CommitBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.set_speed = commit_set_speed,
};
void commit_start(BlockDriverState *bs, BlockDriverState *base,
BlockDriverState *top, int64_t speed,
BlockdevOnError on_error, BlockCompletionFunc *cb,
void *opaque, const char *backing_file_str, Error **errp)
{
CommitBlockJob *s;
BlockReopenQueue *reopen_queue = NULL;
int orig_overlay_flags;
int orig_base_flags;
BlockDriverState *overlay_bs;
Error *local_err = NULL;
if ((on_error == BLOCKDEV_ON_ERROR_STOP ||
on_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, "Invalid parameter combination");
return;
}
assert(top != bs);
if (top == base) {
error_setg(errp, "Invalid files for merge: top and base are the same");
return;
}
overlay_bs = bdrv_find_overlay(bs, top);
if (overlay_bs == NULL) {
error_setg(errp, "Could not find overlay image for %s:", top->filename);
return;
}
orig_base_flags = bdrv_get_flags(base);
orig_overlay_flags = bdrv_get_flags(overlay_bs);
/* convert base & overlay_bs to r/w, if necessary */
if (!(orig_base_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, base,
orig_base_flags | BDRV_O_RDWR);
}
if (!(orig_overlay_flags & BDRV_O_RDWR)) {
reopen_queue = bdrv_reopen_queue(reopen_queue, overlay_bs,
orig_overlay_flags | BDRV_O_RDWR);
}
if (reopen_queue) {
bdrv_reopen_multiple(reopen_queue, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
return;
}
}
s = block_job_create(&commit_job_driver, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
s->base = base;
s->top = top;
s->active = bs;
s->base_flags = orig_base_flags;
s->orig_overlay_flags = orig_overlay_flags;
s->backing_file_str = g_strdup(backing_file_str);
s->on_error = on_error;
s->common.co = qemu_coroutine_create(commit_run);
trace_commit_start(bs, base, top, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}

block/cow.c Normal file

@@ -0,0 +1,324 @@
/*
* Block driver for the COW format
*
* Copyright (c) 2004 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block_int.h"
#include "module.h"
/**************************************************************/
/* COW block driver using file system holes */
/* User Mode Linux compatible COW file */
#define COW_MAGIC 0x4f4f4f4d /* MOOO */
#define COW_VERSION 2
struct cow_header_v2 {
uint32_t magic;
uint32_t version;
char backing_file[1024];
int32_t mtime;
uint64_t size;
uint32_t sectorsize;
};
typedef struct BDRVCowState {
int64_t cow_sectors_offset;
} BDRVCowState;
static int cow_probe(const uint8_t *buf, int buf_size, const char *filename)
{
const struct cow_header_v2 *cow_header = (const void *)buf;
if (buf_size >= sizeof(struct cow_header_v2) &&
be32_to_cpu(cow_header->magic) == COW_MAGIC &&
be32_to_cpu(cow_header->version) == COW_VERSION)
return 100;
else
return 0;
}
static int cow_open(BlockDriverState *bs, int flags)
{
BDRVCowState *s = bs->opaque;
struct cow_header_v2 cow_header;
int bitmap_size;
int64_t size;
/* see if it is a cow image */
if (bdrv_pread(bs->file, 0, &cow_header, sizeof(cow_header)) !=
sizeof(cow_header)) {
goto fail;
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC ||
be32_to_cpu(cow_header.version) != COW_VERSION) {
goto fail;
}
/* cow image found */
size = be64_to_cpu(cow_header.size);
bs->total_sectors = size / 512;
pstrcpy(bs->backing_file, sizeof(bs->backing_file),
cow_header.backing_file);
bitmap_size = ((bs->total_sectors + 7) >> 3) + sizeof(cow_header);
s->cow_sectors_offset = (bitmap_size + 511) & ~511;
return 0;
fail:
return -1;
}
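The header/bitmap/data layout computed at the end of cow_open() can be reproduced standalone; the 1 GiB image size below is an arbitrary example:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
/* cow_header_v2, then one bit of bitmap per guest sector, then the COW
 * sector data rounded up to the next 512-byte boundary. */
struct cow_header_v2 {
    uint32_t magic;
    uint32_t version;
    char backing_file[1024];
    int32_t mtime;
    uint64_t size;
    uint32_t sectorsize;
};
int main(void)
{
    uint64_t total_sectors = (1ULL << 30) / 512;    /* 1 GiB image */
    uint64_t bitmap_size   = ((total_sectors + 7) >> 3) +
                             sizeof(struct cow_header_v2);
    uint64_t data_offset   = (bitmap_size + 511) & ~511ULL;
    printf("header+bitmap take %" PRIu64 " bytes, COW data starts at %" PRIu64 "\n",
           bitmap_size, data_offset);
    return 0;
}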
/*
* XXX(hch): right now these functions are extremely inefficient.
* We should just read the whole bitmap we'll need in one go instead.
*/
static inline int cow_set_bit(BlockDriverState *bs, int64_t bitnum)
{
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
bitmap |= (1 << (bitnum % 8));
ret = bdrv_pwrite_sync(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return 0;
}
static inline int is_bit_set(BlockDriverState *bs, int64_t bitnum)
{
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return !!(bitmap & (1 << (bitnum % 8)));
}
/* Return true if first block has been changed (i.e. current version is
* in COW file). Set the number of continuous blocks for which that
* is true. */
static int cow_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *num_same)
{
int changed;
if (nb_sectors == 0) {
*num_same = nb_sectors;
return 0;
}
changed = is_bit_set(bs, sector_num);
if (changed < 0) {
return 0; /* XXX: how to return I/O errors? */
}
for (*num_same = 1; *num_same < nb_sectors; (*num_same)++) {
if (is_bit_set(bs, sector_num + *num_same) != changed)
break;
}
return changed;
}
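The run-length contract described in the comment above (the first sector's state plus the length of the run sharing it) is easy to see on a toy in-memory bitmap; this sketch is independent of the driver and uses a made-up bitmap:
#include <stdio.h>
/* Toy version of the scan in cow_is_allocated(): report the state of the
 * first sector and how many consecutive sectors share that state. */
static const int bitmap[] = { 1, 1, 1, 0, 0, 1 };
static int is_allocated_demo(int sector, int nb_sectors, int *num_same)
{
    int changed = bitmap[sector];
    for (*num_same = 1; *num_same < nb_sectors; (*num_same)++) {
        if (bitmap[sector + *num_same] != changed) {
            break;
        }
    }
    return changed;
}
int main(void)
{
    int n;
    int changed = is_allocated_demo(0, 6, &n);
    printf("first sector is %s, run length %d\n",
           changed ? "in the COW file" : "in the backing file", n);
    return 0;
}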
static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
int error = 0;
int i;
for (i = 0; i < nb_sectors; i++) {
error = cow_set_bit(bs, sector_num + i);
if (error) {
break;
}
}
return error;
}
static int cow_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVCowState *s = bs->opaque;
int ret, n;
while (nb_sectors > 0) {
if (cow_is_allocated(bs, sector_num, nb_sectors, &n)) {
ret = bdrv_pread(bs->file,
s->cow_sectors_offset + sector_num * 512,
buf, n * 512);
if (ret != n * 512)
return -1;
} else {
if (bs->backing_hd) {
/* read from the base image */
ret = bdrv_read(bs->backing_hd, sector_num, buf, n);
if (ret < 0)
return -1;
} else {
memset(buf, 0, n * 512);
}
}
nb_sectors -= n;
sector_num += n;
buf += n * 512;
}
return 0;
}
static int cow_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BDRVCowState *s = bs->opaque;
int ret;
ret = bdrv_pwrite(bs->file, s->cow_sectors_offset + sector_num * 512,
buf, nb_sectors * 512);
if (ret != nb_sectors * 512)
return -1;
return cow_update_bitmap(bs, sector_num, nb_sectors);
}
static void cow_close(BlockDriverState *bs)
{
}
static int cow_create(const char *filename, QEMUOptionParameter *options)
{
int fd, cow_fd;
struct cow_header_v2 cow_header;
struct stat st;
int64_t image_sectors = 0;
const char *image_filename = NULL;
int ret;
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
image_sectors = options->value.n / 512;
} else if (!strcmp(options->name, BLOCK_OPT_BACKING_FILE)) {
image_filename = options->value.s;
}
options++;
}
cow_fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY,
0644);
if (cow_fd < 0)
return -errno;
memset(&cow_header, 0, sizeof(cow_header));
cow_header.magic = cpu_to_be32(COW_MAGIC);
cow_header.version = cpu_to_be32(COW_VERSION);
if (image_filename) {
/* Note: if no file, we put a dummy mtime */
cow_header.mtime = cpu_to_be32(0);
fd = open(image_filename, O_RDONLY | O_BINARY);
if (fd < 0) {
close(cow_fd);
goto mtime_fail;
}
if (fstat(fd, &st) != 0) {
close(fd);
goto mtime_fail;
}
close(fd);
cow_header.mtime = cpu_to_be32(st.st_mtime);
mtime_fail:
pstrcpy(cow_header.backing_file, sizeof(cow_header.backing_file),
image_filename);
}
cow_header.sectorsize = cpu_to_be32(512);
cow_header.size = cpu_to_be64(image_sectors * 512);
ret = qemu_write_full(cow_fd, &cow_header, sizeof(cow_header));
if (ret != sizeof(cow_header)) {
ret = -errno;
goto exit;
}
/* resize to include at least all the bitmap */
ret = ftruncate(cow_fd, sizeof(cow_header) + ((image_sectors + 7) >> 3));
if (ret) {
ret = -errno;
goto exit;
}
exit:
close(cow_fd);
return ret;
}
static void cow_flush(BlockDriverState *bs)
{
bdrv_flush(bs->file);
}
static QEMUOptionParameter cow_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_BACKING_FILE,
.type = OPT_STRING,
.help = "File name of a base image"
},
{ NULL }
};
static BlockDriver bdrv_cow = {
.format_name = "cow",
.instance_size = sizeof(BDRVCowState),
.bdrv_probe = cow_probe,
.bdrv_open = cow_open,
.bdrv_read = cow_read,
.bdrv_write = cow_write,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_flush = cow_flush,
.bdrv_is_allocated = cow_is_allocated,
.create_options = cow_create_options,
};
static void bdrv_cow_init(void)
{
bdrv_register(&bdrv_cow);
}
block_init(bdrv_cow_init);


@@ -22,11 +22,10 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qapi/qmp/qbool.h"
#include "block_int.h"
#include <curl/curl.h>
// #define DEBUG_CURL
// #define DEBUG
// #define DEBUG_VERBOSE
#ifdef DEBUG_CURL
@@ -35,57 +34,20 @@
#define DPRINTF(fmt, ...) do { } while (0)
#endif
#if LIBCURL_VERSION_NUM >= 0x071000
/* The multi interface timer callback was introduced in 7.16.0 */
#define NEED_CURL_TIMER_CALLBACK
#define HAVE_SOCKET_ACTION
#endif
#ifndef HAVE_SOCKET_ACTION
/* If curl_multi_socket_action isn't available, define it statically here in
* terms of curl_multi_socket. Note that ev_bitmask will be ignored, which is
* less efficient but still safe. */
static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
curl_socket_t sockfd,
int ev_bitmask,
int *running_handles)
{
return curl_multi_socket(multi_handle, sockfd, running_handles);
}
#define curl_multi_socket_action __curl_multi_socket_action
#endif
#define PROTOCOLS (CURLPROTO_HTTP | CURLPROTO_HTTPS | \
CURLPROTO_FTP | CURLPROTO_FTPS | \
CURLPROTO_TFTP)
#define CURL_NUM_STATES 8
#define CURL_NUM_ACB 8
#define SECTOR_SIZE 512
#define READ_AHEAD_DEFAULT (256 * 1024)
#define CURL_TIMEOUT_DEFAULT 5
#define CURL_TIMEOUT_MAX 10000
#define READ_AHEAD_SIZE (256 * 1024)
#define FIND_RET_NONE 0
#define FIND_RET_OK 1
#define FIND_RET_WAIT 2
#define CURL_BLOCK_OPT_URL "url"
#define CURL_BLOCK_OPT_READAHEAD "readahead"
#define CURL_BLOCK_OPT_SSLVERIFY "sslverify"
#define CURL_BLOCK_OPT_TIMEOUT "timeout"
#define CURL_BLOCK_OPT_COOKIE "cookie"
struct BDRVCURLState;
typedef struct CURLAIOCB {
BlockAIOCB common;
QEMUBH *bh;
BlockDriverAIOCB common;
QEMUIOVector *qiov;
int64_t sector_num;
int nb_sectors;
size_t start;
size_t end;
} CURLAIOCB;
@@ -95,7 +57,6 @@ typedef struct CURLState
struct BDRVCURLState *s;
CURLAIOCB *acb[CURL_NUM_ACB];
CURL *curl;
curl_socket_t sock_fd;
char *orig_buf;
size_t buf_start;
size_t buf_off;
@@ -107,78 +68,46 @@ typedef struct CURLState
typedef struct BDRVCURLState {
CURLM *multi;
QEMUTimer timer;
size_t len;
CURLState states[CURL_NUM_STATES];
char *url;
size_t readahead_size;
bool sslverify;
uint64_t timeout;
char *cookie;
bool accept_range;
AioContext *aio_context;
} BDRVCURLState;
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static void curl_multi_read(void *arg);
#ifdef NEED_CURL_TIMER_CALLBACK
static int curl_timer_cb(CURLM *multi, long timeout_ms, void *opaque)
{
BDRVCURLState *s = opaque;
DPRINTF("CURL: timer callback timeout_ms %ld\n", timeout_ms);
if (timeout_ms == -1) {
timer_del(&s->timer);
} else {
int64_t timeout_ns = (int64_t)timeout_ms * 1000 * 1000;
timer_mod(&s->timer,
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timeout_ns);
}
return 0;
}
#endif
static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
void *userp, void *sp)
void *s, void *sp)
{
BDRVCURLState *s;
CURLState *state = NULL;
curl_easy_getinfo(curl, CURLINFO_PRIVATE, (char **)&state);
state->sock_fd = fd;
s = state->s;
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
aio_set_fd_handler(s->aio_context, fd, curl_multi_read,
NULL, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, NULL, s);
break;
case CURL_POLL_OUT:
aio_set_fd_handler(s->aio_context, fd, NULL, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, NULL, s);
break;
case CURL_POLL_INOUT:
aio_set_fd_handler(s->aio_context, fd, curl_multi_read,
curl_multi_do, state);
qemu_aio_set_fd_handler(fd, curl_multi_do,
curl_multi_do, NULL, NULL, s);
break;
case CURL_POLL_REMOVE:
aio_set_fd_handler(s->aio_context, fd, NULL, NULL, NULL);
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL, NULL);
break;
}
return 0;
}
static size_t curl_header_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
static size_t curl_size_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
{
BDRVCURLState *s = opaque;
CURLState *s = ((CURLState*)opaque);
size_t realsize = size * nmemb;
const char *accept_line = "Accept-Ranges: bytes";
size_t fsize;
if (realsize >= strlen(accept_line)
&& strncmp((char *)ptr, accept_line, strlen(accept_line)) == 0) {
s->accept_range = true;
if(sscanf(ptr, "Content-Length: %zd", &fsize) == 1) {
s->s->len = fsize;
}
return realsize;
@@ -193,13 +122,8 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
DPRINTF("CURL: Just reading %zd bytes\n", realsize);
if (!s || !s->orig_buf)
return 0;
goto read_end;
if (s->buf_off >= s->buf_len) {
/* buffer full, read nothing */
return 0;
}
realsize = MIN(realsize, s->buf_len - s->buf_off);
memcpy(s->orig_buf + s->buf_off, ptr, realsize);
s->buf_off += realsize;
@@ -210,14 +134,15 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
continue;
if ((s->buf_off >= acb->end)) {
qemu_iovec_from_buf(acb->qiov, 0, s->orig_buf + acb->start,
qemu_iovec_from_buffer(acb->qiov, s->orig_buf + acb->start,
acb->end - acb->start);
acb->common.cb(acb->common.opaque, 0);
qemu_aio_unref(acb);
qemu_aio_release(acb);
s->acb[i] = NULL;
}
}
read_end:
return realsize;
}
@@ -245,15 +170,14 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
{
char *buf = state->orig_buf + (start - state->buf_start);
qemu_iovec_from_buf(acb->qiov, 0, buf, len);
qemu_iovec_from_buffer(acb->qiov, buf, len);
acb->common.cb(acb->common.opaque, 0);
return FIND_RET_OK;
}
// Wait for unfinished chunks
if (state->in_use &&
(start >= state->buf_start) &&
if ((start >= state->buf_start) &&
(start <= buf_fend) &&
(end >= state->buf_start) &&
(end <= buf_fend))
@@ -275,90 +199,47 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
return FIND_RET_NONE;
}
static void curl_multi_check_completion(BDRVCURLState *s)
static void curl_multi_do(void *arg)
{
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
int r;
int msgs_in_queue;
if (!s->multi)
return;
do {
r = curl_multi_socket_all(s->multi, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
/* Try to find done transfers, so we can free the easy
* handle again. */
for (;;) {
do {
CURLMsg *msg;
msg = curl_multi_info_read(s->multi, &msgs_in_queue);
/* Quit when there are no more completions */
if (!msg)
break;
if (msg->msg == CURLMSG_NONE)
break;
if (msg->msg == CURLMSG_DONE) {
switch (msg->msg) {
case CURLMSG_DONE:
{
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
(char **)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_unref(acb);
state->acb[i] = NULL;
}
}
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char**)&state);
curl_clean_state(state);
break;
}
default:
msgs_in_queue = 0;
break;
}
} while(msgs_in_queue);
}
static void curl_multi_do(void *arg)
{
CURLState *s = (CURLState *)arg;
int running;
int r;
if (!s->s->multi) {
return;
}
do {
r = curl_multi_socket_action(s->s->multi, s->sock_fd, 0, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
}
static void curl_multi_read(void *arg)
{
CURLState *s = (CURLState *)arg;
curl_multi_do(arg);
curl_multi_check_completion(s->s);
}
static void curl_multi_timeout_do(void *arg)
{
#ifdef NEED_CURL_TIMER_CALLBACK
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
if (!s->multi) {
return;
}
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
curl_multi_check_completion(s);
#else
abort();
#endif
}
static CURLState *curl_init_state(BlockDriverState *bs, BDRVCURLState *s)
static CURLState *curl_init_state(BDRVCURLState *s)
{
CURLState *state = NULL;
int i, j;
@@ -376,47 +257,32 @@ static CURLState *curl_init_state(BlockDriverState *bs, BDRVCURLState *s)
break;
}
if (!state) {
aio_poll(bdrv_get_aio_context(bs), true);
usleep(100);
curl_multi_do(s);
}
} while(!state);
if (!state->curl) {
if (state->curl)
goto has_curl;
state->curl = curl_easy_init();
if (!state->curl) {
if (!state->curl)
return NULL;
}
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
(long) s->sslverify);
if (s->cookie) {
curl_easy_setopt(state->curl, CURLOPT_COOKIE, s->cookie);
}
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, (long)s->timeout);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION,
(void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
/* Restrict supported protocols to avoid security issues in the more
* obscure protocols. For example, do not allow POP3/SMTP/IMAP; see
* CVE-2013-0249.
*
* Restricting protocols is only supported from 7.19.4 upwards.
*/
#if LIBCURL_VERSION_NUM >= 0x071304
curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
#endif
#ifdef DEBUG_VERBOSE
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
#endif
}
has_curl:
state->s = s;
@@ -430,287 +296,194 @@ static void curl_clean_state(CURLState *s)
s->in_use = 0;
}
static void curl_parse_filename(const char *filename, QDict *options,
Error **errp)
{
qdict_put(options, CURL_BLOCK_OPT_URL, qstring_from_str(filename));
}
static void curl_detach_aio_context(BlockDriverState *bs)
{
BDRVCURLState *s = bs->opaque;
int i;
for (i = 0; i < CURL_NUM_STATES; i++) {
if (s->states[i].in_use) {
curl_clean_state(&s->states[i]);
}
if (s->states[i].curl) {
curl_easy_cleanup(s->states[i].curl);
s->states[i].curl = NULL;
}
g_free(s->states[i].orig_buf);
s->states[i].orig_buf = NULL;
}
if (s->multi) {
curl_multi_cleanup(s->multi);
s->multi = NULL;
}
timer_del(&s->timer);
}
static void curl_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVCURLState *s = bs->opaque;
aio_timer_init(new_context, &s->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
curl_multi_timeout_do, s);
assert(!s->multi);
s->multi = curl_multi_init();
s->aio_context = new_context;
curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
#ifdef NEED_CURL_TIMER_CALLBACK
curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, curl_timer_cb);
#endif
}
static QemuOptsList runtime_opts = {
.name = "curl",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = CURL_BLOCK_OPT_URL,
.type = QEMU_OPT_STRING,
.help = "URL to open",
},
{
.name = CURL_BLOCK_OPT_READAHEAD,
.type = QEMU_OPT_SIZE,
.help = "Readahead size",
},
{
.name = CURL_BLOCK_OPT_SSLVERIFY,
.type = QEMU_OPT_BOOL,
.help = "Verify SSL certificate"
},
{
.name = CURL_BLOCK_OPT_TIMEOUT,
.type = QEMU_OPT_NUMBER,
.help = "Curl timeout"
},
{
.name = CURL_BLOCK_OPT_COOKIE,
.type = QEMU_OPT_STRING,
.help = "Pass the cookie or list of cookies with each request"
},
{ /* end of list */ }
},
};
static int curl_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int curl_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVCURLState *s = bs->opaque;
CURLState *state = NULL;
QemuOpts *opts;
Error *local_err = NULL;
const char *file;
const char *cookie;
double d;
#define RA_OPTSTR ":readahead="
char *file;
char *ra;
const char *ra_val;
int parse_state = 0;
static int inited = 0;
if (flags & BDRV_O_RDWR) {
error_setg(errp, "curl block device does not support writes");
return -EROFS;
file = qemu_strdup(filename);
s->readahead_size = READ_AHEAD_SIZE;
/* Parse a trailing ":readahead=#:" param, if present. */
ra = file + strlen(file) - 1;
while (ra >= file) {
if (parse_state == 0) {
if (*ra == ':')
parse_state++;
else
break;
} else if (parse_state == 1) {
if (*ra > '9' || *ra < '0') {
char *opt_start = ra - strlen(RA_OPTSTR) + 1;
if (opt_start > file &&
strncmp(opt_start, RA_OPTSTR, strlen(RA_OPTSTR)) == 0) {
ra_val = ra + 1;
ra -= strlen(RA_OPTSTR) - 1;
*ra = '\0';
s->readahead_size = atoi(ra_val);
break;
} else {
break;
}
}
}
ra--;
}
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto out_noclean;
}
s->readahead_size = qemu_opt_get_size(opts, CURL_BLOCK_OPT_READAHEAD,
READ_AHEAD_DEFAULT);
if ((s->readahead_size & 0x1ff) != 0) {
error_setg(errp, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512",
fprintf(stderr, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512\n",
s->readahead_size);
goto out_noclean;
}
s->timeout = qemu_opt_get_number(opts, CURL_BLOCK_OPT_TIMEOUT,
CURL_TIMEOUT_DEFAULT);
if (s->timeout > CURL_TIMEOUT_MAX) {
error_setg(errp, "timeout parameter is too large or negative");
goto out_noclean;
}
s->sslverify = qemu_opt_get_bool(opts, CURL_BLOCK_OPT_SSLVERIFY, true);
cookie = qemu_opt_get(opts, CURL_BLOCK_OPT_COOKIE);
s->cookie = g_strdup(cookie);
file = qemu_opt_get(opts, CURL_BLOCK_OPT_URL);
if (file == NULL) {
error_setg(errp, "curl block driver requires an 'url' option");
goto out_noclean;
}
if (!inited) {
curl_global_init(CURL_GLOBAL_ALL);
inited = 1;
}
DPRINTF("CURL: Opening %s\n", file);
s->aio_context = bdrv_get_aio_context(bs);
s->url = g_strdup(file);
state = curl_init_state(bs, s);
s->url = file;
state = curl_init_state(s);
if (!state)
goto out_noclean;
// Get file size
s->accept_range = false;
curl_easy_setopt(state->curl, CURLOPT_NOBODY, 1);
curl_easy_setopt(state->curl, CURLOPT_HEADERFUNCTION,
curl_header_cb);
curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_size_cb);
if (curl_easy_perform(state->curl))
goto out;
curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_NOBODY, 0);
if (d)
s->len = (size_t)d;
else if(!s->len)
goto out;
if ((!strncasecmp(s->url, "http://", strlen("http://"))
|| !strncasecmp(s->url, "https://", strlen("https://")))
&& !s->accept_range) {
pstrcpy(state->errmsg, CURL_ERROR_SIZE,
"Server does not support 'range' (byte ranges).");
goto out;
}
DPRINTF("CURL: Size = %zd\n", s->len);
curl_clean_state(state);
curl_easy_cleanup(state->curl);
state->curl = NULL;
curl_attach_aio_context(bs, bdrv_get_aio_context(bs));
// Now we know the file exists and its size, so let's
// initialize the multi interface!
s->multi = curl_multi_init();
curl_multi_setopt( s->multi, CURLMOPT_SOCKETDATA, s);
curl_multi_setopt( s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb );
curl_multi_do(s);
qemu_opts_del(opts);
return 0;
out:
error_setg(errp, "CURL: Error opening file: %s", state->errmsg);
fprintf(stderr, "CURL: Error opening file: %s\n", state->errmsg);
curl_easy_cleanup(state->curl);
state->curl = NULL;
out_noclean:
g_free(s->cookie);
g_free(s->url);
qemu_opts_del(opts);
qemu_free(file);
return -EINVAL;
}
static const AIOCBInfo curl_aiocb_info = {
static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
{
// Do we have to implement canceling? Seems to work without...
}
static AIOPool curl_aio_pool = {
.aiocb_size = sizeof(CURLAIOCB),
.cancel = curl_aio_cancel,
};
static void curl_readv_bh_cb(void *p)
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
CURLState *state;
int running;
CURLAIOCB *acb = p;
BDRVCURLState *s = acb->common.bs->opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
size_t start = acb->sector_num * SECTOR_SIZE;
BDRVCURLState *s = bs->opaque;
CURLAIOCB *acb;
size_t start = sector_num * SECTOR_SIZE;
size_t end;
CURLState *state;
acb = qemu_aio_get(&curl_aio_pool, bs, cb, opaque);
if (!acb)
return NULL;
acb->qiov = qiov;
// In case we have the requested data already (e.g. read-ahead),
// we can just call the callback and be done.
switch (curl_find_buf(s, start, acb->nb_sectors * SECTOR_SIZE, acb)) {
switch (curl_find_buf(s, start, nb_sectors * SECTOR_SIZE, acb)) {
case FIND_RET_OK:
qemu_aio_unref(acb);
qemu_aio_release(acb);
// fall through
case FIND_RET_WAIT:
return;
return &acb->common;
default:
break;
}
// No cache found, so let's start a new request
state = curl_init_state(acb->common.bs, s);
if (!state) {
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_unref(acb);
return;
}
state = curl_init_state(s);
if (!state)
return NULL;
acb->start = 0;
acb->end = (acb->nb_sectors * SECTOR_SIZE);
acb->end = (nb_sectors * SECTOR_SIZE);
state->buf_off = 0;
g_free(state->orig_buf);
if (state->orig_buf)
qemu_free(state->orig_buf);
state->buf_start = start;
state->buf_len = acb->end + s->readahead_size;
end = MIN(start + state->buf_len, s->len) - 1;
state->orig_buf = g_try_malloc(state->buf_len);
if (state->buf_len && state->orig_buf == NULL) {
curl_clean_state(state);
acb->common.cb(acb->common.opaque, -ENOMEM);
qemu_aio_unref(acb);
return;
}
state->orig_buf = qemu_malloc(state->buf_len);
state->acb[0] = acb;
snprintf(state->range, 127, "%zd-%zd", start, end);
DPRINTF("CURL (AIO): Reading %d at %zd (%s)\n",
(acb->nb_sectors * SECTOR_SIZE), start, state->range);
(nb_sectors * SECTOR_SIZE), start, state->range);
curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
curl_multi_add_handle(s->multi, state->curl);
curl_multi_do(s);
/* Tell curl it needs to kick things off */
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
}
static BlockAIOCB *curl_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque)
{
CURLAIOCB *acb;
acb = qemu_aio_get(&curl_aiocb_info, bs, cb, opaque);
acb->qiov = qiov;
acb->sector_num = sector_num;
acb->nb_sectors = nb_sectors;
acb->bh = aio_bh_new(bdrv_get_aio_context(bs), curl_readv_bh_cb, acb);
qemu_bh_schedule(acb->bh);
return &acb->common;
}
static void curl_close(BlockDriverState *bs)
{
BDRVCURLState *s = bs->opaque;
int i;
DPRINTF("CURL: Close\n");
curl_detach_aio_context(bs);
g_free(s->cookie);
g_free(s->url);
for (i=0; i<CURL_NUM_STATES; i++) {
if (s->states[i].in_use)
curl_clean_state(&s->states[i]);
if (s->states[i].curl) {
curl_easy_cleanup(s->states[i].curl);
s->states[i].curl = NULL;
}
if (s->states[i].orig_buf) {
qemu_free(s->states[i].orig_buf);
s->states[i].orig_buf = NULL;
}
}
if (s->multi)
curl_multi_cleanup(s->multi);
if (s->url)
free(s->url);
}
static int64_t curl_getlength(BlockDriverState *bs)
@@ -724,15 +497,11 @@ static BlockDriver bdrv_http = {
.protocol_name = "http",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
};
static BlockDriver bdrv_https = {
@@ -740,15 +509,11 @@ static BlockDriver bdrv_https = {
.protocol_name = "https",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
};
static BlockDriver bdrv_ftp = {
@@ -756,15 +521,11 @@ static BlockDriver bdrv_ftp = {
.protocol_name = "ftp",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
};
static BlockDriver bdrv_ftps = {
@@ -772,15 +533,11 @@ static BlockDriver bdrv_ftps = {
.protocol_name = "ftps",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
};
static BlockDriver bdrv_tftp = {
@@ -788,15 +545,11 @@ static BlockDriver bdrv_tftp = {
.protocol_name = "tftp",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_detach_aio_context = curl_detach_aio_context,
.bdrv_attach_aio_context = curl_attach_aio_context,
};
static void curl_block_init(void)


@@ -22,21 +22,12 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/bswap.h"
#include "qemu/module.h"
#include "block_int.h"
#include "bswap.h"
#include "module.h"
#include <zlib.h>
enum {
/* Limit chunk sizes to prevent unreasonable amounts of memory being used
* or truncating when converting to 32-bit types
*/
DMG_LENGTHS_MAX = 64 * 1024 * 1024, /* 64 MB */
DMG_SECTORCOUNTS_MAX = DMG_LENGTHS_MAX / 512,
};
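The worst-case memory these limits translate into (for the two per-chunk buffers allocated in dmg_open()) works out as follows; a standalone sketch of the arithmetic:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
/* With compressed chunk lengths capped at 64 MiB and sector counts at
 * 64 MiB / 512, both chunk buffers stay at roughly 64 MiB each at worst. */
int main(void)
{
    const uint64_t lengths_max      = 64 * 1024 * 1024;
    const uint64_t sectorcounts_max = lengths_max / 512;
    printf("max %" PRIu64 " sectors per chunk\n", sectorcounts_max);
    printf("buffers: %" PRIu64 " MiB compressed, %" PRIu64 " MiB uncompressed\n",
           lengths_max >> 20, (sectorcounts_max * 512) >> 20);
    return 0;
}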
typedef struct BDRVDMGState {
CoMutex lock;
/* each chunk contains a certain number of sectors,
* offsets[i] is the offset in the .dmg file,
* lengths[i] is the length of the compressed chunk,
@@ -59,87 +50,35 @@ typedef struct BDRVDMGState {
static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
{
int len;
if (!filename) {
return 0;
}
len = strlen(filename);
if (len > 4 && !strcmp(filename + len - 4, ".dmg")) {
int len=strlen(filename);
if(len>4 && !strcmp(filename+len-4,".dmg"))
return 2;
}
return 0;
}
static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
static off_t read_off(BlockDriverState *bs, int64_t offset)
{
uint64_t buffer;
int ret;
ret = bdrv_pread(bs->file, offset, &buffer, 8);
if (ret < 0) {
return ret;
}
*result = be64_to_cpu(buffer);
if (bdrv_pread(bs->file, offset, &buffer, 8) < 8)
return 0;
return be64_to_cpu(buffer);
}
static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
static off_t read_uint32(BlockDriverState *bs, int64_t offset)
{
uint32_t buffer;
int ret;
ret = bdrv_pread(bs->file, offset, &buffer, 4);
if (ret < 0) {
return ret;
}
*result = be32_to_cpu(buffer);
if (bdrv_pread(bs->file, offset, &buffer, 4) < 4)
return 0;
return be32_to_cpu(buffer);
}
/* Increase max chunk sizes, if necessary. This function is used to calculate
* the buffer sizes needed for compressed/uncompressed chunk I/O.
*/
static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
uint32_t *max_compressed_size,
uint32_t *max_sectors_per_chunk)
{
uint32_t compressed_size = 0;
uint32_t uncompressed_sectors = 0;
switch (s->types[chunk]) {
case 0x80000005: /* zlib compressed */
compressed_size = s->lengths[chunk];
uncompressed_sectors = s->sectorcounts[chunk];
break;
case 1: /* copy */
uncompressed_sectors = (s->lengths[chunk] + 511) / 512;
break;
case 2: /* zero */
uncompressed_sectors = s->sectorcounts[chunk];
break;
}
if (compressed_size > *max_compressed_size) {
*max_compressed_size = compressed_size;
}
if (uncompressed_sectors > *max_sectors_per_chunk) {
*max_sectors_per_chunk = uncompressed_sectors;
}
}
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int dmg_open(BlockDriverState *bs, int flags)
{
BDRVDMGState *s = bs->opaque;
uint64_t info_begin, info_end, last_in_offset, last_out_offset;
uint32_t count, tmp;
uint32_t max_compressed_size = 1, max_sectors_per_chunk = 1, i;
off_t info_begin,info_end,last_in_offset,last_out_offset;
uint32_t count;
uint32_t max_compressed_size=1,max_sectors_per_chunk=1,i;
int64_t offset;
int ret;
bs->read_only = 1;
s->n_chunks = 0;
@@ -148,32 +87,21 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
/* read offset of info blocks */
offset = bdrv_getlength(bs->file);
if (offset < 0) {
ret = offset;
goto fail;
}
offset -= 0x1d8;
ret = read_uint64(bs, offset, &info_begin);
if (ret < 0) {
goto fail;
} else if (info_begin == 0) {
ret = -EINVAL;
info_begin = read_off(bs, offset);
if (info_begin == 0) {
goto fail;
}
ret = read_uint32(bs, info_begin, &tmp);
if (ret < 0) {
goto fail;
} else if (tmp != 0x100) {
ret = -EINVAL;
if (read_uint32(bs, info_begin) != 0x100) {
goto fail;
}
ret = read_uint32(bs, info_begin + 4, &count);
if (ret < 0) {
goto fail;
} else if (count == 0) {
ret = -EINVAL;
count = read_uint32(bs, info_begin + 4);
if (count == 0) {
goto fail;
}
info_end = info_begin + count;
@@ -185,47 +113,33 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
while (offset < info_end) {
uint32_t type;
ret = read_uint32(bs, offset, &count);
if (ret < 0) {
count = read_uint32(bs, offset);
if(count==0)
goto fail;
} else if (count == 0) {
ret = -EINVAL;
goto fail;
}
offset += 4;
ret = read_uint32(bs, offset, &type);
if (ret < 0) {
goto fail;
}
type = read_uint32(bs, offset);
if (type == 0x6d697368 && count >= 244) {
size_t new_size;
uint32_t chunk_count;
int new_size, chunk_count;
offset += 4;
offset += 200;
chunk_count = (count - 204) / 40;
chunk_count = (count-204)/40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
s->types = qemu_realloc(s->types, new_size/2);
s->offsets = qemu_realloc(s->offsets, new_size);
s->lengths = qemu_realloc(s->lengths, new_size);
s->sectors = qemu_realloc(s->sectors, new_size);
s->sectorcounts = qemu_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
ret = read_uint32(bs, offset, &s->types[i]);
if (ret < 0) {
goto fail;
}
for(i=s->n_chunks;i<s->n_chunks+chunk_count;i++) {
s->types[i] = read_uint32(bs, offset);
offset += 4;
if (s->types[i] != 0x80000005 && s->types[i] != 1 &&
s->types[i] != 2) {
if (s->types[i] == 0xffffffff && i > 0) {
last_in_offset = s->offsets[i - 1] + s->lengths[i - 1];
last_out_offset = s->sectors[i - 1] +
s->sectorcounts[i - 1];
if(s->types[i]!=0x80000005 && s->types[i]!=1 && s->types[i]!=2) {
if(s->types[i]==0xffffffff) {
last_in_offset = s->offsets[i-1]+s->lengths[i-1];
last_out_offset = s->sectors[i-1]+s->sectorcounts[i-1];
}
chunk_count--;
i--;
@@ -234,160 +148,115 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
}
offset += 4;
ret = read_uint64(bs, offset, &s->sectors[i]);
if (ret < 0) {
goto fail;
}
s->sectors[i] += last_out_offset;
s->sectors[i] = last_out_offset+read_off(bs, offset);
offset += 8;
ret = read_uint64(bs, offset, &s->sectorcounts[i]);
if (ret < 0) {
goto fail;
}
s->sectorcounts[i] = read_off(bs, offset);
offset += 8;
if (s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
ret = read_uint64(bs, offset, &s->offsets[i]);
if (ret < 0) {
goto fail;
}
s->offsets[i] += last_in_offset;
s->offsets[i] = last_in_offset+read_off(bs, offset);
offset += 8;
ret = read_uint64(bs, offset, &s->lengths[i]);
if (ret < 0) {
goto fail;
}
s->lengths[i] = read_off(bs, offset);
offset += 8;
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
if(s->lengths[i]>max_compressed_size)
max_compressed_size = s->lengths[i];
if(s->sectorcounts[i]>max_sectors_per_chunk)
max_sectors_per_chunk = s->sectorcounts[i];
}
update_max_chunk_size(s, i, &max_compressed_size,
&max_sectors_per_chunk);
}
s->n_chunks += chunk_count;
s->n_chunks+=chunk_count;
}
}
/* initialize zlib engine */
s->compressed_chunk = qemu_try_blockalign(bs->file,
max_compressed_size + 1);
s->uncompressed_chunk = qemu_try_blockalign(bs->file,
512 * max_sectors_per_chunk);
if (s->compressed_chunk == NULL || s->uncompressed_chunk == NULL) {
ret = -ENOMEM;
s->compressed_chunk = qemu_malloc(max_compressed_size+1);
s->uncompressed_chunk = qemu_malloc(512*max_sectors_per_chunk);
if(inflateInit(&s->zstream) != Z_OK)
goto fail;
}
if (inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
}
s->current_chunk = s->n_chunks;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->types);
g_free(s->offsets);
g_free(s->lengths);
g_free(s->sectors);
g_free(s->sectorcounts);
qemu_vfree(s->compressed_chunk);
qemu_vfree(s->uncompressed_chunk);
return ret;
return -1;
}
static inline int is_sector_in_chunk(BDRVDMGState* s,
uint32_t chunk_num, uint64_t sector_num)
uint32_t chunk_num,int sector_num)
{
if (chunk_num >= s->n_chunks || s->sectors[chunk_num] > sector_num ||
s->sectors[chunk_num] + s->sectorcounts[chunk_num] <= sector_num) {
if(chunk_num>=s->n_chunks || s->sectors[chunk_num]>sector_num ||
s->sectors[chunk_num]+s->sectorcounts[chunk_num]<=sector_num)
return 0;
} else {
else
return -1;
}
}
static inline uint32_t search_chunk(BDRVDMGState *s, uint64_t sector_num)
static inline uint32_t search_chunk(BDRVDMGState* s,int sector_num)
{
/* binary search */
uint32_t chunk1 = 0, chunk2 = s->n_chunks, chunk3;
while (chunk1 != chunk2) {
chunk3 = (chunk1 + chunk2) / 2;
if (s->sectors[chunk3] > sector_num) {
uint32_t chunk1=0,chunk2=s->n_chunks,chunk3;
while(chunk1!=chunk2) {
chunk3 = (chunk1+chunk2)/2;
if(s->sectors[chunk3]>sector_num)
chunk2 = chunk3;
} else if (s->sectors[chunk3] + s->sectorcounts[chunk3] > sector_num) {
else if(s->sectors[chunk3]+s->sectorcounts[chunk3]>sector_num)
return chunk3;
} else {
else
chunk1 = chunk3;
}
}
return s->n_chunks; /* error */
}
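The binary search above relies on chunks being sorted by start sector and laid out contiguously; the same logic can be exercised standalone against a tiny made-up chunk table (queries must fall inside the image, as they do in the driver):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
/* Toy chunk table: start sector and sector count per chunk, contiguous. */
static const uint64_t sectors[]      = { 0, 100, 300, 1000 };
static const uint64_t sectorcounts[] = { 100, 200, 700, 24 };
static const uint32_t n_chunks       = 4;
static uint32_t search_chunk_demo(uint64_t sector_num)
{
    uint32_t chunk1 = 0, chunk2 = n_chunks, chunk3;
    while (chunk1 != chunk2) {
        chunk3 = (chunk1 + chunk2) / 2;
        if (sectors[chunk3] > sector_num) {
            chunk2 = chunk3;
        } else if (sectors[chunk3] + sectorcounts[chunk3] > sector_num) {
            return chunk3;
        } else {
            chunk1 = chunk3;
        }
    }
    return n_chunks;    /* not reached for in-range queries */
}
int main(void)
{
    printf("sector 150 is in chunk %" PRIu32 "\n", search_chunk_demo(150));
    printf("sector 1023 is in chunk %" PRIu32 "\n", search_chunk_demo(1023));
    return 0;
}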
static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
static inline int dmg_read_chunk(BlockDriverState *bs, int sector_num)
{
BDRVDMGState *s = bs->opaque;
if (!is_sector_in_chunk(s, s->current_chunk, sector_num)) {
if(!is_sector_in_chunk(s,s->current_chunk,sector_num)) {
int ret;
uint32_t chunk = search_chunk(s, sector_num);
uint32_t chunk = search_chunk(s,sector_num);
if (chunk >= s->n_chunks) {
if(chunk>=s->n_chunks)
return -1;
}
s->current_chunk = s->n_chunks;
switch (s->types[chunk]) {
switch(s->types[chunk]) {
case 0x80000005: { /* zlib compressed */
int i;
/* we need to buffer, because only the chunk as a whole can be
* inflated. */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->compressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
i=0;
do {
ret = bdrv_pread(bs->file, s->offsets[chunk] + i,
s->compressed_chunk+i, s->lengths[chunk]-i);
if(ret<0 && errno==EINTR)
ret=0;
i+=ret;
} while(ret>=0 && ret+i<s->lengths[chunk]);
if (ret != s->lengths[chunk])
return -1;
}
s->zstream.next_in = s->compressed_chunk;
s->zstream.avail_in = s->lengths[chunk];
s->zstream.next_out = s->uncompressed_chunk;
s->zstream.avail_out = 512 * s->sectorcounts[chunk];
s->zstream.avail_out = 512*s->sectorcounts[chunk];
ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
if(ret != Z_OK)
return -1;
}
ret = inflate(&s->zstream, Z_FINISH);
if (ret != Z_STREAM_END ||
s->zstream.total_out != 512 * s->sectorcounts[chunk]) {
if(ret != Z_STREAM_END || s->zstream.total_out != 512*s->sectorcounts[chunk])
return -1;
}
break; }
case 1: /* copy */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->uncompressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
if (ret != s->lengths[chunk])
return -1;
}
break;
case 2: /* zero */
memset(s->uncompressed_chunk, 0, 512 * s->sectorcounts[chunk]);
memset(s->uncompressed_chunk, 0, 512*s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
@@ -401,41 +270,28 @@ static int dmg_read(BlockDriverState *bs, int64_t sector_num,
BDRVDMGState *s = bs->opaque;
int i;
for (i = 0; i < nb_sectors; i++) {
for(i=0;i<nb_sectors;i++) {
uint32_t sector_offset_in_chunk;
if (dmg_read_chunk(bs, sector_num + i) != 0) {
if(dmg_read_chunk(bs, sector_num+i) != 0)
return -1;
}
sector_offset_in_chunk = sector_num + i - s->sectors[s->current_chunk];
memcpy(buf + i * 512,
s->uncompressed_chunk + sector_offset_in_chunk * 512, 512);
sector_offset_in_chunk = sector_num+i-s->sectors[s->current_chunk];
memcpy(buf+i*512,s->uncompressed_chunk+sector_offset_in_chunk*512,512);
}
return 0;
}
static coroutine_fn int dmg_co_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
int ret;
BDRVDMGState *s = bs->opaque;
qemu_co_mutex_lock(&s->lock);
ret = dmg_read(bs, sector_num, buf, nb_sectors);
qemu_co_mutex_unlock(&s->lock);
return ret;
}
static void dmg_close(BlockDriverState *bs)
{
BDRVDMGState *s = bs->opaque;
g_free(s->types);
g_free(s->offsets);
g_free(s->lengths);
g_free(s->sectors);
g_free(s->sectorcounts);
qemu_vfree(s->compressed_chunk);
qemu_vfree(s->uncompressed_chunk);
if(s->n_chunks>0) {
free(s->types);
free(s->offsets);
free(s->lengths);
free(s->sectors);
free(s->sectorcounts);
}
free(s->compressed_chunk);
free(s->uncompressed_chunk);
inflateEnd(&s->zstream);
}
@@ -444,7 +300,7 @@ static BlockDriver bdrv_dmg = {
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_read = dmg_read,
.bdrv_close = dmg_close,
};


@@ -1,833 +0,0 @@
/*
* GlusterFS backend for QEMU
*
* Copyright (C) 2012 Bharata B Rao <bharata@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include <glusterfs/api/glfs.h>
#include "block/block_int.h"
#include "qemu/uri.h"
typedef struct GlusterAIOCB {
int64_t size;
int ret;
QEMUBH *bh;
Coroutine *coroutine;
AioContext *aio_context;
} GlusterAIOCB;
typedef struct BDRVGlusterState {
struct glfs *glfs;
struct glfs_fd *fd;
} BDRVGlusterState;
typedef struct GlusterConf {
char *server;
int port;
char *volname;
char *image;
char *transport;
} GlusterConf;
static void qemu_gluster_gconf_free(GlusterConf *gconf)
{
if (gconf) {
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
}
static int parse_volume_options(GlusterConf *gconf, char *path)
{
char *p, *q;
if (!path) {
return -EINVAL;
}
/* volume */
p = q = path + strspn(path, "/");
p += strcspn(p, "/");
if (*p == '\0') {
return -EINVAL;
}
gconf->volname = g_strndup(q, p - q);
/* image */
p += strspn(p, "/");
if (*p == '\0') {
return -EINVAL;
}
gconf->image = g_strdup(p);
return 0;
}
/*
* file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
*
* 'gluster' is the protocol.
*
* 'transport' specifies the transport type used to connect to gluster
* management daemon (glusterd). Valid transport types are
* tcp, unix and rdma. If a transport type isn't specified, then tcp
* type is assumed.
*
* 'server' specifies the server where the volume file specification for
* the given volume resides. This can be a hostname, an IPv4 address, or an
* IPv6 address; an IPv6 address must be enclosed in square brackets [ ].
* If the transport type is 'unix', the 'server' field should not be
* specified. Instead, the 'socket' field needs to be populated with the
* path to the unix domain socket.
*
* 'port' is the port number on which glusterd is listening. This is optional;
* if not specified, QEMU sends 0, which makes gluster use the default port.
* If the transport type is 'unix', 'port' should not be specified.
*
* 'volname' is the name of the gluster volume which contains the VM image.
*
* 'image' is the path to the actual VM image that resides on gluster volume.
*
* Examples:
*
* file=gluster://1.2.3.4/testvol/a.img
* file=gluster+tcp://1.2.3.4/testvol/a.img
* file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
* file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
* file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
* file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
* file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
* file=gluster+rdma://1.2.3.4:24007/testvol/a.img
*/
static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
{
URI *uri;
QueryParams *qp = NULL;
bool is_unix = false;
int ret = 0;
uri = uri_parse(filename);
if (!uri) {
return -EINVAL;
}
/* transport */
if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
gconf->transport = g_strdup("tcp");
} else if (!strcmp(uri->scheme, "gluster+tcp")) {
gconf->transport = g_strdup("tcp");
} else if (!strcmp(uri->scheme, "gluster+unix")) {
gconf->transport = g_strdup("unix");
is_unix = true;
} else if (!strcmp(uri->scheme, "gluster+rdma")) {
gconf->transport = g_strdup("rdma");
} else {
ret = -EINVAL;
goto out;
}
ret = parse_volume_options(gconf, uri->path);
if (ret < 0) {
goto out;
}
qp = query_params_parse(uri->query);
if (qp->n > 1 || (is_unix && !qp->n) || (!is_unix && qp->n)) {
ret = -EINVAL;
goto out;
}
if (is_unix) {
if (uri->server || uri->port) {
ret = -EINVAL;
goto out;
}
if (strcmp(qp->p[0].name, "socket")) {
ret = -EINVAL;
goto out;
}
gconf->server = g_strdup(qp->p[0].value);
} else {
gconf->server = g_strdup(uri->server ? uri->server : "localhost");
gconf->port = uri->port;
}
out:
if (qp) {
query_params_free(qp);
}
uri_free(uri);
return ret;
}
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
Error **errp)
{
struct glfs *glfs = NULL;
int ret;
int old_errno;
ret = qemu_gluster_parseuri(gconf, filename);
if (ret < 0) {
error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
errno = -ret;
goto out;
}
glfs = glfs_new(gconf->volname);
if (!glfs) {
goto out;
}
ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
gconf->port);
if (ret < 0) {
goto out;
}
/*
* TODO: Use GF_LOG_ERROR instead of hard code value of 4 here when
* GlusterFS makes GF_LOG_* macros available to libgfapi users.
*/
ret = glfs_set_logging(glfs, "-", 4);
if (ret < 0) {
goto out;
}
ret = glfs_init(glfs);
if (ret) {
error_setg_errno(errp, errno,
"Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s", gconf->server,
gconf->port, gconf->volname, gconf->image,
gconf->transport);
/* glfs_init sometimes doesn't set errno although the docs suggest it does */
if (errno == 0)
errno = EINVAL;
goto out;
}
return glfs;
out:
if (glfs) {
old_errno = errno;
glfs_fini(glfs);
errno = old_errno;
}
return NULL;
}
static void qemu_gluster_complete_aio(void *opaque)
{
GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
qemu_coroutine_enter(acb->coroutine, NULL);
}
/*
* AIO callback routine called from GlusterFS thread.
*/
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
if (!ret || ret == acb->size) {
acb->ret = 0; /* Success */
} else if (ret < 0) {
acb->ret = ret; /* Read/Write failed */
} else {
acb->ret = -EIO; /* Partial read/write - fail it */
}
acb->bh = aio_bh_new(acb->aio_context, qemu_gluster_complete_aio, acb);
qemu_bh_schedule(acb->bh);
}
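/* gluster_finish_aiocb() above runs in a GlusterFS thread, so it does not
 * resume the request coroutine directly: it schedules
 * qemu_gluster_complete_aio() as a bottom half in the BDS AioContext, and
 * that BH re-enters the coroutine that yielded in qemu_gluster_co_rw(),
 * co_flush or co_discard. */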
/* TODO Convert to fine grained options */
static QemuOptsList runtime_opts = {
.name = "gluster",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "URL to the gluster image",
},
{ /* end of list */ }
},
};
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
{
assert(open_flags != NULL);
*open_flags |= O_BINARY;
if (bdrv_flags & BDRV_O_RDWR) {
*open_flags |= O_RDWR;
} else {
*open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
*open_flags |= O_DIRECT;
}
}
static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
int bdrv_flags, Error **errp)
{
BDRVGlusterState *s = bs->opaque;
int open_flags = 0;
int ret = 0;
GlusterConf *gconf = g_new0(GlusterConf, 1);
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
}
filename = qemu_opt_get(opts, "filename");
s->glfs = qemu_gluster_init(gconf, filename, errp);
if (!s->glfs) {
ret = -errno;
goto out;
}
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
s->fd = glfs_open(s->glfs, gconf->image, open_flags);
if (!s->fd) {
ret = -errno;
}
out:
qemu_opts_del(opts);
qemu_gluster_gconf_free(gconf);
if (!ret) {
return ret;
}
if (s->fd) {
glfs_close(s->fd);
}
if (s->glfs) {
glfs_fini(s->glfs);
}
return ret;
}
typedef struct BDRVGlusterReopenState {
struct glfs *glfs;
struct glfs_fd *fd;
} BDRVGlusterReopenState;
static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
int ret = 0;
BDRVGlusterReopenState *reop_s;
GlusterConf *gconf = NULL;
int open_flags = 0;
assert(state != NULL);
assert(state->bs != NULL);
state->opaque = g_new0(BDRVGlusterReopenState, 1);
reop_s = state->opaque;
qemu_gluster_parse_flags(state->flags, &open_flags);
gconf = g_new0(GlusterConf, 1);
reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
if (reop_s->glfs == NULL) {
ret = -errno;
goto exit;
}
reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
if (reop_s->fd == NULL) {
/* reop_s->glfs will be cleaned up in _abort */
ret = -errno;
goto exit;
}
exit:
/* state->opaque will be freed in either the _abort or _commit */
qemu_gluster_gconf_free(gconf);
return ret;
}
static void qemu_gluster_reopen_commit(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
BDRVGlusterState *s = state->bs->opaque;
/* close the old */
if (s->fd) {
glfs_close(s->fd);
}
if (s->glfs) {
glfs_fini(s->glfs);
}
/* use the newly opened image / connection */
s->fd = reop_s->fd;
s->glfs = reop_s->glfs;
g_free(state->opaque);
state->opaque = NULL;
return;
}
static void qemu_gluster_reopen_abort(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
if (reop_s == NULL) {
return;
}
if (reop_s->fd) {
glfs_close(reop_s->fd);
}
if (reop_s->glfs) {
glfs_fini(reop_s->glfs);
}
g_free(state->opaque);
state->opaque = NULL;
return;
}
#ifdef CONFIG_GLUSTERFS_ZEROFILL
static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
off_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static inline bool gluster_supports_zerofill(void)
{
return 1;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return glfs_zerofill(fd, offset, size);
}
#else
static inline bool gluster_supports_zerofill(void)
{
return 0;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return 0;
}
#endif
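/* Without CONFIG_GLUSTERFS_ZEROFILL, gluster_supports_zerofill() returns 0,
 * so qemu_gluster_create() below rejects prealloc=full and the
 * qemu_gluster_zerofill() stub is never reached. */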
static int qemu_gluster_create(const char *filename,
QemuOpts *opts, Error **errp)
{
struct glfs *glfs;
struct glfs_fd *fd;
int ret = 0;
int prealloc = 0;
int64_t total_size = 0;
char *tmp = NULL;
GlusterConf *gconf = g_new0(GlusterConf, 1);
glfs = qemu_gluster_init(gconf, filename, errp);
if (!glfs) {
ret = -errno;
goto out;
}
total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
BDRV_SECTOR_SIZE);
tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
if (!tmp || !strcmp(tmp, "off")) {
prealloc = 0;
} else if (!strcmp(tmp, "full") &&
gluster_supports_zerofill()) {
prealloc = 1;
} else {
error_setg(errp, "Invalid preallocation mode: '%s'"
" or GlusterFS doesn't support zerofill API",
tmp);
ret = -EINVAL;
goto out;
}
fd = glfs_creat(glfs, gconf->image,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
if (!fd) {
ret = -errno;
} else {
if (!glfs_ftruncate(fd, total_size)) {
if (prealloc && qemu_gluster_zerofill(fd, 0, total_size)) {
ret = -errno;
}
} else {
ret = -errno;
}
if (glfs_close(fd) != 0) {
ret = -errno;
}
}
out:
g_free(tmp);
qemu_gluster_gconf_free(gconf);
if (glfs) {
glfs_fini(glfs);
}
return ret;
}
static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
if (write) {
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
&gluster_finish_aiocb, acb);
} else {
ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
&gluster_finish_aiocb, acb);
}
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
{
int ret;
BDRVGlusterState *s = bs->opaque;
ret = glfs_ftruncate(s->fd, offset);
if (ret < 0) {
return -errno;
}
return 0;
}
static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
}
static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
}
static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
#ifdef CONFIG_GLUSTERFS_DISCARD
static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->aio_context = bdrv_get_aio_context(bs);
ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
#endif
static int64_t qemu_gluster_getlength(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
int64_t ret;
ret = glfs_lseek(s->fd, 0, SEEK_END);
if (ret < 0) {
return -errno;
} else {
return ret;
}
}
static int64_t qemu_gluster_allocated_file_size(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
struct stat st;
int ret;
ret = glfs_fstat(s->fd, &st);
if (ret < 0) {
return -errno;
} else {
return st.st_blocks * 512;
}
}
static void qemu_gluster_close(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
if (s->fd) {
glfs_close(s->fd);
s->fd = NULL;
}
glfs_fini(s->glfs);
}
static int qemu_gluster_has_zero_init(BlockDriverState *bs)
{
/* GlusterFS volume could be backed by a block device */
return 0;
}
static QemuOptsList qemu_gluster_create_opts = {
.name = "qemu-gluster-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
.desc = {
{
.name = BLOCK_OPT_SIZE,
.type = QEMU_OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_PREALLOC,
.type = QEMU_OPT_STRING,
.help = "Preallocation mode (allowed values: off, full)"
},
{ /* end of list */ }
}
};
static BlockDriver bdrv_gluster = {
.format_name = "gluster",
.protocol_name = "gluster",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_opts = &qemu_gluster_create_opts,
};
static BlockDriver bdrv_gluster_tcp = {
.format_name = "gluster",
.protocol_name = "gluster+tcp",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_opts = &qemu_gluster_create_opts,
};
static BlockDriver bdrv_gluster_unix = {
.format_name = "gluster",
.protocol_name = "gluster+unix",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_opts = &qemu_gluster_create_opts,
};
static BlockDriver bdrv_gluster_rdma = {
.format_name = "gluster",
.protocol_name = "gluster+rdma",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.create_opts = &qemu_gluster_create_opts,
};
static void bdrv_gluster_init(void)
{
bdrv_register(&bdrv_gluster_rdma);
bdrv_register(&bdrv_gluster_unix);
bdrv_register(&bdrv_gluster_tcp);
bdrv_register(&bdrv_gluster);
}
block_init(bdrv_gluster_init);

File diff suppressed because it is too large


@@ -1,337 +0,0 @@
/*
* Linux native AIO support.
*
* Copyright (C) 2009 IBM, Corp.
* Copyright (C) 2009 Red Hat, Inc.
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu-common.h"
#include "block/aio.h"
#include "qemu/queue.h"
#include "block/raw-aio.h"
#include "qemu/event_notifier.h"
#include <libaio.h>
/*
* Queue size (per-device).
*
* XXX: eventually we need to communicate this to the guest and/or make it
* tunable by the guest. If we get more outstanding requests at a time
* than this, we will get EAGAIN from io_submit, which is communicated to
* the guest as an I/O error.
*/
#define MAX_EVENTS 128
#define MAX_QUEUED_IO 128
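/* MAX_EVENTS sizes both the kernel AIO context created by io_setup() and the
 * events[] array drained by the completion BH; MAX_QUEUED_IO caps how many
 * iocbs ioq_submit() hands to a single io_submit() call. */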
struct qemu_laiocb {
BlockAIOCB common;
struct qemu_laio_state *ctx;
struct iocb iocb;
ssize_t ret;
size_t nbytes;
QEMUIOVector *qiov;
bool is_read;
QSIMPLEQ_ENTRY(qemu_laiocb) next;
};
typedef struct {
int plugged;
unsigned int n;
bool blocked;
QSIMPLEQ_HEAD(, qemu_laiocb) pending;
} LaioQueue;
struct qemu_laio_state {
io_context_t ctx;
EventNotifier e;
/* io queue for submit at batch */
LaioQueue io_q;
/* I/O completion processing */
QEMUBH *completion_bh;
struct io_event events[MAX_EVENTS];
int event_idx;
int event_max;
};
static void ioq_submit(struct qemu_laio_state *s);
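/* io_event_ret() folds the two 32-bit halves of the completion status
 * (res2 in the upper bits, res in the lower bits) into one signed value,
 * which qemu_laio_process_completion() treats as the number of bytes
 * transferred or a negative error code. */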
static inline ssize_t io_event_ret(struct io_event *ev)
{
return (ssize_t)(((uint64_t)ev->res2 << 32) | ev->res);
}
/*
* Completes an AIO request (calls the callback and frees the ACB).
*/
static void qemu_laio_process_completion(struct qemu_laio_state *s,
struct qemu_laiocb *laiocb)
{
int ret;
ret = laiocb->ret;
if (ret != -ECANCELED) {
if (ret == laiocb->nbytes) {
ret = 0;
} else if (ret >= 0) {
/* Short reads mean EOF, pad with zeros. */
if (laiocb->is_read) {
qemu_iovec_memset(laiocb->qiov, ret, 0,
laiocb->qiov->size - ret);
} else {
ret = -EINVAL;
}
}
}
laiocb->common.cb(laiocb->common.opaque, ret);
qemu_aio_unref(laiocb);
}
/* The completion BH fetches completed I/O requests and invokes their
* callbacks.
*
* The function is somewhat tricky because it supports nested event loops, for
* example when a request callback invokes aio_poll(). In order to do this,
* the completion events array and index are kept in qemu_laio_state. The BH
* reschedules itself as long as there are completions pending so it will
* either be called again in a nested event loop or will be called after all
* events have been completed. When there are no events left to complete, the
* BH returns without rescheduling.
*/
static void qemu_laio_completion_bh(void *opaque)
{
struct qemu_laio_state *s = opaque;
/* Fetch more completion events when empty */
if (s->event_idx == s->event_max) {
do {
struct timespec ts = { 0 };
s->event_max = io_getevents(s->ctx, MAX_EVENTS, MAX_EVENTS,
s->events, &ts);
} while (s->event_max == -EINTR);
s->event_idx = 0;
if (s->event_max <= 0) {
s->event_max = 0;
return; /* no more events */
}
}
/* Reschedule so nested event loops see currently pending completions */
qemu_bh_schedule(s->completion_bh);
/* Process completion events */
while (s->event_idx < s->event_max) {
struct iocb *iocb = s->events[s->event_idx].obj;
struct qemu_laiocb *laiocb =
container_of(iocb, struct qemu_laiocb, iocb);
laiocb->ret = io_event_ret(&s->events[s->event_idx]);
s->event_idx++;
qemu_laio_process_completion(s, laiocb);
}
if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
ioq_submit(s);
}
}
static void qemu_laio_completion_cb(EventNotifier *e)
{
struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
if (event_notifier_test_and_clear(&s->e)) {
qemu_bh_schedule(s->completion_bh);
}
}
static void laio_cancel(BlockAIOCB *blockacb)
{
struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
struct io_event event;
int ret;
if (laiocb->ret != -EINPROGRESS) {
return;
}
ret = io_cancel(laiocb->ctx->ctx, &laiocb->iocb, &event);
laiocb->ret = -ECANCELED;
if (ret != 0) {
/* iocb is not cancelled, cb will be called by the event loop later */
return;
}
laiocb->common.cb(laiocb->common.opaque, laiocb->ret);
}
static const AIOCBInfo laio_aiocb_info = {
.aiocb_size = sizeof(struct qemu_laiocb),
.cancel_async = laio_cancel,
};
static void ioq_init(LaioQueue *io_q)
{
QSIMPLEQ_INIT(&io_q->pending);
io_q->plugged = 0;
io_q->n = 0;
io_q->blocked = false;
}
static void ioq_submit(struct qemu_laio_state *s)
{
int ret, len;
struct qemu_laiocb *aiocb;
struct iocb *iocbs[MAX_QUEUED_IO];
QSIMPLEQ_HEAD(, qemu_laiocb) completed;
do {
len = 0;
QSIMPLEQ_FOREACH(aiocb, &s->io_q.pending, next) {
iocbs[len++] = &aiocb->iocb;
if (len == MAX_QUEUED_IO) {
break;
}
}
ret = io_submit(s->ctx, len, iocbs);
if (ret == -EAGAIN) {
break;
}
if (ret < 0) {
abort();
}
s->io_q.n -= ret;
aiocb = container_of(iocbs[ret - 1], struct qemu_laiocb, iocb);
QSIMPLEQ_SPLIT_AFTER(&s->io_q.pending, aiocb, next, &completed);
} while (ret == len && !QSIMPLEQ_EMPTY(&s->io_q.pending));
s->io_q.blocked = (s->io_q.n > 0);
}
void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
{
struct qemu_laio_state *s = aio_ctx;
s->io_q.plugged++;
}
void laio_io_unplug(BlockDriverState *bs, void *aio_ctx, bool unplug)
{
struct qemu_laio_state *s = aio_ctx;
assert(s->io_q.plugged > 0 || !unplug);
if (unplug && --s->io_q.plugged > 0) {
return;
}
if (!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
ioq_submit(s);
}
}
BlockAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockCompletionFunc *cb, void *opaque, int type)
{
struct qemu_laio_state *s = aio_ctx;
struct qemu_laiocb *laiocb;
struct iocb *iocbs;
off_t offset = sector_num * 512;
laiocb = qemu_aio_get(&laio_aiocb_info, bs, cb, opaque);
laiocb->nbytes = nb_sectors * 512;
laiocb->ctx = s;
laiocb->ret = -EINPROGRESS;
laiocb->is_read = (type == QEMU_AIO_READ);
laiocb->qiov = qiov;
iocbs = &laiocb->iocb;
switch (type) {
case QEMU_AIO_WRITE:
io_prep_pwritev(iocbs, fd, qiov->iov, qiov->niov, offset);
break;
case QEMU_AIO_READ:
io_prep_preadv(iocbs, fd, qiov->iov, qiov->niov, offset);
break;
/* Currently Linux kernel does not support other operations */
default:
fprintf(stderr, "%s: invalid AIO request type 0x%x.\n",
__func__, type);
goto out_free_aiocb;
}
io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
s->io_q.n++;
if (!s->io_q.blocked &&
(!s->io_q.plugged || s->io_q.n >= MAX_QUEUED_IO)) {
ioq_submit(s);
}
return &laiocb->common;
out_free_aiocb:
qemu_aio_unref(laiocb);
return NULL;
}
void laio_detach_aio_context(void *s_, AioContext *old_context)
{
struct qemu_laio_state *s = s_;
aio_set_event_notifier(old_context, &s->e, NULL);
qemu_bh_delete(s->completion_bh);
}
void laio_attach_aio_context(void *s_, AioContext *new_context)
{
struct qemu_laio_state *s = s_;
s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
aio_set_event_notifier(new_context, &s->e, qemu_laio_completion_cb);
}
void *laio_init(void)
{
struct qemu_laio_state *s;
s = g_malloc0(sizeof(*s));
if (event_notifier_init(&s->e, false) < 0) {
goto out_free_state;
}
if (io_setup(MAX_EVENTS, &s->ctx) != 0) {
goto out_close_efd;
}
ioq_init(&s->io_q);
return s;
out_close_efd:
event_notifier_cleanup(&s->e);
out_free_state:
g_free(s);
return NULL;
}
void laio_cleanup(void *s_)
{
struct qemu_laio_state *s = s_;
event_notifier_cleanup(&s->e);
if (io_destroy(s->ctx) != 0) {
fprintf(stderr, "%s: destroy AIO context %p failed\n",
__func__, &s->ctx);
}
g_free(s);
}


@@ -1,794 +0,0 @@
/*
* Image mirroring
*
* Copyright Red Hat, Inc. 2012
*
* Authors:
* Paolo Bonzini <pbonzini@redhat.com>
*
* This work is licensed under the terms of the GNU LGPL, version 2 or later.
* See the COPYING.LIB file in the top-level directory.
*
*/
#include "trace.h"
#include "block/blockjob.h"
#include "block/block_int.h"
#include "qemu/ratelimit.h"
#include "qemu/bitmap.h"
#define SLICE_TIME 100000000ULL /* ns */
#define MAX_IN_FLIGHT 16
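/* SLICE_TIME is 100 ms expressed in nanoseconds: mirror_run() forces a pause
 * at least once per slice even when no rate limit is set. MAX_IN_FLIGHT caps
 * the number of concurrent copy operations; once it is reached, mirror_run()
 * yields instead of starting new ones. */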
/* The mirroring buffer is a list of granularity-sized chunks.
* Free chunks are organized in a list.
*/
typedef struct MirrorBuffer {
QSIMPLEQ_ENTRY(MirrorBuffer) next;
} MirrorBuffer;
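/* mirror_free_init() carves s->buf into granularity-sized chunks and chains
 * them on buf_free; mirror_iteration() pops chunks to build each request's
 * QEMUIOVector, and mirror_iteration_done() returns them to the list. */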
typedef struct MirrorBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *target;
BlockDriverState *base;
/* The name of the graph node to replace */
char *replaces;
/* The BDS to replace */
BlockDriverState *to_replace;
/* Used to block operations on the drive-mirror-replace target */
Error *replace_blocker;
bool is_none_mode;
BlockdevOnError on_source_error, on_target_error;
bool synced;
bool should_complete;
int64_t sector_num;
int64_t granularity;
size_t buf_size;
int64_t bdev_length;
unsigned long *cow_bitmap;
BdrvDirtyBitmap *dirty_bitmap;
HBitmapIter hbi;
uint8_t *buf;
QSIMPLEQ_HEAD(, MirrorBuffer) buf_free;
int buf_free_count;
unsigned long *in_flight_bitmap;
int in_flight;
int sectors_in_flight;
int ret;
} MirrorBlockJob;
typedef struct MirrorOp {
MirrorBlockJob *s;
QEMUIOVector qiov;
int64_t sector_num;
int nb_sectors;
} MirrorOp;
static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
int error)
{
s->synced = false;
if (read) {
return block_job_error_action(&s->common, s->common.bs,
s->on_source_error, true, error);
} else {
return block_job_error_action(&s->common, s->target,
s->on_target_error, false, error);
}
}
static void mirror_iteration_done(MirrorOp *op, int ret)
{
MirrorBlockJob *s = op->s;
struct iovec *iov;
int64_t chunk_num;
int i, nb_chunks, sectors_per_chunk;
trace_mirror_iteration_done(s, op->sector_num, op->nb_sectors, ret);
s->in_flight--;
s->sectors_in_flight -= op->nb_sectors;
iov = op->qiov.iov;
for (i = 0; i < op->qiov.niov; i++) {
MirrorBuffer *buf = (MirrorBuffer *) iov[i].iov_base;
QSIMPLEQ_INSERT_TAIL(&s->buf_free, buf, next);
s->buf_free_count++;
}
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
chunk_num = op->sector_num / sectors_per_chunk;
nb_chunks = op->nb_sectors / sectors_per_chunk;
bitmap_clear(s->in_flight_bitmap, chunk_num, nb_chunks);
if (ret >= 0) {
if (s->cow_bitmap) {
bitmap_set(s->cow_bitmap, chunk_num, nb_chunks);
}
s->common.offset += (uint64_t)op->nb_sectors * BDRV_SECTOR_SIZE;
}
qemu_iovec_destroy(&op->qiov);
g_slice_free(MirrorOp, op);
/* Enter coroutine when it is not sleeping. The coroutine sleeps to
* rate-limit itself. The coroutine will eventually resume since there is
* a sleep timeout so don't wake it early.
*/
if (s->common.busy) {
qemu_coroutine_enter(s->common.co, NULL);
}
}
static void mirror_write_complete(void *opaque, int ret)
{
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty_bitmap(source, s->dirty_bitmap, op->sector_num,
op->nb_sectors);
action = mirror_error_action(s, false, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
}
}
mirror_iteration_done(op, ret);
}
static void mirror_read_complete(void *opaque, int ret)
{
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty_bitmap(source, s->dirty_bitmap, op->sector_num,
op->nb_sectors);
action = mirror_error_action(s, true, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
}
mirror_iteration_done(op, ret);
return;
}
bdrv_aio_writev(s->target, op->sector_num, &op->qiov, op->nb_sectors,
mirror_write_complete, op);
}
static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
{
BlockDriverState *source = s->common.bs;
int nb_sectors, sectors_per_chunk, nb_chunks;
int64_t end, sector_num, next_chunk, next_sector, hbitmap_next_sector;
uint64_t delay_ns = 0;
MirrorOp *op;
s->sector_num = hbitmap_iter_next(&s->hbi);
if (s->sector_num < 0) {
bdrv_dirty_iter_init(source, s->dirty_bitmap, &s->hbi);
s->sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s,
bdrv_get_dirty_count(source, s->dirty_bitmap));
assert(s->sector_num >= 0);
}
hbitmap_next_sector = s->sector_num;
sector_num = s->sector_num;
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
end = s->bdev_length / BDRV_SECTOR_SIZE;
/* Extend the QEMUIOVector to include all adjacent blocks that will
* be copied in this operation.
*
* We have to do this if we have no backing file yet in the destination,
* and the cluster size is very large. Then we need to do COW ourselves.
* The first time a cluster is copied, copy it entirely. Note that,
* because both the granularity and the cluster size are powers of two,
* the number of sectors to copy cannot exceed one cluster.
*
* We also want to extend the QEMUIOVector to include more adjacent
* dirty blocks if possible, to limit the number of I/O operations and
* run efficiently even with a small granularity.
*/
nb_chunks = 0;
nb_sectors = 0;
next_sector = sector_num;
next_chunk = sector_num / sectors_per_chunk;
/* Wait for I/O to this cluster (from a previous iteration) to be done. */
while (test_bit(next_chunk, s->in_flight_bitmap)) {
trace_mirror_yield_in_flight(s, sector_num, s->in_flight);
qemu_coroutine_yield();
}
do {
int added_sectors, added_chunks;
if (!bdrv_get_dirty(source, s->dirty_bitmap, next_sector) ||
test_bit(next_chunk, s->in_flight_bitmap)) {
assert(nb_sectors > 0);
break;
}
added_sectors = sectors_per_chunk;
if (s->cow_bitmap && !test_bit(next_chunk, s->cow_bitmap)) {
bdrv_round_to_clusters(s->target,
next_sector, added_sectors,
&next_sector, &added_sectors);
/* On the first iteration, the rounding may make us copy
* sectors before the first dirty one.
*/
if (next_sector < sector_num) {
assert(nb_sectors == 0);
sector_num = next_sector;
next_chunk = next_sector / sectors_per_chunk;
}
}
added_sectors = MIN(added_sectors, end - (sector_num + nb_sectors));
added_chunks = (added_sectors + sectors_per_chunk - 1) / sectors_per_chunk;
/* When doing COW, it may happen that there is not enough space for
* a full cluster. Wait if that is the case.
*/
while (nb_chunks == 0 && s->buf_free_count < added_chunks) {
trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
qemu_coroutine_yield();
}
if (s->buf_free_count < nb_chunks + added_chunks) {
trace_mirror_break_buf_busy(s, nb_chunks, s->in_flight);
break;
}
/* We have enough free space to copy these sectors. */
bitmap_set(s->in_flight_bitmap, next_chunk, added_chunks);
nb_sectors += added_sectors;
nb_chunks += added_chunks;
next_sector += added_sectors;
next_chunk += added_chunks;
if (!s->synced && s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, added_sectors);
}
} while (delay_ns == 0 && next_sector < end);
/* Allocate a MirrorOp that is used as an AIO callback. */
op = g_slice_new(MirrorOp);
op->s = s;
op->sector_num = sector_num;
op->nb_sectors = nb_sectors;
/* Now make a QEMUIOVector taking enough granularity-sized chunks
* from s->buf_free.
*/
qemu_iovec_init(&op->qiov, nb_chunks);
next_sector = sector_num;
while (nb_chunks-- > 0) {
MirrorBuffer *buf = QSIMPLEQ_FIRST(&s->buf_free);
size_t remaining = (nb_sectors * BDRV_SECTOR_SIZE) - op->qiov.size;
QSIMPLEQ_REMOVE_HEAD(&s->buf_free, next);
s->buf_free_count--;
qemu_iovec_add(&op->qiov, buf, MIN(s->granularity, remaining));
/* Advance the HBitmapIter in parallel, so that we do not examine
* the same sector twice.
*/
if (next_sector > hbitmap_next_sector
&& bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
hbitmap_next_sector = hbitmap_iter_next(&s->hbi);
}
next_sector += sectors_per_chunk;
}
bdrv_reset_dirty_bitmap(source, s->dirty_bitmap, sector_num,
nb_sectors);
/* Copy the dirty cluster. */
s->in_flight++;
s->sectors_in_flight += nb_sectors;
trace_mirror_one_iteration(s, sector_num, nb_sectors);
bdrv_aio_readv(source, sector_num, &op->qiov, nb_sectors,
mirror_read_complete, op);
return delay_ns;
}
static void mirror_free_init(MirrorBlockJob *s)
{
int granularity = s->granularity;
size_t buf_size = s->buf_size;
uint8_t *buf = s->buf;
assert(s->buf_free_count == 0);
QSIMPLEQ_INIT(&s->buf_free);
while (buf_size != 0) {
MirrorBuffer *cur = (MirrorBuffer *)buf;
QSIMPLEQ_INSERT_TAIL(&s->buf_free, cur, next);
s->buf_free_count++;
buf_size -= granularity;
buf += granularity;
}
}
static void mirror_drain(MirrorBlockJob *s)
{
while (s->in_flight > 0) {
qemu_coroutine_yield();
}
}
typedef struct {
int ret;
} MirrorExitData;
static void mirror_exit(BlockJob *job, void *opaque)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
MirrorExitData *data = opaque;
AioContext *replace_aio_context = NULL;
if (s->to_replace) {
replace_aio_context = bdrv_get_aio_context(s->to_replace);
aio_context_acquire(replace_aio_context);
}
if (s->should_complete && data->ret == 0) {
BlockDriverState *to_replace = s->common.bs;
if (s->to_replace) {
to_replace = s->to_replace;
}
if (bdrv_get_flags(s->target) != bdrv_get_flags(to_replace)) {
bdrv_reopen(s->target, bdrv_get_flags(to_replace), NULL);
}
bdrv_swap(s->target, to_replace);
if (s->common.driver->job_type == BLOCK_JOB_TYPE_COMMIT) {
/* drop the bs loop chain formed by the swap: break the loop then
* trigger the unref from the top one */
BlockDriverState *p = s->base->backing_hd;
bdrv_set_backing_hd(s->base, NULL);
bdrv_unref(p);
}
}
if (s->to_replace) {
bdrv_op_unblock_all(s->to_replace, s->replace_blocker);
error_free(s->replace_blocker);
bdrv_unref(s->to_replace);
}
if (replace_aio_context) {
aio_context_release(replace_aio_context);
}
g_free(s->replaces);
bdrv_unref(s->target);
block_job_completed(&s->common, data->ret);
g_free(data);
}
static void coroutine_fn mirror_run(void *opaque)
{
MirrorBlockJob *s = opaque;
MirrorExitData *data;
BlockDriverState *bs = s->common.bs;
int64_t sector_num, end, sectors_per_chunk, length;
uint64_t last_pause_ns;
BlockDriverInfo bdi;
char backing_filename[1024];
int ret = 0;
int n;
if (block_job_is_cancelled(&s->common)) {
goto immediate_exit;
}
s->bdev_length = bdrv_getlength(bs);
if (s->bdev_length < 0) {
ret = s->bdev_length;
goto immediate_exit;
} else if (s->bdev_length == 0) {
/* Report BLOCK_JOB_READY and wait for complete. */
block_job_event_ready(&s->common);
s->synced = true;
while (!block_job_is_cancelled(&s->common) && !s->should_complete) {
block_job_yield(&s->common);
}
s->common.cancelled = false;
goto immediate_exit;
}
length = DIV_ROUND_UP(s->bdev_length, s->granularity);
s->in_flight_bitmap = bitmap_new(length);
/* If we have no backing file yet in the destination, we cannot let
* the destination do COW. Instead, we copy sectors around the
* dirty data if needed. We need a bitmap to do that.
*/
bdrv_get_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
if (backing_filename[0] && !s->target->backing_hd) {
ret = bdrv_get_info(s->target, &bdi);
if (ret < 0) {
goto immediate_exit;
}
if (s->granularity < bdi.cluster_size) {
s->buf_size = MAX(s->buf_size, bdi.cluster_size);
s->cow_bitmap = bitmap_new(length);
}
}
end = s->bdev_length / BDRV_SECTOR_SIZE;
s->buf = qemu_try_blockalign(bs, s->buf_size);
if (s->buf == NULL) {
ret = -ENOMEM;
goto immediate_exit;
}
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
mirror_free_init(s);
if (!s->is_none_mode) {
/* First part, loop on the sectors and initialize the dirty bitmap. */
BlockDriverState *base = s->base;
for (sector_num = 0; sector_num < end; ) {
int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
ret = bdrv_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
if (ret < 0) {
goto immediate_exit;
}
assert(n > 0);
if (ret == 1) {
bdrv_set_dirty_bitmap(bs, s->dirty_bitmap, sector_num, n);
sector_num = next;
} else {
sector_num += n;
}
}
}
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
for (;;) {
uint64_t delay_ns = 0;
int64_t cnt;
bool should_complete;
if (s->ret < 0) {
ret = s->ret;
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
/* s->common.offset contains the number of bytes already processed so
* far, cnt is the number of dirty sectors remaining and
* s->sectors_in_flight is the number of sectors currently being
* processed; together those are the current total operation length */
s->common.len = s->common.offset +
(cnt + s->sectors_in_flight) * BDRV_SECTOR_SIZE;
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that qemu_aio_flush() returns.
* We do so every SLICE_TIME nanoseconds, or when there is an error,
* or when the source is clean, whichever comes first.
*/
if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - last_pause_ns < SLICE_TIME &&
s->common.iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
if (s->in_flight == MAX_IN_FLIGHT || s->buf_free_count == 0 ||
(cnt == 0 && s->in_flight > 0)) {
trace_mirror_yield(s, s->in_flight, s->buf_free_count, cnt);
qemu_coroutine_yield();
continue;
} else if (cnt != 0) {
delay_ns = mirror_iteration(s);
if (delay_ns == 0) {
continue;
}
}
}
should_complete = false;
if (s->in_flight == 0 && cnt == 0) {
trace_mirror_before_flush(s);
ret = bdrv_flush(s->target);
if (ret < 0) {
if (mirror_error_action(s, false, -ret) ==
BLOCK_ERROR_ACTION_REPORT) {
goto immediate_exit;
}
} else {
/* We're out of the streaming phase. From now on, if the job
* is cancelled we will actually complete all pending I/O and
* report completion. This way, block-job-cancel will leave
* the target in a consistent state.
*/
if (!s->synced) {
block_job_event_ready(&s->common);
s->synced = true;
}
should_complete = s->should_complete ||
block_job_is_cancelled(&s->common);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
}
}
if (cnt == 0 && should_complete) {
/* The dirty bitmap is not updated while operations are pending.
* If we're about to exit, wait for pending operations before
* calling bdrv_get_dirty_count(bs), or we may exit while the
* source has dirty data to copy!
*
* Note that I/O can be submitted by the guest while
* mirror_populate runs.
*/
trace_mirror_before_drain(s, cnt);
bdrv_drain(bs);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
}
ret = 0;
trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
if (!s->synced) {
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
} else if (!should_complete) {
delay_ns = (s->in_flight == 0 && cnt == 0 ? SLICE_TIME : 0);
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
} else if (cnt == 0) {
/* The two disks are in sync. Exit and report successful
* completion.
*/
assert(QLIST_EMPTY(&bs->tracked_requests));
s->common.cancelled = false;
break;
}
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
}
immediate_exit:
if (s->in_flight > 0) {
/* We get here only if something went wrong. Either the job failed,
* or it was cancelled prematurely so that we do not guarantee that
* the target is a copy of the source.
*/
assert(ret < 0 || (!s->synced && block_job_is_cancelled(&s->common)));
mirror_drain(s);
}
assert(s->in_flight == 0);
qemu_vfree(s->buf);
g_free(s->cow_bitmap);
g_free(s->in_flight_bitmap);
bdrv_release_dirty_bitmap(bs, s->dirty_bitmap);
bdrv_iostatus_disable(s->target);
data = g_malloc(sizeof(*data));
data->ret = ret;
block_job_defer_to_main_loop(&s->common, mirror_exit, data);
}
static void mirror_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void mirror_iostatus_reset(BlockJob *job)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
bdrv_iostatus_reset(s->target);
}
static void mirror_complete(BlockJob *job, Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
Error *local_err = NULL;
int ret;
ret = bdrv_open_backing_file(s->target, NULL, &local_err);
if (ret < 0) {
error_propagate(errp, local_err);
return;
}
if (!s->synced) {
error_set(errp, QERR_BLOCK_JOB_NOT_READY,
bdrv_get_device_name(job->bs));
return;
}
/* check the target bs is not blocked and block all operations on it */
if (s->replaces) {
AioContext *replace_aio_context;
s->to_replace = check_to_replace_node(s->replaces, &local_err);
if (!s->to_replace) {
error_propagate(errp, local_err);
return;
}
replace_aio_context = bdrv_get_aio_context(s->to_replace);
aio_context_acquire(replace_aio_context);
error_setg(&s->replace_blocker,
"block device is in use by block-job-complete");
bdrv_op_block_all(s->to_replace, s->replace_blocker);
bdrv_ref(s->to_replace);
aio_context_release(replace_aio_context);
}
s->should_complete = true;
block_job_resume(job);
}
static const BlockJobDriver mirror_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_MIRROR,
.set_speed = mirror_set_speed,
.iostatus_reset= mirror_iostatus_reset,
.complete = mirror_complete,
};
static const BlockJobDriver commit_active_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.set_speed = mirror_set_speed,
.iostatus_reset = mirror_iostatus_reset,
.complete = mirror_complete,
};
static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
const char *replaces,
int64_t speed, int64_t granularity,
int64_t buf_size,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockCompletionFunc *cb,
void *opaque, Error **errp,
const BlockJobDriver *driver,
bool is_none_mode, BlockDriverState *base)
{
MirrorBlockJob *s;
if (granularity == 0) {
/* Choose the default granularity based on the target file's cluster
* size, clamped between 4k and 64k. */
BlockDriverInfo bdi;
if (bdrv_get_info(target, &bdi) >= 0 && bdi.cluster_size != 0) {
granularity = MAX(4096, bdi.cluster_size);
granularity = MIN(65536, granularity);
} else {
granularity = 65536;
}
}
assert ((granularity & (granularity - 1)) == 0);
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
s = block_job_create(driver, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
s->replaces = g_strdup(replaces);
s->on_source_error = on_source_error;
s->on_target_error = on_target_error;
s->target = target;
s->is_none_mode = is_none_mode;
s->base = base;
s->granularity = granularity;
s->buf_size = MAX(buf_size, granularity);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, errp);
if (!s->dirty_bitmap) {
return;
}
bdrv_set_enable_write_cache(s->target, true);
bdrv_set_on_error(s->target, on_target_error, on_target_error);
bdrv_iostatus_enable(s->target);
s->common.co = qemu_coroutine_create(mirror_run);
trace_mirror_start(bs, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
const char *replaces,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockCompletionFunc *cb,
void *opaque, Error **errp)
{
bool is_none_mode;
BlockDriverState *base;
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? bs->backing_hd : NULL;
mirror_start_job(bs, target, replaces,
speed, granularity, buf_size,
on_source_error, on_target_error, cb, opaque, errp,
&mirror_job_driver, is_none_mode, base);
}
void commit_active_start(BlockDriverState *bs, BlockDriverState *base,
int64_t speed,
BlockdevOnError on_error,
BlockCompletionFunc *cb,
void *opaque, Error **errp)
{
int64_t length, base_length;
int orig_base_flags;
int ret;
Error *local_err = NULL;
orig_base_flags = bdrv_get_flags(base);
if (bdrv_reopen(base, bs->open_flags, errp)) {
return;
}
length = bdrv_getlength(bs);
if (length < 0) {
error_setg_errno(errp, -length,
"Unable to determine length of %s", bs->filename);
goto error_restore_flags;
}
base_length = bdrv_getlength(base);
if (base_length < 0) {
error_setg_errno(errp, -base_length,
"Unable to determine length of %s", base->filename);
goto error_restore_flags;
}
if (length > base_length) {
ret = bdrv_truncate(base, length);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Top image %s is larger than base image %s, and "
"resize of base image failed",
bs->filename, base->filename);
goto error_restore_flags;
}
}
bdrv_ref(base);
mirror_start_job(bs, base, NULL, speed, 0, 0,
on_error, on_error, cb, opaque, &local_err,
&commit_active_job_driver, false, base);
if (local_err) {
error_propagate(errp, local_err);
goto error_restore_flags;
}
return;
error_restore_flags:
/* ignore error and errp for bdrv_reopen, because we want to propagate
* the original error */
bdrv_reopen(base, orig_base_flags, NULL);
return;
}

View File

@@ -1,404 +0,0 @@
/*
* QEMU Block driver for NBD
*
* Copyright (C) 2008 Bull S.A.S.
* Author: Laurent Vivier <Laurent.Vivier@bull.net>
*
* Some parts:
* Copyright (C) 2007 Anthony Liguori <anthony@codemonkey.ws>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "nbd-client.h"
#include "qemu/sockets.h"
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
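/* A request handle is the per-request index XORed with the BDS pointer;
 * XOR is its own inverse, so HANDLE_TO_INDEX() recovers the index from the
 * handle carried in the server's reply. */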
static void nbd_recv_coroutines_enter_all(NbdClientSession *s)
{
int i;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_teardown_connection(NbdClientSession *client)
{
/* finish any pending coroutines */
shutdown(client->sock, 2);
nbd_recv_coroutines_enter_all(client);
nbd_client_session_detach_aio_context(client);
closesocket(client->sock);
client->sock = -1;
}
static void nbd_reply_ready(void *opaque)
{
NbdClientSession *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
nbd_teardown_connection(s);
}
static void nbd_restart_write(void *opaque)
{
NbdClientSession *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(NbdClientSession *s,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
AioContext *aio_context;
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
aio_context = bdrv_get_aio_context(s->bs);
aio_set_fd_handler(aio_context, s->sock,
nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->sock, request);
}
aio_set_fd_handler(aio_context, s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(NbdClientSession *s,
struct nbd_request *request, struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_start(NbdClientSession *s,
struct nbd_request *request)
{
int i;
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_coroutine_end(NbdClientSession *s,
struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_co_readv_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_READ };
struct nbd_reply reply;
ssize_t ret;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, qiov, offset);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
static int nbd_co_writev_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_WRITE };
struct nbd_reply reply;
ssize_t ret;
if (!bdrv_enable_write_cache(client->bs) &&
(client->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
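/* 2040 sectors * 512 bytes = 1,044,480 bytes = 255 * 4 KiB, i.e. one 4 KiB
 * block short of 1 MiB, so split requests stay 4K-aligned. */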
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_flush(NbdClientSession *client)
{
struct nbd_request request = { .type = NBD_CMD_FLUSH };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors)
{
struct nbd_request request = { .type = NBD_CMD_TRIM };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
void nbd_client_session_detach_aio_context(NbdClientSession *client)
{
aio_set_fd_handler(bdrv_get_aio_context(client->bs), client->sock,
NULL, NULL, NULL);
}
void nbd_client_session_attach_aio_context(NbdClientSession *client,
AioContext *new_context)
{
aio_set_fd_handler(new_context, client->sock,
nbd_reply_ready, NULL, client);
}
void nbd_client_session_close(NbdClientSession *client)
{
struct nbd_request request = {
.type = NBD_CMD_DISC,
.from = 0,
.len = 0
};
if (!client->bs) {
return;
}
if (client->sock == -1) {
return;
}
nbd_send_request(client->sock, &request);
nbd_teardown_connection(client);
client->bs = NULL;
}
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export)
{
int ret;
/* NBD handshake */
logout("session init %s\n", export);
qemu_set_block(sock);
ret = nbd_receive_negotiate(sock, export,
&client->nbdflags, &client->size,
&client->blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
qemu_co_mutex_init(&client->send_mutex);
qemu_co_mutex_init(&client->free_sema);
client->bs = bs;
client->sock = sock;
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qemu_set_nonblock(sock);
nbd_client_session_attach_aio_context(client, bdrv_get_aio_context(bs));
logout("Established connection with NBD server\n");
return 0;
}


@@ -1,54 +0,0 @@
#ifndef NBD_CLIENT_H
#define NBD_CLIENT_H
#include "qemu-common.h"
#include "block/nbd.h"
#include "block/block_int.h"
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
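/* At most MAX_NBD_REQUESTS requests are in flight per connection: each one
 * owns a slot in recv_coroutine[], and nbd_coroutine_start() parks extra
 * senders on free_sema until nbd_coroutine_end() releases a slot. */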
typedef struct NbdClientSession {
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
bool is_unix;
BlockDriverState *bs;
} NbdClientSession;
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export_name);
void nbd_client_session_close(NbdClientSession *client);
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors);
int nbd_client_session_co_flush(NbdClientSession *client);
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
void nbd_client_session_detach_aio_context(NbdClientSession *client);
void nbd_client_session_attach_aio_context(NbdClientSession *client,
AioContext *new_context);
#endif /* NBD_CLIENT_H */
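The send_mutex/free_sema/recv_coroutine trio appears to implement a small multiplexer: up to MAX_NBD_REQUESTS requests can be in flight at once, each parked in one slot until its reply is matched back to it. A toy, QEMU-free sketch of that slot bookkeeping; every name below except MAX_NBD_REQUESTS is hypothetical, and the real code presumably queues waiting coroutines on free_sema instead of failing.

#include <stddef.h>

#define MAX_NBD_REQUESTS 16

struct request_slots {
    void *waiter[MAX_NBD_REQUESTS];   /* stand-in for Coroutine * */
    int in_flight;
};

/* Claim a free slot for a new request; the slot index is what the reply
 * handle would later map back to.  Returns -1 when all slots are busy. */
static int slot_acquire(struct request_slots *s, void *waiter)
{
    if (s->in_flight >= MAX_NBD_REQUESTS) {
        return -1;
    }
    for (int i = 0; i < MAX_NBD_REQUESTS; i++) {
        if (!s->waiter[i]) {
            s->waiter[i] = waiter;
            s->in_flight++;
            return i;
        }
    }
    return -1;
}

/* Release a slot once its reply has been consumed. */
static void slot_release(struct request_slots *s, int i)
{
    s->waiter[i] = NULL;
    s->in_flight--;
}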

@@ -26,421 +26,168 @@
* THE SOFTWARE.
*/
#include "block/nbd-client.h"
#include "qemu/uri.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/sockets.h"
#include "qapi/qmp/qdict.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qint.h"
#include "qapi/qmp/qstring.h"
#include "qemu-common.h"
#include "nbd.h"
#include "module.h"
#include <sys/types.h>
#include <unistd.h>
#define EN_OPTSTR ":exportname="
typedef struct BDRVNBDState {
NbdClientSession client;
QemuOpts *socket_opts;
int sock;
off_t size;
size_t blocksize;
} BDRVNBDState;
static int nbd_parse_uri(const char *filename, QDict *options)
static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
{
URI *uri;
const char *p;
QueryParams *qp = NULL;
int ret = 0;
bool is_unix;
uri = uri_parse(filename);
if (!uri) {
return -EINVAL;
}
/* transport */
if (!strcmp(uri->scheme, "nbd")) {
is_unix = false;
} else if (!strcmp(uri->scheme, "nbd+tcp")) {
is_unix = false;
} else if (!strcmp(uri->scheme, "nbd+unix")) {
is_unix = true;
} else {
ret = -EINVAL;
goto out;
}
p = uri->path ? uri->path : "/";
p += strspn(p, "/");
if (p[0]) {
qdict_put(options, "export", qstring_from_str(p));
}
qp = query_params_parse(uri->query);
if (qp->n > 1 || (is_unix && !qp->n) || (!is_unix && qp->n)) {
ret = -EINVAL;
goto out;
}
if (is_unix) {
/* nbd+unix:///export?socket=path */
if (uri->server || uri->port || strcmp(qp->p[0].name, "socket")) {
ret = -EINVAL;
goto out;
}
qdict_put(options, "path", qstring_from_str(qp->p[0].value));
} else {
QString *host;
/* nbd[+tcp]://host[:port]/export */
if (!uri->server) {
ret = -EINVAL;
goto out;
}
/* strip braces from literal IPv6 address */
if (uri->server[0] == '[') {
host = qstring_from_substr(uri->server, 1,
strlen(uri->server) - 2);
} else {
host = qstring_from_str(uri->server);
}
qdict_put(options, "host", host);
if (uri->port) {
char* port_str = g_strdup_printf("%d", uri->port);
qdict_put(options, "port", qstring_from_str(port_str));
g_free(port_str);
}
}
out:
if (qp) {
query_params_free(qp);
}
uri_free(uri);
return ret;
}
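For reference, the three URI shapes handled above and the option keys they populate (host names and paths below are made up for the example):

/*
 *   nbd://example.org:10809/export1      -> host=example.org, port=10809,
 *                                           export=export1
 *   nbd+tcp://[::1]/export2              -> host=::1 (braces stripped),
 *                                           export=export2
 *   nbd+unix:///export3?socket=/tmp/nbd  -> path=/tmp/nbd, export=export3
 */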
static void nbd_parse_filename(const char *filename, QDict *options,
Error **errp)
{
char *file;
char *export_name;
const char *host_spec;
BDRVNBDState *s = bs->opaque;
const char *host;
const char *unixpath;
if (qdict_haskey(options, "host")
|| qdict_haskey(options, "port")
|| qdict_haskey(options, "path"))
{
error_setg(errp, "host/port/path and a file name may not be specified "
"at the same time");
return;
}
if (strstr(filename, "://")) {
int ret = nbd_parse_uri(filename, options);
if (ret < 0) {
error_setg(errp, "No valid URL specified");
}
return;
}
file = g_strdup(filename);
export_name = strstr(file, EN_OPTSTR);
if (export_name) {
if (export_name[strlen(EN_OPTSTR)] == 0) {
goto out;
}
export_name[0] = 0; /* truncate 'file' */
export_name += strlen(EN_OPTSTR);
qdict_put(options, "export", qstring_from_str(export_name));
}
/* extract the host_spec - fail if it's not nbd:... */
if (!strstart(file, "nbd:", &host_spec)) {
error_setg(errp, "File name string for NBD must start with 'nbd:'");
goto out;
}
if (!*host_spec) {
goto out;
}
/* are we a UNIX or TCP socket? */
if (strstart(host_spec, "unix:", &unixpath)) {
qdict_put(options, "path", qstring_from_str(unixpath));
} else {
InetSocketAddress *addr = NULL;
addr = inet_parse(host_spec, errp);
if (!addr) {
goto out;
}
qdict_put(options, "host", qstring_from_str(addr->host));
qdict_put(options, "port", qstring_from_str(addr->port));
qapi_free_InetSocketAddress(addr);
}
out:
g_free(file);
}
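The legacy "nbd:" filename syntax handled above maps onto the same option keys; a few illustrative spellings (all values are placeholders):

/*
 *   nbd:example.org:10809                      -> host=example.org, port=10809
 *   nbd:example.org:10809:exportname=export1   -> host, port, export=export1
 *   nbd:unix:/tmp/nbd.sock:exportname=export1  -> path=/tmp/nbd.sock,
 *                                                 export=export1
 */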
static void nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
{
Error *local_err = NULL;
if (qdict_haskey(options, "path") == qdict_haskey(options, "host")) {
if (qdict_haskey(options, "path")) {
error_setg(errp, "path and host may not be used at the same time.");
} else {
error_setg(errp, "one of path and host must be specified.");
}
return;
}
s->client.is_unix = qdict_haskey(options, "path");
s->socket_opts = qemu_opts_create(&socket_optslist, NULL, 0,
&error_abort);
qemu_opts_absorb_qdict(s->socket_opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
if (!qemu_opt_get(s->socket_opts, "port")) {
qemu_opt_set_number(s->socket_opts, "port", NBD_DEFAULT_PORT);
}
*export = g_strdup(qdict_get_try_str(options, "export"));
if (*export) {
qdict_del(options, "export");
}
}
static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
{
BDRVNBDState *s = bs->opaque;
int sock;
off_t size;
size_t blocksize;
int ret;
if (s->client.is_unix) {
sock = unix_connect_opts(s->socket_opts, errp, NULL, NULL);
} else {
sock = inet_connect_opts(s->socket_opts, errp, NULL, NULL);
if (sock >= 0) {
socket_set_nodelay(sock);
}
}
/* Failed to establish connection */
if (sock < 0) {
logout("Failed to establish connection to NBD server\n");
return -errno;
}
return sock;
}
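inet_connect_opts() plus socket_set_nodelay() amount to an ordinary TCP connect with Nagle disabled, which avoids coalescing delays on NBD's small request/reply headers. A self-contained sketch with plain BSD sockets, not the QEMU helpers:

#include <netdb.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to host:port over TCP and set TCP_NODELAY; returns the socket
 * fd or -1 on failure. */
static int tcp_connect_nodelay(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int sock = -1, one = 1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0) {
        return -1;
    }
    for (ai = res; ai; ai = ai->ai_next) {
        sock = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (sock < 0) {
            continue;
        }
        if (connect(sock, ai->ai_addr, ai->ai_addrlen) == 0) {
            setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
            break;
        }
        close(sock);
        sock = -1;
    }
    freeaddrinfo(res);
    return sock;
}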
static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVNBDState *s = bs->opaque;
char *export = NULL;
int result, sock;
Error *local_err = NULL;
/* Pop the config into our state object. Exit if invalid. */
nbd_config(s, options, &export, &local_err);
if (local_err) {
error_propagate(errp, local_err);
if (!strstart(filename, "nbd:", &host))
return -EINVAL;
if (strstart(host, "unix:", &unixpath)) {
if (unixpath[0] != '/')
return -EINVAL;
sock = unix_socket_outgoing(unixpath);
} else {
uint16_t port;
char *p, *r;
char hostname[128];
pstrcpy(hostname, 128, host);
p = strchr(hostname, ':');
if (p == NULL)
return -EINVAL;
*p = '\0';
p++;
port = strtol(p, &r, 0);
if (r == p)
return -EINVAL;
sock = tcp_socket_outgoing(hostname, port);
}
/* establish TCP connection, return error if it fails
* TODO: Configurable retry-until-timeout behaviour.
*/
sock = nbd_establish_connection(bs, errp);
if (sock < 0) {
return sock;
}
if (sock == -1)
return -errno;
/* NBD handshake */
result = nbd_client_session_init(&s->client, bs, sock, export);
g_free(export);
return result;
ret = nbd_receive_negotiate(sock, &size, &blocksize);
if (ret == -1)
return -errno;
s->sock = sock;
s->size = size;
s->blocksize = blocksize;
return 0;
}
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static int nbd_read(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
return nbd_client_session_co_readv(&s->client, sector_num,
nb_sectors, qiov);
request.type = NBD_CMD_READ;
request.handle = (uint64_t)(intptr_t)bs;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
if (nbd_send_request(s->sock, &request) == -1)
return -errno;
if (nbd_receive_reply(s->sock, &reply) == -1)
return -errno;
if (reply.error != 0)
return -reply.error;
if (reply.handle != request.handle)
return -EIO;
if (nbd_wr_sync(s->sock, buf, request.len, 1) != request.len)
return -EIO;
return 0;
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
static int nbd_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
return nbd_client_session_co_writev(&s->client, sector_num,
nb_sectors, qiov);
}
request.type = NBD_CMD_WRITE;
request.handle = (uint64_t)(intptr_t)bs;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
static int nbd_co_flush(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
if (nbd_send_request(s->sock, &request) == -1)
return -errno;
return nbd_client_session_co_flush(&s->client);
}
if (nbd_wr_sync(s->sock, (uint8_t*)buf, request.len, 0) != request.len)
return -EIO;
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
if (nbd_receive_reply(s->sock, &reply) == -1)
return -errno;
return nbd_client_session_co_discard(&s->client, sector_num,
nb_sectors);
if (reply.error != 0)
return -reply.error;
if (reply.handle != request.handle)
return -EIO;
return 0;
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
qemu_opts_del(s->socket_opts);
nbd_client_session_close(&s->client);
request.type = NBD_CMD_DISC;
request.handle = (uint64_t)(intptr_t)bs;
request.from = 0;
request.len = 0;
nbd_send_request(s->sock, &request);
close(s->sock);
}
static int64_t nbd_getlength(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
return s->client.size;
}
static void nbd_detach_aio_context(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
nbd_client_session_detach_aio_context(&s->client);
}
static void nbd_attach_aio_context(BlockDriverState *bs,
AioContext *new_context)
{
BDRVNBDState *s = bs->opaque;
nbd_client_session_attach_aio_context(&s->client, new_context);
}
static void nbd_refresh_filename(BlockDriverState *bs)
{
QDict *opts = qdict_new();
const char *path = qdict_get_try_str(bs->options, "path");
const char *host = qdict_get_try_str(bs->options, "host");
const char *port = qdict_get_try_str(bs->options, "port");
const char *export = qdict_get_try_str(bs->options, "export");
qdict_put_obj(opts, "driver", QOBJECT(qstring_from_str("nbd")));
if (path && export) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd+unix:///%s?socket=%s", export, path);
} else if (path && !export) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd+unix://?socket=%s", path);
} else if (!path && export && port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s:%s/%s", host, port, export);
} else if (!path && export && !port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s/%s", host, export);
} else if (!path && !export && port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s:%s", host, port);
} else if (!path && !export && !port) {
snprintf(bs->exact_filename, sizeof(bs->exact_filename),
"nbd://%s", host);
}
if (path) {
qdict_put_obj(opts, "path", QOBJECT(qstring_from_str(path)));
} else if (port) {
qdict_put_obj(opts, "host", QOBJECT(qstring_from_str(host)));
qdict_put_obj(opts, "port", QOBJECT(qstring_from_str(port)));
} else {
qdict_put_obj(opts, "host", QOBJECT(qstring_from_str(host)));
}
if (export) {
qdict_put_obj(opts, "export", QOBJECT(qstring_from_str(export)));
}
bs->full_open_options = opts;
return s->size;
}
static BlockDriver bdrv_nbd = {
.format_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_file_open = nbd_open,
.bdrv_read = nbd_read,
.bdrv_write = nbd_write,
.bdrv_close = nbd_close,
.bdrv_getlength = nbd_getlength,
.protocol_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
.bdrv_refresh_filename = nbd_refresh_filename,
};
static BlockDriver bdrv_nbd_tcp = {
.format_name = "nbd",
.protocol_name = "nbd+tcp",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
.bdrv_refresh_filename = nbd_refresh_filename,
};
static BlockDriver bdrv_nbd_unix = {
.format_name = "nbd",
.protocol_name = "nbd+unix",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
.bdrv_close = nbd_close,
.bdrv_co_flush_to_os = nbd_co_flush,
.bdrv_co_discard = nbd_co_discard,
.bdrv_getlength = nbd_getlength,
.bdrv_detach_aio_context = nbd_detach_aio_context,
.bdrv_attach_aio_context = nbd_attach_aio_context,
.bdrv_refresh_filename = nbd_refresh_filename,
};
static void bdrv_nbd_init(void)
{
bdrv_register(&bdrv_nbd);
bdrv_register(&bdrv_nbd_tcp);
bdrv_register(&bdrv_nbd_unix);
}
block_init(bdrv_nbd_init);
